<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        <title>
            <![CDATA[ Code Design ]]>
        </title>
        <description>
            <![CDATA[ Drive Predictability Through Software Design ]]>
        </description>
        <link>https://oncodedesign.com</link>
        <image>
            <url>https://oncodedesign.com/favicon.png</url>
            <title>Code Design</title>
            <link>https://oncodedesign.com</link>
        </image>
        <lastBuildDate>Wed, 22 Apr 2026 01:19:24 +0300</lastBuildDate>
        <atom:link href="https://oncodedesign.com" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        <item>
            <title>
                <![CDATA[ Code Design for Predictability: Why You Should Hide the Frameworks ]]>
            </title>
            <description>
<![CDATA[ Software projects rarely collapse because of a lack of talent. They collapse because complexity grows faster than it is controlled.

When technical failure happens, the root cause is often unmanaged complexity.

You have seen it:
- Small changes become expensive.
- Adding developers does not increase speed.
- Deadlines move again ]]>
            </description>
            <link>https://oncodedesign.com/blog/code-design-for-predictability-why-you-should-hide-the-frameworks/</link>
            <guid isPermaLink="false">699c6dbd9d17d000019b1751</guid>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 02 Mar 2026 07:22:39 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2026/02/KD2602---Control-Complexity-and-Size-blog.png" medium="image" />
            <content:encoded>
<![CDATA[ <p>Software projects rarely collapse because of a lack of talent. They collapse because complexity grows faster than it is controlled.</p><p><strong><em>When technical failure happens, the root cause is often unmanaged complexity.</em></strong></p><p>You have seen it:<br> - Small changes become expensive.<br> - Adding developers does not increase speed.<br> - Deadlines move again and again.<br> - The system becomes difficult to reason about.</p><p>In some organizations, four engineers deliver in two years what thirty could not deliver in six. The difference is not effort. It is controlled structure.</p><p>Complexity does not disappear on its own. It must be kept under control by design.</p><hr><h2 id="structure-creates-predictability">Structure Creates Predictability</h2><p>The primary mechanism for controlling complexity is structure.</p><p>A system that follows consistent patterns becomes understandable. When code design enforces structure, change becomes localized. Impact becomes predictable.</p><p>Well-defined structure limits the surface area of change. It encapsulates complexity instead of letting it spread across the system.</p><p>When constraints are built into the codebase, patterns emerge naturally. Those patterns reinforce maintainability, reuse, and long-term stability.</p><p>Predictability is not an accident. 
It is the outcome of disciplined Code Design.</p><p>In my work, I rely on three techniques to enforce structure and predictability:</p><ul><li>Hide the Frameworks</li><li>Depend on Contracts, not Implementations</li><li>Apply Consistent Design Patterns across Services</li></ul><p>This article focuses on the first: hiding the frameworks.</p><h2 id="hide-the-frameworks">Hide the Frameworks</h2><p>Hiding frameworks behind application-specific abstractions is one of the most powerful Code Design techniques for long-term scalability.</p><p>It strengthens structure.<br>It improves predictability.<br>It protects your Infrastructure from uncontrolled drift.</p><p>There is an inherent conflict in software reuse.</p><p>Framework providers design generic APIs that fit as many use cases as possible. Broader adoption means greater success.</p><p>Application teams, on the other hand, need APIs tailored to their specific domain and delivery model. They care about their product, not universal flexibility.</p><p>By placing your own abstractions between the application code and external libraries, you resolve this tension.</p><p>Your application no longer depends directly on ASP.NET, Entity Framework, RabbitMQ, or a DI container. Instead, it depends on interfaces designed specifically for your domain.</p><p>This gives you control over:</p><ul><li>The allowed patterns</li><li>The constraints</li><li>The conventions</li><li>The architectural rules</li></ul><p>Instead of allowing each developer to use a framework differently, you enforce consistent usage patterns across the entire system.</p><p>Framework changes remain isolated. 
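</p><p>As a rough sketch of the idea (the names here, such as <code>EventPublisher</code> and <code>RawBrokerClient</code>, are invented for illustration and do not come from any specific framework), the application owns the contract and the framework hides behind a single adapter:</p>

```typescript
// Hypothetical sketch: the application owns this contract.
interface DomainEvent { readonly name: string; }

class OrderPlaced implements DomainEvent {
  readonly name = "OrderPlaced";
  constructor(public readonly orderId: string) {}
}

// The only abstraction feature code is allowed to see.
interface EventPublisher {
  publish(event: DomainEvent): Promise<void>;
}

// Stand-in for a raw framework client (e.g. a broker SDK).
class RawBrokerClient {
  sent: string[] = [];
  async send(topic: string, payload: string): Promise<void> {
    this.sent.push(`${topic}:${payload}`);
  }
}

// The adapter decides topic naming and serialization once, for everyone.
class BrokerEventPublisher implements EventPublisher {
  constructor(private client: RawBrokerClient) {}
  async publish(event: DomainEvent): Promise<void> {
    await this.client.send(`events.${event.name}`, JSON.stringify(event));
  }
}
```

<p>Feature code calls only <code>EventPublisher</code>; swapping the broker, or changing topic conventions, touches one class.</p><p>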
Volatility does not propagate through your codebase.</p><p>You move from framework-driven development to structure-driven Code Design.</p><h2 id="why-this-matters-even-more-with-agentic-ai">Why This Matters Even More with Agentic AI</h2><p>This approach becomes even more important when adopting Agentic AI in your development workflow.</p><p>AI agents generate code based on patterns and instructions. If your codebase exposes raw framework APIs everywhere, agents will replicate inconsistency.</p><p>When frameworks are hidden behind structured Infrastructure and clear contracts, you give agents:</p><ul><li>A constrained surface area</li><li>Clear rules to follow</li><li>Stable abstractions</li><li>Explicit architectural boundaries</li></ul><p>These become enforceable instructions inside your agent definitions.</p><p>Instead of generating ad-hoc framework calls, agents generate code aligned with your architectural patterns.</p><p>Structure enables feedback.<br>Infrastructure enables guardrails.<br>Predictability becomes systematic.</p><p>Without structure, Agentic AI amplifies chaos.<br>With structure, it amplifies disciplined Code Design.</p><h2 id="abstract-the-data-access">Abstract the Data Access</h2><p>Consider data access with Entity Framework.</p><p>EF supports multiple usage patterns: stateless vs stateful contexts, read-only optimizations, change tracking, concurrency strategies, and more.</p><p>Different scenarios require different patterns.</p><p>If developers use EF directly inside use cases, inconsistency emerges quickly. Query logic spreads. Data access rules diverge. Cross-cutting concerns multiply.</p><p>Instead, define your own repository abstractions. 
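</p><p>A minimal sketch of such an abstraction (the <code>Repository</code> and <code>InMemoryCustomerRepository</code> names are hypothetical; a real implementation would wrap the ORM):</p>

```typescript
// Hypothetical sketch: a domain-owned repository contract.
interface Customer { id: string; name: string; }

// Read-only and read-write surfaces are separated on purpose.
interface ReadOnlyRepository<T> {
  getById(id: string): T | undefined;
}
interface Repository<T> extends ReadOnlyRepository<T> {
  add(entity: T): void;
}

// In-memory stand-in for the ORM-backed implementation.
// Cross-cutting concerns (tenant filters, auditing) would hook in here, once.
class InMemoryCustomerRepository implements Repository<Customer> {
  private rows = new Map<string, Customer>();
  add(entity: Customer): void {
    this.rows.set(entity.id, { ...entity });
  }
  getById(id: string): Customer | undefined {
    return this.rows.get(id);
  }
}
```

<p>Use cases depend only on the interfaces, so auditing, authorization, or tenant filtering can later be added inside the implementation without touching them.</p><p>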
Implement them using EF internally.</p><p>Now:</p><ul><li>All LINQ queries flow through a controlled Infrastructure layer</li><li>Read-only and read-write patterns are clearly separated</li><li>Concurrency strategies are standardized</li><li>Data access patterns are consistent across services</li></ul><p>This centralization enables extensibility without rewriting business logic.</p><p>Need multi-tenancy with a TenantID discriminator?<br>Extend the Infrastructure layer to append tenant filters automatically.</p><p>Need auditing, authorization, or logging?<br>Implement them once inside the data access Infrastructure.</p><p>The use cases remain untouched.</p><p>This is scalable Code Design.<br>This is structured Infrastructure.</p><h2 id="abstract-the-messaging">Abstract the Messaging</h2><p>Messaging systems such as RabbitMQ or Kafka introduce their own complexity.</p><p>Concepts like exchanges, bindings, queues, routing keys, acknowledgments, and configuration strategies are powerful but generic.</p><p>They are designed for a wide range of use cases:</p><ul><li>Pub-sub</li><li>Fire-and-forget commands</li><li>Request-reply</li><li>High throughput</li><li>High reliability</li><li>Distributed scale</li></ul><p>But your system likely uses only a subset of these patterns.</p><p>Instead of exposing raw messaging APIs to application code, define your own abstractions:</p><ul><li>PublishEvent</li><li>SendCommand</li><li>SubscribeToEvent</li><li>HandleCommand</li></ul><p>Behind those abstractions, configure the message bus consistently.</p><p>Now:</p><ul><li>All services follow the same messaging patterns</li><li>Reliability and throughput strategies are standardized</li><li>Infrastructure configuration is centralized</li><li>Cross-team predictability increases</li></ul><p>Messaging becomes part of your Infrastructure, not a distributed design decision.</p><p>Patterns become explicit.<br>Structure becomes enforced.</p><p>This is critical when scaling delivery teams.</p><h2 
id="application-infrastructure">Application Infrastructure</h2><p>All these abstractions form what I call Application Infrastructure.</p><p>Application Infrastructure is a structured layer that sits between:</p><ul><li>Application use cases</li><li>External frameworks and libraries</li></ul><p>It is feature-agnostic.<br>It does not implement business logic.<br>It defines how business logic interacts with technical concerns.</p><p>This Infrastructure layer establishes:</p><ul><li>Conventions</li><li>Constraints</li><li>Patterns</li><li>Architectural guardrails</li></ul><p>It shapes how Code Design is executed across the system.</p><p>When done correctly, Infrastructure becomes the foundation of predictability.</p><p>Developers follow established patterns instead of inventing new ones.<br>Architectural decisions are embedded in code structure.<br>AI agents operate within clearly defined boundaries.</p><p>This is how structure scales.</p><hr><h2 id="key-takeaways">Key Takeaways</h2><p>Hiding frameworks is not about rejecting tools. It is about reclaiming control over them.</p><p>By introducing structured Infrastructure between your application and external libraries, you:</p><ul><li>Reduce uncontrolled complexity</li><li>Enforce consistent patterns</li><li>Improve long-term maintainability</li><li>Increase delivery predictability</li></ul><p>In an Agentic AI era, structured Code Design becomes even more critical. Clear abstractions and consistent patterns give agents the boundaries they need to generate aligned, reliable output.</p><p>Complex systems require structure.<br>Structure enables predictability.<br>Predictability enables scalable delivery.</p><p>That is the real value of hiding the frameworks.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Designing Resilient Software Systems for the Energy Sector ]]>
            </title>
            <description>
                <![CDATA[ Resilience is the primary requirement for software systems in the energy sector. These systems support critical operations such as grid balancing, energy trading, risk management, and asset management. Failure is not an option.

Energy systems must operate continuously. Orders must always be processed. Data loss is unacceptable. Business processes must ]]>
            </description>
            <link>https://oncodedesign.com/blog/designing-resilient-software-systems-for-the-energy-sector/</link>
            <guid isPermaLink="false">697773d23ddf9100011b08ad</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 02 Feb 2026 08:15:04 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2026/01/KD2601---Designing-Resilient-Systems-in-Energy-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>Resilience is the primary requirement for software systems in the energy sector. These systems support critical operations such as grid balancing, energy trading, risk management, and asset management. Failure is not an option.</p><p>Energy systems must operate continuously. Orders must always be processed. Data loss is unacceptable. Business processes must remain visible and auditable at all times.</p><p>Most of these systems operate in heavily regulated environments and integrate directly with critical infrastructure. As a result, they are often deployed in private clouds or on-premises data centers, where reliability, security, and compliance take priority over raw scalability.</p><hr><h2 id="architectural-trade-offs-in-energy-systems">Architectural Trade-offs in Energy Systems</h2><p>Designing resilient systems is largely about making the right trade-offs.</p><p>In the energy domain, Reliability and Consistency are usually prioritized over performance metrics such as latency or throughput. This often means:</p><ul><li>Accepting higher latency to support retries, timeouts, and fault tolerance</li><li>Investing in redundant infrastructure, geographic replication, and failover mechanisms</li><li>Adding architectural complexity through circuit breakers, health checks, and recovery logic</li><li>Choosing strong consistency models, even if they introduce temporary unavailability</li></ul><p>Applying these constraints uniformly across all services would lead to systems that are slow, rigid, and expensive to evolve. That approach does not scale well over the Long-Term.</p><p>A more effective strategy is to clearly separate responsibilities. Some services are designed as the system’s source of truth, optimized for reliability and consistency. 
Others prioritize responsiveness and usability, accepting eventual consistency to improve user experience and integration performance.</p><p>This balance is essential for Predictability, both technically and financially.</p><h2 id="structuring-systems-through-service-categories">Structuring Systems Through Service Categories</h2><p>A key principle in Code Design is reducing complexity through clear structure. Categorizing services helps enforce architectural boundaries, encourages reuse, and simplifies decision-making.</p><p>Common service categories include:</p><p><strong>Core Services</strong><br>These services focus on Reliability and Consistency. They typically rely on:</p><ul><li>Relational databases</li><li>Reliable, durable messaging</li><li>Workflow engines for resilient execution of business processes</li></ul><p>They represent the authoritative state of the system and are critical for regulatory and operational correctness.</p><p><strong>Support Services</strong><br>These services are optimized for availability and responsiveness. 
They often use:</p><ul><li>NoSQL databases</li><li>Event-driven communication or lightweight messaging</li><li>Data models optimized for fast reads or high-volume ingestion, even if data is not always fully up to date</li></ul><p>Their role is to shield users and external systems from the complexity and latency of core services.</p><p><strong>Integration Gateways</strong><br>Integration services handle communication with external parties such as Power Exchanges, TSOs, or asset control systems.<br>Their challenges include:</p><ul><li>Rate limiting and security</li><li>Retries, circuit breakers, and fault isolation</li><li>Protocol mismatches, data model differences, and data quality issues</li><li>Monitoring and observability</li></ul><p>Explicitly isolating these concerns prevents external complexity from leaking into the core system.</p><h2 id="redundancy-as-the-foundation-of-resilience">Redundancy as the Foundation of Resilience</h2><p>Resilience is achieved through redundancy at both the compute and storage levels.</p><p>With <strong>redundant compute</strong>, each service runs multiple instances simultaneously. If one instance fails, others continue processing requests or handling workloads. Load balancers and controllers that automatically restart failed instances are essential for maintaining availability.</p><p>With <strong>redundant storage</strong>, data is replicated across disks, nodes, or locations. This protects against hardware failures and ensures that data remains accessible even when components fail. 
The same guarantees must apply not only to databases, but also to queues and long-lived caches.</p><p>Redundancy is a direct investment in Predictability and delivery On Time and On Budget.</p><h2 id="kubernetes-as-a-platform-for-resilient-systems">Kubernetes as a Platform for Resilient Systems</h2><p>Container orchestration is the standard approach for managing resilient compute and data access services.</p><p>Kubernetes has become a strong choice due to its mature ecosystem, broad adoption, and vendor neutrality. It allows teams to avoid lock-in while deploying consistently across public clouds, private clouds, or on-premises environments.</p><p>Kubernetes also supports geo-distributed deployments. Running clusters across multiple data centers enables traffic routing to healthy locations in case of regional failures, supporting high availability and disaster recovery requirements.</p><h2 id="reliable-messaging-in-distributed-architectures">Reliable Messaging in Distributed Architectures</h2><p>Asynchronous communication is fundamental for building scalable and resilient distributed systems.</p><p>When resilience is the goal, reliable messaging ensures that communication between services remains durable and correct, even in the presence of failures. This typically involves:</p><ul><li>Persistent queues</li><li>Explicit acknowledgments</li><li>Retry and delay mechanisms</li><li>Guarantees around message delivery</li></ul><p>RabbitMQ is one common solution, particularly in environments where cloud-managed messaging services are not an option. Running messaging infrastructure inside Kubernetes can offer operational consistency across deployment models.</p><p>However, message brokers introduce significant complexity. They come with their own concepts, failure modes, and tuning requirements. 
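</p><p>The core of what such reliability machinery standardizes can be sketched in a few lines (an invented <code>DurableQueue</code>; real brokers handle persistence and acknowledgments for you, but the redelivery contract looks similar):</p>

```typescript
// Hypothetical sketch of at-least-once delivery with explicit acks.
type Handler = (msg: string) => boolean; // true = acknowledge

class DurableQueue {
  private pending: string[] = [];
  enqueue(msg: string): void { this.pending.push(msg); }

  // Redelivers each message until the handler acknowledges it
  // or attempts run out; unacked messages are parked, never lost.
  drain(handler: Handler, maxAttempts = 3): string[] {
    const deadLetters: string[] = [];
    for (const msg of this.pending) {
      let acked = false;
      for (let attempt = 0; attempt < maxAttempts && !acked; attempt++) {
        acked = handler(msg);
      }
      if (!acked) deadLetters.push(msg);
    }
    this.pending = [];
    return deadLetters;
  }
}
```

<p>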
Encapsulating this complexity behind a dedicated messaging component is critical to keep the overall system maintainable over the Long-Term.</p><h2 id="workflow-engines-for-long-running-business-processes">Workflow Engines for Long-Running Business Processes</h2><p>Energy systems are driven by time-bound and stateful business processes. Examples include:</p><ul><li>Submitting bids by fixed deadlines</li><li>Sending production or consumption plans</li><li>Executing activation orders</li><li>Validating and settling imbalance costs</li></ul><p>These processes often span hours or days, involve external systems or user interaction, and must complete reliably despite failures.</p><p>Implementing such workflows directly in application code is risky. Process state would be lost on crashes and could not resume on another node. A workflow engine is required to persist state, handle retries, and provide visibility into execution progress.</p><p>There are off-the-shelf solutions such as Temporal or Azure Durable Functions. In some cases, a custom-built workflow engine is a better fit, as it can focus on the exact needs of the domain while reducing operational overhead.</p><hr><h2 id="final-thoughts">Final Thoughts</h2><p>Resilient software in the energy sector is not the result of isolated technical choices. It is the outcome of deliberate Code Design, clear architectural boundaries, and a constant focus on Predictability.</p><p>By structuring systems around service categories, embracing redundancy, and investing in reliable messaging and workflow execution, organizations can build platforms that evolve safely over the Long-Term and consistently deliver On Time and On Budget.</p><p>This is what enables energy systems to remain trustworthy, adaptable, and ready for the future.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ How Delivery Architecture Enables Predictable Software Projects ]]>
            </title>
            <description>
                <![CDATA[ Learn how to design software projects for predictable delivery using Delivery Architecture. Align Code Design, System Design, and Project Design to control complexity, reduce risk, and deliver on time and on budget. ]]>
            </description>
            <link>https://oncodedesign.com/blog/how-delivery-architecture-enables-predictable-software-projects/</link>
            <guid isPermaLink="false">693fde96f131c3000170bc36</guid>
            <category>
                <![CDATA[ DeliveryArchitecture ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 18 Dec 2025 12:25:03 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/12/KD0507---Predictable-Delivery-in-Software-Projects-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>In software development, predictable delivery—hitting deadlines and staying within budget—is often seen as an exception rather than the rule. But it doesn’t have to be that way. The key lies in a deliberate <em>Project Design</em> approach that combines engineering discipline with architectural foresight.</p><p>This article outlines the principles and techniques I apply when building predictability into complex software projects. These are part of a broader <em>Delivery Architecture</em> mindset, where <em>Code Design</em> and <em>System Design</em> align with project execution to reduce uncertainty and increase control.</p><hr><h2 id="why-predictable-delivery-is-hard">Why Predictable Delivery Is Hard</h2><p>Software systems are inherently complex. Requirements shift, understanding deepens as work progresses, and visibility into progress can be limited. Many teams plan based on gut feeling or past experience, which breaks down quickly in larger or evolving projects.</p><p>Predictable outcomes don’t come from hope—they come from systems thinking and structured design. Just as you wouldn’t construct a building without blueprints, you shouldn’t start delivery without an architecture that supports it.</p><hr><h2 id="key-practices-for-predictable-software-delivery">Key Practices for Predictable Software Delivery</h2><p>Below are the core practices that I use to bring predictability into the delivery process, grounded in both <em>Code Design</em> and <em>Project Design</em> principles.</p><h3 id="1-map-work-as-a-dependency-network">1. Map Work as a Dependency Network</h3><p>Project activities should never be treated as a flat backlog. Each task interacts with others—some are prerequisites, others rely on shared resources or outputs. 
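</p><p>To make the idea concrete, here is a tiny, hypothetical sketch of a critical-path calculation over such a network (activity names and durations are invented):</p>

```typescript
// Hypothetical sketch: longest path (critical path) through an activity network.
// Durations are in days; dependencies must appear before their dependents
// in the input (i.e. the list is topologically ordered).
interface Activity { name: string; days: number; deps: string[]; }

function criticalPathLength(activities: Activity[]): number {
  const finish = new Map<string, number>(); // earliest finish per activity
  for (const a of activities) {
    const start = Math.max(0, ...a.deps.map((d) => finish.get(d) ?? 0));
    finish.set(a.name, start + a.days);
  }
  return Math.max(0, ...finish.values());
}
```

<p>The longest dependency chain, not the sum of estimates, is what bounds the schedule.</p><p>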
The first step in a reliable <em>Delivery Architecture</em> is to model your project as a graph of dependent activities.</p><p>By using an Activities Network (or Dependency Graph), you gain visibility into how tasks relate. This enables float analysis and critical path estimation—tools that are essential for accurate forecasting and risk management.</p><p>Dependencies may be:</p><ul><li><strong>Technical:</strong> A feature needs a shared service or API.</li><li><strong>Resource-based:</strong> A specialist is needed in multiple places.</li><li><strong>Output-based:</strong> One component generates input for another.</li></ul><p>This model becomes the foundation of your project planning and informs how teams are aligned and work is scheduled.</p><h3 id="2-stable-architecture-enables-predictable-change">2. Stable Architecture Enables Predictable Change</h3><p>A resilient <em>System Design</em> should not shift every time new features are introduced. Changing the architecture midway through a project is expensive and disruptive. A good <em>Delivery Architecture</em> assumes the system will evolve functionally without needing to be structurally redesigned.</p><p>To achieve this, avoid feature-based decomposition. Instead, design around <em>volatility</em>. Encapsulate areas that are expected to change frequently, so that those changes don’t ripple through the entire system.</p><p>This kind of separation makes change measurable and impact easier to assess—one of the cornerstones of predictable delivery.</p><h3 id="3-build-in-stable-increments">3. Build in Stable Increments</h3><p>Software delivery should progress incrementally. Once a module, service, or layer is complete, it should be locked unless there's a clear justification to revisit it. 
Otherwise, the project gets trapped in a loop of rework, and forward momentum stalls.</p><p>In <em>Code Design</em>, this means being intentional about which parts of the system are stable foundations, and which are extensions or configurable layers.</p><p>The analogy with construction applies: you pour the foundation, then build the structure floor by floor. You don’t pour the foundation, finish the kitchen, tear it down to redo the plumbing, and then go back to the basement.</p><p>You can still stagger work for speed, but components must be layered properly. For example, in real-world construction, while upper levels are being poured, the lower ones are already being finished.</p><p>Design, however, is cheap to iterate—<em>before</em> coding begins. That’s why <em>System Design</em> must be done interactively and iteratively, allowing multiple passes to validate assumptions before implementation begins.</p><h3 id="4-don%E2%80%99t-accumulate-bugs">4. Don’t Accumulate Bugs</h3><p>Defects introduce chaos into schedules. A single unresolved bug can have cascading effects on other teams, timelines, and budgets.</p><p>The best way to control this uncertainty is to adopt a zero-tolerance policy for defects. Every bug is addressed immediately—not necessarily fixed right away, but fully understood and scoped. The unknown is the real problem, not the defect itself.</p><p>In a well-structured <em>Delivery Architecture</em>, bugs are treated as blockers to progress and eliminated as soon as they are found.</p><h3 id="5-use-estimate-ranges-not-wishful-thinking">5. Use Estimate Ranges, Not Wishful Thinking</h3><p>Estimates are necessary, but they should serve the plan—not the ego. The goal is to produce <strong>accurate</strong>, not overly precise, estimates.</p><p>Focus on activity duration buckets (5, 10, 15, 20 days). If something takes more than that, it should be broken down. 
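</p><p>As a hypothetical sketch, snapping raw estimates into those buckets is a one-line rule:</p>

```typescript
// Hypothetical sketch: snap a raw estimate to the smallest allowed bucket.
const BUCKETS = [5, 10, 15, 20]; // days

// Returns the smallest bucket that fits, or null when the activity
// exceeds the largest bucket and should be broken down instead.
function toBucket(estimateDays: number): number | null {
  const fit = BUCKETS.find((b) => estimateDays <= b);
  return fit ?? null;
}
```

<p>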
This allows for averaging effects, where some overruns are offset by faster-than-expected completions.</p><p>The structure of your activity network will ultimately shape your delivery timeline more than individual estimates. That’s why detailed network modeling is a critical part of <em>Project Design</em>.</p><h3 id="6-track-with-engineering-discipline">6. Track with Engineering Discipline</h3><p>Planning is only half the equation. Without tracking, predictability fades fast.</p><p>Use your activities network as a live reference. Update it based on progress. Recalculate float and critical paths weekly. Redefine “done” as “reviewed and usable by the next activity.”</p><p>This tracking method enables timely corrective actions and gives real-time insight into how far off-plan you are and why.</p><p>A <em>Delivery Architecture</em> approach treats the project like a system—observe, measure, and adjust continuously.</p><hr><h2 id="conclusion-design-execution-predictability">Conclusion: Design + Execution = Predictability</h2><p>Predictable delivery doesn’t come from luck or rigid processes. It comes from smart <em>System Design</em>, structured <em>Code Design</em>, and thoughtful <em>Project Design</em>—all aligned under a cohesive <em>Delivery Architecture</em>.</p><p>It also depends on tight collaboration between the architect and the project manager. The architect ensures the system is designed for change without disruption. The PM ensures the plan adapts to reality without losing control.</p><p>When design and execution stay in sync, delivery becomes something you can count on—not just hope for.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Coding with Copilot on Top of Application Infrastructure ]]>
            </title>
            <description>
                <![CDATA[ AI coding works best on top of strong Application Infrastructure. With clear structure, strict boundaries, and consistent design rules, Copilot and AI Agents generate cleaner, more predictable code. Architecture guides the AI, not the other way around. ]]>
            </description>
            <link>https://oncodedesign.com/blog/coding-with-copilot-on-top-of-application-infrastructure/</link>
            <guid isPermaLink="false">692026838cf440000114cd62</guid>
            <category>
                <![CDATA[ Application Infrastructure ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Fri, 21 Nov 2025 15:31:56 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/11/KD0507---AI-Coding-on-Top-of--Application-Infrastructure-blog-1.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>AI support in software development becomes truly effective when it operates inside a well-designed structure. When an AI Agent works on top of a solid Application Infrastructure, it benefits from the same clarity, consistency, and predictability that help developers move faster and make fewer mistakes.</p><p>A structured architecture removes ambiguity. It gives both humans and AI a clear model of what good code looks like. In my approach to Code Design, the Application Infrastructure defines strict boundaries and provides the building blocks that guide the implementation. These constraints make it easy to follow the intended Software Design and harder to introduce patterns that would break Clean Architecture principles.</p><p>Originally, this setup was meant to support developers. The goal was simple: let them focus on implementing features in the problem domain and avoid accidental complexity. But the same idea now applies to Copilot and Agentic AI as well. A strong architectural foundation doesn’t just help teams—it also helps AI generate better, more consistent code.</p><hr><h2 id="application-infrastructure">Application Infrastructure</h2><p>A good Application Infrastructure isn’t about business features. It’s about the technical substrate that keeps the system healthy through clear abstractions, predictable behavior, and enforced boundaries.</p><p>By hiding external frameworks behind project-specific interfaces, the infrastructure shapes how feature code is written. This creates a high level of Structure and Consistency, which directly supports maintainability. It also establishes conventions that show “how things are done here.”</p><p>These conventions translate well into AI instructions. 
Copilot can follow rules, recognize patterns, and replicate examples with surprising accuracy—especially when the architectural boundaries are explicit.</p><p>Two main goals drive the design of the infrastructure:</p><ul><li><strong>Establish a consistent structure</strong> that guides how features are implemented</li><li><strong>Hide complexity</strong> behind stable abstractions</li></ul><p>Both goals help Copilot produce cleaner and more predictable code. A well-defined API gives it one correct way of completing a task. And a coding environment with fewer choices tends to reduce mistakes.</p><p>One rule remains critical: <strong>AI should not modify the infrastructure itself.</strong><br>That foundation defines the architecture. It sets the boundaries, maintains Clean Architecture principles, and keeps the system stable. Copilot should operate <em>on top</em> of it, not change it.</p><h2 id="copilot-for-feature-implementation">Copilot for Feature Implementation</h2><p>Feature code sits above the Application Infrastructure and depends on it. This is where Copilot shines. Most systems rely on a limited number of core use cases, and many features are just variations of those patterns. With clear constraints in place, an AI Agent can generate feature logic effectively and safely.</p><p>Why offload feature code to Copilot?</p><ul><li><strong>Feature code changes more often</strong> than the infrastructure. 
Using AI here reduces the cost of change.</li><li><strong>Rich context improves accuracy.</strong> Copilot can use user stories, acceptance criteria, and examples to guide generation.</li><li><strong>The layering enforces separation.</strong> Even if Copilot writes imperfect code, other modules remain unaffected.</li><li><strong>Feature code is less critical.</strong> Reliability and cross-cutting concerns are already handled elsewhere.</li><li><strong>Each feature has a bounded context.</strong> Rules, abstractions, and local conventions prevent architectural drift.</li></ul><p>This combination—strong boundaries and flexible feature logic—creates a productive environment where AI can accelerate development without compromising design quality.</p><hr><h2 id="experiment-on-workshop-labs">Experiment on Workshop Labs</h2><p>In my <em>Application Infrastructure for Clean Architecture</em> workshop, participants work through eight labs designed to show how the structure comes together. The same environment is a great testbed for AI.</p><p>The workshop repo includes foundational components such as <code>AppBoot</code> and <code>DataAccess</code>. 
They are production-ready building blocks that anyone can adopt, adapt, and test in their own projects.</p><p>For this experiment, I asked Copilot to complete the labs—or heavily assist in them.<br>The setup:</p><ul><li><strong>Visual Studio 2026</strong></li><li><strong>.NET 10</strong></li><li><strong>Agent Mode enabled</strong></li><li><strong>Models used:</strong> GPT-5 mini and Claude Sonnet 4.5</li><li><strong>One shared <code>copilot-instructions.md</code> file</strong></li></ul><h2 id="experiment-results">Experiment Results</h2><h3 id="what-copilot-handled-well">What Copilot handled well</h3><ul><li>Understood and respected the dependency rules defined in the infrastructure</li><li>Followed the structure and boundaries described in the instruction file</li><li>Performed well on repetitive or verbose tasks, such as user interaction or logging</li><li>Created new modules correctly when examples were available</li><li>Applied Dependency Inversion and layering principles correctly</li></ul><h3 id="where-copilot-struggled">Where Copilot struggled</h3><ul><li>Data access code required corrections—extra round trips or misuse of abstractions</li><li>DTO generation occasionally hallucinated</li><li>Did not improve or extend the infrastructure (as expected)</li><li>Sometimes produced overly verbose code, easy to clean up but not ideal</li></ul><p>Overall, Copilot performed best when the task lived inside clear boundaries. Predictability came from the architecture—not the model.</p><hr><h2 id="details">Details</h2><h3 id="instruction-file">Instruction File</h3><p>The experiment started by generating a <code>copilot-instructions.md</code> file.<br>There are more advanced ways to approach this—such as model-specific variants with an index—but I used a simpler method to move fast.</p><p>I generated a first draft with ChatGPT-5.1, providing it with my training materials and pointing it to the workshop repo. 
It produced a solid outline that captured the core ideas of the infrastructure. I refined it further to match the Labs repo and the demo Application Infrastructure.</p><p>This file became the backbone of the experiment, guiding both models through the architectural rules and conventions.</p><p><strong>Some sections to highlight:</strong></p><hr><p><strong>1) Folder &amp; Layering Rules (must follow)</strong></p>
<pre><code>repo-root/
├─ Infra/                         # Application Infrastructure (Application Framework, DataAccess, Logging, Messaging etc) 
│  ├─ AppBoot/                    # dependency injection, modules composition, app startup, plugins dynamic load
│  ├─ AppBoot.UnitTests/          # Unit tests for AppBoot
│  ├─ DataAccess/                 # Hides EF Core, IRepository and IUnitOfWork implementations
├─ Modules/                       # Functionalities grouped by domain (Sales, Notifications, Export etc).
│  ├─ Contracts/                  # Contracts shared between modules (e.g., Events, Messages, DTOs). No logic here!
│  ├─ Sales/                      # Sales module (example)
│  │  ├─ Sales.Services/          # Use-cases implementations, domain services.
│  │  ├─ Sales.DataModel/         # [Example] Entities, DTOs mapped to DB tables. No logic here! (no if, while, logical expressions etc.). NO reference to EF Core!
│  │  ├─ Sales.DbContext/         # [Example] EF DbContext for Sales module
│  │  └─ Sales.Console/           # [Optional] Console UI commands specific to sales module.
│  └─ Notifications/              # Notifications module (example)
│     └─ Notifications.Services/  # Use-cases implementations, domain services.
└─ UI/                            # User Interface layer / Clients   
   └─ ConsoleUi/                  # Console application (CLI)
</code></pre>
<p>I think this helped a lot, because I didn’t get any misplaced files.</p><hr><p><strong>2) Dependency boundaries</strong></p>
<pre><code>- `Infra/*` → implements ports for DB, messaging, HTTP, files, dymamic load of plugins; registers via DI; no domain logic.
- `Modules/Contracts` → **no** references to anything. Only pure DTOs and interfaces. No logic
- `Modules/*` → **no** references to other modules. Only references to **Contracts** and **Infra**.
- `Modules/*/*.DataModel` → **no** logic; only entities/DTOs; no references to EF Core or other frameworks.
- `Modules/*/*.Services` → references **Contracts** and **DataModel**; NO references to EF Core or other frameworks. Contains domain logic and use-cases.
- `UI/*` → references **Modules/Contracts** and **Infra**; NO references to **Modules/*/Services**. No domain logic.


&gt; **Copilot:** If a change violates these rules, raise an error instead of making the change.
</code></pre>
<p>The cheaper models like “GPT-5 mini” did not raise any errors, but they still followed the rules.</p><p>The more advanced models like “Claude Sonnet 4.5” refused to make changes that would violate them.</p><hr><p><strong>3) Registering in DI</strong></p>
<pre><code>- Use `ServiceAttribute` from `Infra/AppBoot` to register services in DI.
- The `ServiceAttribute` decorates the implementation class, specifying the service lifetime and the interface to register.
- Register only interfaces, not concrete classes.
- Example of the `PriceCalculator` class registered as the implementation of the `IPriceCalculator` interface:
</code></pre>
<pre><code class="language-csharp">[Service(typeof(IPriceCalculator), ServiceLifetime.Transient)]
class PriceCalculator : IPriceCalculator
{
    public decimal CalculateTaxes(OrderRequest o, Customer c)
    {
        return 0m; // body elided; the registration attribute is the point of this example
    }
}
</code></pre>
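<p>On the consuming side, only the interface is requested. Here is a hypothetical consumer sketch (the <code>IOrderService</code> and <code>OrderService</code> names are made up for illustration; the point is that <code>IPriceCalculator</code> is resolved from the registration above):</p>
<pre><code class="language-csharp">[Service(typeof(IOrderService), ServiceLifetime.Transient)]
class OrderService : IOrderService
{
    // Depends only on the interface; AppBoot injects the PriceCalculator
    // implementation registered via ServiceAttribute.
    private readonly IPriceCalculator priceCalculator;

    public OrderService(IPriceCalculator priceCalculator)
    {
        this.priceCalculator = priceCalculator;
    }
}
</code></pre>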
<p>This made a big difference. Even with the cheaper models, Copilot consistently registered services correctly.</p><hr><p><strong>5) AppBoot Plugin Model</strong></p>
<pre><code>- AppBoot supports dynamic loading of modules as plugins at runtime.
- In `Program.cs` where AppBoot is configured, use `.AddPlugin()` to specify the modules that should be loaded as plugins.
- Load as plugins all assemblies that are not referenced by any other assembly at compile time (i.e., have no incoming `ProjectReference` in the solution).
- Each call to `.AddPlugin()` creates a LoadContext isolated for that module assembly; dependent assemblies passed in the dependency array are loaded into the same LoadContext.
- `AddPlugin()` accepts module names, which are built by convention as `{ModuleName}.{AssemblySuffix}`. 
  - The `ModuleName` corresponds to the folder name under `Modules/` (e.g., `Sales`, `Notifications`).
  - The `AssemblySuffix` is the assembly name without the module name prefix (e.g., `Services`, `DbContext`).
  - Assemblies are named by convention as `{ModuleName}.{AssemblySuffix}` (e.g., `Sales.Services`, `Notifications.Services`, `Sales.DbContext`).
- When a module has dependent assemblies that are not referenced by the assembly that gives the plugin name, specify their names in the `.AddPlugin()` dependency parameter.
    - Example: `.AddPlugin("Sales.Services", new[] { "Sales.DbContext" })` — each string is a simple module name (not a file path or DLL filename).
</code></pre>
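<p>Putting these conventions together, plugin registration in <code>Program.cs</code> might look like the sketch below. Only the <code>.AddPlugin()</code> calls follow the documented convention; the surrounding bootstrapper name and fluent calls are assumptions for illustration, not the exact AppBoot API.</p>
<pre><code class="language-csharp">// Hypothetical bootstrapping sketch (host type and Create/Build calls assumed).
// Each AddPlugin() call loads the named module assembly into its own isolated
// LoadContext; assemblies listed in the dependency array share that LoadContext.
var host = ApplicationHost.Create()
    .AddPlugin("Notifications.Services")
    .AddPlugin("Sales.Services", new[] { "Sales.DbContext" })
    .Build();
</code></pre>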
<p>This was handled well by the advanced models, but not always by the cheaper ones. Still, this is part of setting up new projects, which wasn’t the focus of my experiment. And it’s easy enough to do manually.</p><hr><p><strong>6) Build for Dev/Debug</strong></p>
<pre><code>- Some plugin assemblies are not referenced by the host or by other projects. These assemblies are loaded dynamically at runtime and must be built for Dev/Debug.
- Ensure those assemblies are included in the Visual Studio build by adding them as build dependencies of the host or plugin root project using the __Project Build Dependencies__ feature in the solution.
    - Steps: right‑click the solution → choose __Project Build Dependencies__ → select the dependent projects (for example, add plugin projects as dependencies of `UI/ConsoleUi` or the plugin root).
    - The selection is saved in the `.sln` file and is not part of individual project files.
- If a plugin has additional assemblies that are not directly referenced, add those dependent projects as build dependencies of the plugin root project as well.
    - Example: `Sales.DbContext` is a dependency of the `Sales.Services` plugin; add `Sales.DbContext` as a build dependency of the `Sales.Services` project so both are built in Debug.

</code></pre>
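<p>For reference, a build dependency saved this way appears in the <code>.sln</code> file as a <code>ProjectSection(ProjectDependencies)</code> entry. An illustrative fragment with made-up project GUIDs:</p>
<pre><code>Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Sales.Services", "Modules\Sales\Sales.Services\Sales.Services.csproj", "{AAAAAAAA-0000-0000-0000-000000000001}"
	ProjectSection(ProjectDependencies) = postProject
		{AAAAAAAA-0000-0000-0000-000000000002} = {AAAAAAAA-0000-0000-0000-000000000002}
	EndProjectSection
EndProject
</code></pre>
<p>The GUID inside the section is the project GUID of the dependent project (here it would be <code>Sales.DbContext</code>), so Visual Studio builds it even though no <code>ProjectReference</code> exists.</p>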
<p>Copilot did not handle this. It couldn’t edit the solution file to set up project build dependencies, nor could it add new projects.</p><p>Again, not a real issue, since this is about setting up the structure, which wasn’t the focus of the experiment anyway.</p><hr><p><strong>7) Data &amp; Persistence</strong></p>
<pre><code>- `Infra/DataAccess` abstractions only, like `IRepository` or `IUnitOfWork`. Do not use EF Core directly. Do not take hard dependencies on EF Core. 
- Use `IRepository` for read-only cases; get the `IRepository` via DI.
- Use `IUnitOfWork` for transactional operations; get the `IUnitOfWork` via a factory function (`IRepository.CreateUnitOfWork`).

</code></pre>
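<p>To make the intended usage concrete, here is a minimal sketch of feature code against these abstractions. The <code>IRepository</code>/<code>IUnitOfWork</code> interfaces and <code>CreateUnitOfWork</code> come from the workshop's DataAccess component, but the other member names used here (<code>GetEntities</code>, <code>SaveChanges</code>) and the <code>Order</code> types are assumptions for illustration only:</p>
<pre><code class="language-csharp">[Service(typeof(IOrderQueries), ServiceLifetime.Transient)]
class OrderQueries : IOrderQueries
{
    private readonly IRepository repository; // read-only access, injected via DI

    public OrderQueries(IRepository repository)
    {
        this.repository = repository;
    }

    public IQueryable&lt;Order&gt; GetOpenOrders()
        =&gt; repository.GetEntities&lt;Order&gt;().Where(o =&gt; o.IsOpen);

    public void CloseOrder(int orderId)
    {
        // Transactional work goes through a unit of work created by the factory,
        // keeping the change in a single round trip to the database.
        using (IUnitOfWork uow = repository.CreateUnitOfWork())
        {
            Order order = uow.GetEntities&lt;Order&gt;().Single(o =&gt; o.Id == orderId);
            order.IsOpen = false;
            uow.SaveChanges();
        }
    }
}
</code></pre>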
<p>In general, Copilot didn’t do a great job with the DataAccess layer. It needed simple but important fixes.</p><p>The instructions did help the more advanced models get it right more often.</p><p>I might refine this section further and include a few code examples. Those samples are already part of the course, so adding them here would be easy and helpful.</p><hr><p><strong>8) Console UI</strong></p>
<pre><code>- Host project is `UI/ConsoleUi/`.
- Each module has its own subfolder under `Modules/` for console commands (e.g., `Modules/Sales/Console/`).
- The modules do not directly depend on `UI/ConsoleUi/`; instead, commands implement interfaces defined in `Modules/Contracts/Console/`.
</code></pre>
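<p>Concretely, a command class in a module's Console project implements an interface from <code>Contracts</code>. A hypothetical sketch (the members of <code>IConsoleCommand</code> are assumed for illustration):</p>
<pre><code class="language-csharp">// Lives in Modules/Sales/Console and is discovered through
// Modules/Contracts/Console; ConsoleUi never references Sales.Services.
[Service(typeof(IConsoleCommand), ServiceLifetime.Transient)]
class ListOrdersCommand : IConsoleCommand
{
    public string Name =&gt; "list-orders"; // assumed member

    public void Execute(string[] args)
    {
        // call into Sales services and print the results
    }
}
</code></pre>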
<p>This helped a lot. All features accessed through the console (we’re building a CLI) were placed correctly in the corresponding Console project, which only depends on the <code>Contracts</code>.</p><p>The structure was followed consistently.</p><hr><p><strong>9) Files Copilot Must Not Modify</strong></p>
<pre><code>- Any file under `Infra/**`
- Any file under `*/DbContext`
- Any `*.csproj` file

&gt; **Copilot:** If a change is requested in these paths, reply with an alternative that keeps generated/third-party code intact (e.g., partial class, extension method, adapter).
</code></pre>
<p>The advanced models took this very seriously. Claude refused to touch these files and explicitly pointed back to rule nine as the reason. It generated the code in the chat window instead and told me to apply it manually.</p><p>I appreciated that behavior.</p><h3 id="the-labs">The Labs</h3><p>You can find the lab instructions in the repo at this path: <code>.Practice/readme.md</code></p><p>The instructions are structured and clear. They’re written for developers attending the workshop, and they work well for an AI agent too.</p><p>My approach was simple. I prompted Copilot with the description of each lab, one at a time. At the second prompt, I added more details and clarified what needed to be done. Then I asked for a detailed plan. Only after that did I ask it to implement the code.</p><p>Afterwards, I reviewed the generated code and applied fixes where they were quick. In most cases, the fixes were straightforward.</p><p><strong>Lab 1 - Notify IsAlive for the Sales Module</strong></p><p>The first lab is straightforward. It asks you to replicate an existing mechanism in a new module.</p><p>Besides the written instructions, I also provided Copilot with the existing implementation to give it more context. I treated this as a small upfront investment to help it understand the pattern.</p><p>I used the cheap model for this one, and it handled it well.</p><p><strong>Lab 2 - Refactor the Console Application</strong></p><p>This lab required some restructuring. The goal was to use the infrastructure mechanisms to achieve low coupling between the UI and each feature implementation.</p><p>The cheap model struggled here. It didn’t respect the dependency rules and introduced incorrect references.</p><p>I had to refine the prompt several times and revert some of its changes. It didn’t feel like an efficient workflow.</p><p>When I switched to the advanced model, things improved a lot.</p><p>Copilot was very useful for building a clean CLI. 
This part involved verbose user-interaction code, and the model handled it well.</p><p><strong>Lab 3 - Create a Composite Console Application</strong></p><p>This lab focused on using the Composite pattern support built into the infrastructure.</p><p>The goal was to discover all implementations of the IConsoleCommand interface across all modules and build a CLI from them.</p><p>Copilot made this lab very efficient.</p><p>I only needed to break the lab into smaller steps, validate each step, and guide it in the right direction.</p><p>The only downside was the speed. It often took between thirty and seventy seconds to complete a task, which broke my workflow and focus.</p><p><strong>One thing I started to appreciate about Agent Mode is that it builds the solution after each step. If the build fails, it tries to fix the issue on its own. In most cases, it solved it after one or two attempts. A solid plus for autonomy.</strong></p><p>Still, the result was worth it. It generated a nicer CLI than the one I would have had the patience to write by hand.</p><p><strong>Lab 4 - DataAccess in Sales Module</strong></p><p>This lab focused on using the DataAccess component from the infrastructure.</p><p>Copilot handled the read-only part well, but it made several mistakes when adding or changing data. The code worked, but it was far from optimal. There were unnecessary round trips to the database and incorrect use of the IRepository and IUnitOfWork abstractions.</p><p>The fixes were easy, but they required a careful review.</p><p><strong>Labs 5 and 6 - DataAccess Interceptors</strong></p><p>These labs focused on using the interceptors provided by the DataAccess component.</p><p>Copilot had a good grasp of the concept and the structure. 
It created the right files, placed them in the correct locations, and set up the dependencies properly.</p><p>It was also very helpful with the repetitive work of making all the DTOs implement the IAuditable interface.</p><p>Overall, a great help.</p><p><strong>Labs 7 and 8 - Adding new Modules</strong></p><p>These last two labs are meant to test and recap the understanding of the structure created by the App Infra.</p><p>They ask you to add new modules that use DataAccess and have a UI, and to consume services from one module in another to prove the loose coupling between modules.</p><p>Again, Copilot was a great help. By this point I was using only the advanced model, and I had developed a good sense of how to build prompts more efficiently.</p><p>I settled into a workflow of reviewing some files while others were being generated. This helped not only with speed, but also with staying in the flow.</p><p>Copilot proved it had a good understanding of the structure, the low-coupling principles, and the dependency rules.</p><p>I appreciated summaries like:</p><p>Verify Solution Structure</p><p>After adding, your solution structure should look like:</p><pre><code>AppInfraDemo
├── UI
│   └── ConsoleUi
├── Modules
│   ├── Contracts
│   ├── Sales
│   │   ├── Sales.DataModel
│   │   ├── Sales.DbContext
│   │   ├── Sales.Services
│   │   └── Sales.ConsoleCommands
│   ├── Notifications
│   │   └── Notifications.Services
│   ├── Export
│   │   ├── Export.DataModel
│   │   └── Export.Services
│   └── ProductsManagement          ← NEW
│       ├── Products.DataModel      ← NEW
│       ├── Products.DbContext      ← NEW
│       ├── ProductsManagement.Services          ← NEW
│       └── ProductsManagement.ConsoleCommands   ← NEW
└── Infra
    ├── AppBoot
    ├── AppBoot.UnitTests
    └── DataAccess
</code></pre><p>Or</p><pre><code>Architecture Highlights:
Cross-Module Communication:

```
Sales.Services
  ↓ (depends on interface)
IPersonService (in Contracts)
  ↑ (implemented by)
PersonsManagement.Services
```</code></pre><p>These show that it has a good understanding of the structure and gives confidence that the changes will be right.</p><hr><h2 id="conclusion">Conclusion</h2><p>This experiment reinforced a key principle in Code Design: an AI Agent becomes effective when it operates inside a clear structure. When the architecture has well-defined boundaries, stable abstractions, and predictable patterns, Copilot can generate code that aligns with the design instead of fighting it. The Application Infrastructure provides the guardrails, and the AI simply follows them.</p><p>Strong conventions reduce ambiguity. Clean Architecture reduces decision noise. Together they create an environment where AI coding becomes reliable rather than accidental. The agent does not need to “understand” the whole system — it only needs to work within the rules. And when those rules are sharp and consistent, both the developer and the AI move faster with far fewer mistakes.</p><p>In short, structure amplifies the strengths of AI. A disciplined foundation makes Copilot more accurate, more predictable, and more useful. And that combination points toward a future where Agentic AI and good Software Design reinforce each other, rather than compete.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Integration Gateway for External Systems ]]>
            </title>
            <description>
                <![CDATA[ An Integration Gateway isolates your system from external changes and failures while standardizing communication. This article explores a Code Design approach using Contract-First Design, Pluggable Applications, and Clean Architecture for reliable and maintainable software integration. ]]>
            </description>
            <link>https://oncodedesign.com/blog/integration-gateway-for-external-systems/</link>
            <guid isPermaLink="false">6909c6a99effac000194ba5f</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 06 Nov 2025 07:42:08 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/11/KD0506---Integration-Gateway-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <h3 id=""></h3><h2 id="code-design-example-from-a-real-etrm-platform">Code Design Example from a Real ETRM Platform</h2><p>When building large-scale distributed systems, integration with third-party platforms is often one of the most sensitive and critical aspects. These external dependencies can easily become points of failure or sources of complexity if not handled properly. In this article, we explore a Code Design approach for implementing an <strong>Integration Gateway</strong> that helps isolate and standardize external communication.</p><hr><h2 id="the-role-of-the-integration-gateway">The Role of the Integration Gateway</h2><p>The Integration Gateway is a dedicated layer of services responsible for managing all interactions with systems outside our control. These may include TSOs (Transmission System Operators), Power Exchanges, asset control platforms, or data providers.</p><p>These integrations often share common technical requirements—security, monitoring, reliability, fault tolerance, diagnostics, and performance. At the same time, each external API has its own peculiarities. 
The Integration Gateway is designed to bridge that gap: it speaks the language of each external system while translating everything to a common contract understood by our core platform.</p><p>This approach protects the internal system from changes, errors, and unexpected behaviors coming from external parties.</p><h2 id="real-world-context-technical-design-for-an-etrm-system">Real-World Context: Technical Design for an ETRM System</h2><p>As part of a greenfield development project in the energy sector, I led the software design and implementation of a platform that enables grid balancing entities to:</p><ul><li>Trade ancillary services with TSOs</li><li>Buy and sell energy on Power Exchanges</li><li>Monitor and control production or consumption assets</li></ul><p>The system had to be multi-tenant, support multiple countries and energy markets, and run either as an on-premise solution or in the cloud using a SaaS model.</p><p>The Integration Gateway became a key part of this software design, as we needed to connect to many different external platforms while enabling customers to license only the specific integrations they required.</p><h2 id="technical-design-approach">Technical Design Approach</h2><p>To ensure maintainability and predictability, we applied a mix of Code Design patterns and supporting infrastructure:</p><ul><li><strong>Contract-First Approach</strong></li><li><strong>Pluggable Application Structure</strong></li><li><strong>Runtime Type Discovery</strong></li><li><strong>Clean Architecture Principles</strong></li></ul><p>These decisions shaped both the detailed design and the structure of our codebase, offering strong separation of concerns and promoting reusability.</p><h3 id="contract-first-design">Contract-First Design</h3><p>We defined interfaces at two different abstraction levels:</p><ol><li><strong>Internal Contracts</strong> – describing what our system expects from the external integration (e.g., data or behavior needed from a 
TSO)</li><li><strong>External Contracts</strong> – defining what each adapter needs to fulfill when connecting to a specific external API</li></ol><p>All contracts were isolated in "Contract Assemblies"—assemblies that include only interfaces, DTOs, and exceptions, without any logic.</p><p>This separation ensured loose coupling and clarity at the design level.</p><h3 id="pluggable-applications">Pluggable Applications</h3><p>Each external system (e.g., each TSO or Power Exchange) had a dedicated adapter packaged as a separate assembly. These acted as plugins.</p><p>The adapters translated between the contracts we defined and the actual APIs provided by the external parties.</p><p>At runtime, different versions of the same integration service (e.g., for TSOs) were deployed, each with its own plugin based on customer-specific configuration.</p><p>This allowed us to maintain a single codebase and dev team per integration type, while deploying multiple services for scalability, fault isolation, and operational independence.</p><h3 id="runtime-type-discovery">Runtime Type Discovery</h3><p>We used a type discovery mechanism to detect and register all plugin implementations at application startup.</p><p>When a service instance launched, it scanned all loaded assemblies and registered relevant implementations into the Dependency Injection container based on naming conventions.</p><p>This enabled us to customize deployments and behaviors without needing to recompile the application.</p><p>A lightweight implementation of this pattern can be found in our GitHub account part of the <a href="https://github.com/onCodeDesign/AppInfra-Training?ref=oncodedesign.com" rel="noreferrer">AppInfra-Training repository</a>.</p><h3 id="clean-architecture">Clean Architecture</h3><p>A core idea behind Clean Architecture is that business logic should be independent of frameworks or external SDKs.</p><p>In this Integration Gateway, the core logic did not reference any third-party libraries. 
Only the plugin assemblies were allowed to depend on SDKs provided by TSOs or Power Exchanges.</p><p>This separation made the codebase easier to test, maintain, and evolve.</p><hr><h2 id="summary">Summary</h2><p>In distributed platforms that rely on third-party services, the Integration Gateway acts as a protective boundary. It translates diverse external protocols into a common language, making the system resilient and maintainable.</p><p>By applying Contract-First principles, pluggable structures, runtime discovery, and Clean Architecture, you can build integration layers that scale, adapt, and hold up over time. This kind of technical design is essential when building modern platforms that must remain robust in the face of constant external change.</p><p>This example showcases how disciplined software design and clear separation of concerns can support both flexibility and control in real-world systems.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Viewing Software Projects Through an Activities Network ]]>
            </title>
            <description>
                <![CDATA[ Learn how Activities Network Diagrams improve Project Design and Project Management by clarifying dependencies, reducing cost, and increasing predictability of delivery, budget, and quality in complex software projects. ]]>
            </description>
            <link>https://oncodedesign.com/blog/viewing-software-projects-through-an-activities-network/</link>
            <guid isPermaLink="false">68da7caf91bae80001810d93</guid>
            <category>
                <![CDATA[ ProjectDesign ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 02 Oct 2025 08:46:31 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/09/KD0505---Viewing-Software-Projects-as-an-Activities-Network-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>In <strong>Project Design</strong>, one of the most effective ways to plan is by representing activities as a <strong>network of dependencies</strong> rather than as a flat task list. This method provides deeper visibility into how the work should unfold and how changes ripple across the plan.</p><h3 id="a-structured-view-for-project-planning-and-replanning">A Structured View for Project Planning and Replanning</h3><p>When the project plan is designed using such a network, and combined with realistic constraints, it forms a foundation that stakeholders can trust. This approach replaces intuition with a strategy grounded in <strong>Code Design</strong> principles—building confidence, improving <strong>predictability</strong>, and making it easier to manage complexity.</p><p>Even as execution progresses and changes occur, the network perspective enables proactive replanning, better alignment across teams, and clearer communication of impact on scope, <strong>budget</strong>, and <strong>delivery</strong>.</p><p>Simply put: analyzing the Activities Network is a practical tool for planning and managing complex software systems.</p><h3 id="why-use-an-activities-network-diagram">Why Use an Activities Network Diagram?</h3><ul><li><strong>Clarifies Dependencies:</strong> Shows the real order in which activities must be done.</li><li><strong>Improves Project Management:</strong> Plans are based on calculation, not guesswork.</li><li><strong>Supports Change Control:</strong> Makes it easier to understand how new requirements or risks affect progress.</li><li><strong>Improves Communication:</strong> Helps explain structure, scheduling, <strong>cost</strong>, and trade-offs to both technical and non-technical stakeholders.</li></ul><h3 id="building-the-activities-network">Building the Activities Network</h3><p>The starting point is the <strong>System Design</strong>, which outlines all the building blocks of the solution and the dependencies 
between them, based on core use-case analysis. While valuable, this design view has some limits:</p><ul><li>It is <strong>structural</strong>, not execution-oriented.</li><li>It is <strong>incomplete</strong> for planning.</li><li>It can be <strong>too detailed</strong> to use directly for scheduling.</li></ul><p>To transform it into an Activities Network Diagram:</p><ol><li><strong>List Activities</strong> for each building block.</li><li><strong>Add Non-Development Work</strong> such as requirements, documentation, testing, training, or integration.</li><li><strong>Define Dependencies</strong> between tasks (what must be finished before something else can start).</li><li><strong>Draw the Diagram</strong> as a directional graph.</li></ol><p>When represented as an <strong>Arrow Diagram</strong>—nodes as events and arrows as activities—the network becomes easier to follow and scalable to projects with 100+ activities.</p><h3 id="integration-points">Integration Points</h3><p>In this network, integration events are key. Integrating too many streams at once creates risk. Two activities are enough for an integration; more than three often leads to instability.</p><p>Whenever integration looks complex, it’s worth revisiting the <strong>Project Design</strong> and refining how the system supports core use-cases.</p><h3 id="critical-path-analysis">Critical Path Analysis</h3><p>The <strong>Critical Path</strong> represents the longest sequence of dependent activities, and it defines the absolute shortest time the project can be completed.</p><ul><li>The <strong>critical path duration</strong> is the minimum project timeline possible.</li><li>No project can be finished earlier than its critical path allows.</li></ul><p>To calculate it, activities are estimated in multiples of 5 days (5, 10, 15, 20, 25, 30). Anything longer should be broken down. 
This keeps estimation consistent and allows the <strong>law of large numbers</strong> to balance over- and under-estimates.</p><p><strong>Critical Path Analysis</strong> is the only reliable way to answer: <em>How long will it take to deliver the system?</em></p><p>Since any delay on the critical path delays the whole project, it must be monitored and recalculated regularly as execution evolves.</p><h3 id="staffing-the-project">Staffing the Project</h3><p>The Activities Network also helps answer a critical <strong>Project Management</strong> question:</p><p><strong>What is the minimum staffing level that ensures the critical path moves forward without interruption?</strong></p><p>This makes it possible to optimize <strong>cost</strong>, manage <strong>budget</strong>, and control <strong>risk</strong> while still ensuring quality and predictable <strong>delivery</strong>.</p><ul><li>Always assign the strongest developers to the critical path.</li><li>Assign the next strongest to near-critical paths.</li><li>Use the network to see the maximum parallelism achievable.</li></ul><p>The analysis also helps with <strong>staffing distribution</strong>:</p><ul><li>Not every resource is needed at the same time.</li><li>Hiring and offboarding monthly is inefficient and costly.</li></ul><p>Instead, staffing should follow four phases: initial ramp-up, enabling activities, peak staffing, and ramp-down. Activities can be shifted within available float to keep this distribution balanced, without delaying the critical path.</p><h3 id="change-management-and-replanning">Change Management and Replanning</h3><p>As the project evolves, changes are inevitable. 
An Activities Network provides a structured way to measure their impact:</p><ul><li>Dependencies may shift.</li><li>New activities may appear.</li><li>The critical path may change.</li><li>Staffing or scheduling may need to be adjusted.</li></ul><p>This makes it possible to clearly quantify the effect of changes on <strong>budget, cost, quality, and delivery</strong>—and communicate those impacts with confidence.</p><h3 id="conclusion">Conclusion</h3><p>Viewing projects through an Activities Network transforms <strong>Project Management</strong> from uncertain estimation into a predictable, <strong>Code Design</strong>-driven process. It improves <strong>project design</strong>, clarifies dependencies, and helps balance <strong>budget, cost, quality, and delivery</strong>—making it an essential tool for planning and executing complex software systems.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Detailed Technical Design: Bridging Architecture and Code for Predictable Delivery ]]>
            </title>
            <description>
                <![CDATA[ Detailed Technical Design bridges architecture and code. By defining requirements, contracts, and code design upfront, teams gain predictability, reduce rework, and build consistent structure, leading to efficient, maintainable software delivery. ]]>
            </description>
            <link>https://oncodedesign.com/blog/detailed-technical-design-bridging-architecture-and-code-for-predictable-delivery/</link>
            <guid isPermaLink="false">68b0780d51ff2e0001d2180e</guid>
            <category>
                <![CDATA[ technical design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 04 Sep 2025 09:00:46 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/08/KD0504---Detailed-Technical-Design-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>Detailed Technical Design is a vital step in software development.</p><p>You can’t jump directly from <strong>architecture</strong>—which provides the overall <strong>System Design</strong>—into coding. Not if you want reasonable development costs, long-term maintainability, and predictable delivery.</p><p>System Design outlines the system’s services and building blocks, as well as how they interact. Detailed Technical Design, on the other hand, specifies how each of those elements should actually be built.</p><h2 id="why-detailed-technical-design-matters">Why Detailed Technical Design Matters</h2><p>Like any design discipline, its main role is to reduce costs by finding the right solution early. It is always cheaper to test a design (sometimes through a Proof of Concept or custom testing tools) than to fully implement a component only to later realize it doesn’t fit.</p><p>A common misconception is that design “emerges” from code. In reality, engineering follows the opposite direction: first design, then build. Skipping this step usually undermines <strong>predictability</strong>.</p><hr><h2 id="establishing-technical-design-as-a-process">Establishing Technical Design as a Process</h2><p>Technical Design should exist as a clear activity in the development process.</p><p>On the projects I lead, I always emphasize this stage. Usually an architect or senior developer owns the task, since it requires strong analytical skills and deep system knowledge.</p><p>Scrum often misses this step. I’ve seen estimates made on half-understood solutions, riddled with unknowns, or based on assumptions the team didn’t share. Grooming, backlog refinement, or planning meetings rarely leave enough space for the analytical thinking needed here.</p><p>When estimates fail, defects pile up, or rework becomes unavoidable, developers often get blamed. But the real cause is skipping a proper <strong>Technical Design</strong> phase. 
Running Scrum ceremonies on top of well-defined Technical Designs leads to better quality, higher efficiency, and improved <strong>predictability</strong>.</p><h2 id="what-a-detailed-technical-design-should-cover">What a Detailed Technical Design Should Cover</h2><p>A solid Detailed Technical Design usually answers three key questions:</p><ol><li><strong>What are the requirements of the service/component?</strong></li><li><strong>What contracts define how others interact with it?</strong></li><li><strong>How will those contracts be implemented?</strong> (including internal <strong>code design</strong>, structure, and technologies).</li></ol><h3 id="1-service-requirements"><strong>1. Service Requirements</strong></h3><p>System Design defines services and their interactions to support requirements like scalability, reliability, or security. Each service then has its own responsibilities to meet.</p><p>These need to be written down and validated against the system’s use cases.</p><p>Requirements are rarely tied directly to a feature.<br>Example: Just as a jet’s fuel pump has nothing to do with serving meals, an <strong>IdentityProviderService</strong> has nothing to do with placing an order. Its requirement is clear: provide a security token from username and password.</p><h3 id="2-service-contracts"><strong>2. Service Contracts</strong></h3><p>Strong contracts are the foundation of modular systems. They should be:</p><ul><li>Cohesive</li><li>Logically consistent</li><li>Independent</li><li>Reusable</li></ul><p>The level of detail depends on the project and the team’s experience—from draft interfaces with placeholder functions to fully defined APIs with detailed parameters.</p><h3 id="3-implementation"><strong>3. 
Implementation</strong></h3><p>The Technical Design should define:</p><ul><li>How contracts are implemented</li><li>Which frameworks and technologies will be used</li><li>How those choices meet the service requirements</li></ul><p>Even for something like a web client—where frameworks set many conventions—the design should specify best practices, exceptions, and customizations.</p><p>In larger systems with many services, keeping a consistent <strong>structure</strong> is crucial. Components should look and feel like part of the same system, which is only possible through consistent design practices.</p><p>The design should also capture:</p><ul><li>Key technical decisions</li><li>Alternatives considered</li><li>The reasoning behind choices</li></ul><h2 id="an-iterative-activity">An Iterative Activity</h2><p>Design is not linear. Each step gets revisited, refined, and validated against core use cases. Mistakes are expected—they’re much cheaper to fix in design than in code.</p><p>Peer review sessions are often invaluable, as a fresh set of eyes helps challenge assumptions and validate ideas.</p><h2 id="deliverables-of-technical-design">Deliverables of Technical Design</h2><p>The output of a Technical Design can include:</p><ul><li>A design document</li><li>Diagrams and interface definitions (sometimes written in code)</li><li>Proofs of Concept to validate design choices</li><li>Small demos (vertical slices) showing how the design applies in practice</li></ul><hr><h2 id="case-study-messaging-component-in-an-energy-project">Case Study: Messaging Component in an Energy Project</h2><p>In one energy system I worked on, the <strong>architecture</strong> defined a <strong>Messaging Component</strong> that had to support both Pub/Sub messaging and Fire-and-Forget commands.</p><h3 id="requirements">Requirements:</h3><ul><li>Message durability, reliability, and deduplication</li><li>Ownership and security</li><li>High performance under load</li><li>Publishing 
transactions</li><li>Monitoring and diagnostics</li></ul><h3 id="contracts">Contracts:</h3><p>We defined the main concepts first (Message, Event, Task, Endpoint, Handler, MessageOwner, etc.). After several iterations, these were refined into clear interfaces in C#.</p><h3 id="implementation">Implementation:</h3><p>We compared Kafka and RabbitMQ. Through PoCs, RabbitMQ proved to be the better fit.</p><p>The design also established the <strong>code structure</strong>:</p><ul><li>Assemblies with clear roles and dependencies</li><li>Rules for referencing across services</li><li>How and where messages owned by other services would be written</li></ul><p>Finally, we created demos to show how events and tasks would be published and handled.</p><h2 id="consistency-across-services">Consistency Across Services</h2><p>The same project had over a dozen services. Two examples—<strong>DataManagement</strong> and <strong>LongRunningFlows</strong>—were very different in purpose, yet shared the same coding <strong>structure</strong> and dependency principles. This consistency was achieved through detailed <strong>code design</strong> and technical guidelines.</p><hr><h2 id="the-value-of-detailed-technical-design">The Value of Detailed Technical Design</h2><p>The effort required for Detailed Technical Design is modest compared to implementation.</p><p>Designing a service or component usually takes two to five days. Sometimes, more time is needed for PoCs and technology evaluations, which should be handled separately.</p><p>By formalizing the design before writing code, you save time, reduce risks, and build systems that are easier to maintain.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ 10 Practical Approaches That Shape My Work in Software Architecture ]]>
            </title>
            <description>
                <![CDATA[ After 13+ years in software architecture, I’ve defined 10 key practices that help deliver predictable, maintainable software—covering design structure, process, team setup, and infrastructure. Here’s how I keep complexity under control and systems built to last. ]]>
            </description>
            <link>https://oncodedesign.com/blog/10-practical-approaches-that-shape-my-work-in-software-architecture/</link>
            <guid isPermaLink="false">687a1a04d93ed60001b95c2a</guid>
            <category>
                <![CDATA[ Software Architecture ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Fri, 18 Jul 2025 12:58:36 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/07/KD0503---10-Practices-I-Apply-as-a-Software-or-Solution-Architect-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>For over 13 years, I’ve worked in various software architecture roles—across startups and established companies alike. I’ve focused on projects with long-term development goals, because that's where good <strong>software design</strong> really makes a difference.</p><p>Over time, I’ve refined a set of working practices that help ensure stable execution, technical clarity, and <strong>predictability</strong> in delivery. Especially in recent years, my approach has evolved further, shaped by both advanced training and the lessons learned from real project work.</p><p>Here’s what I rely on:</p><h2 id="1-avoid-splitting-by-functionality">1. Avoid Splitting by Functionality</h2><p>One of the most impactful changes I made was to stop decomposing systems by business functions. Instead, I identify areas that are likely to evolve independently—grouping logic around those axes of change. This method leads to a <strong>code design</strong> that localizes change and protects the overall structure from ripple effects, supporting long-term maintainability.</p><h2 id="2-maintain-separation-of-concerns-in-code">2. Maintain Separation of Concerns in Code</h2><p>Even with a solid architectural model, it’s easy for the actual implementation to drift. I enforce separation of concerns by putting clear boundaries in the <strong>code design</strong>, especially when it comes to dependencies between modules. This keeps complexity in check and ensures that architectural intent is preserved over time.</p><h2 id="3-stay-out-of-feature-development">3. Stay Out of Feature Development</h2><p>It may sound counterintuitive, but I intentionally avoid building features. Developers should own the domain. As an architect, I stay focused on structure and delivery. Getting involved in individual features creates distractions and pulls attention toward short-term urgency at the cost of long-term system health.</p><h2 id="4-build-a-custom-application-infrastructure">4. 
Build a Custom Application Infrastructure</h2><p>Creating infrastructure components tailored to the project’s needs is how I bridge design and implementation. These are abstracted building blocks that simplify the work of feature development teams, reduce duplication, and standardize behavior. This kind of foundational setup makes writing <strong>unit tests</strong> and adding new features much easier.</p><h2 id="5-shape-the-development-process">5. Shape the Development Process</h2><p>Good <strong>software design</strong> alone isn’t enough. The process matters too. I actively work with teams to define how design, development, and validation happen. It’s important to create space for technical design to happen independently, and to structure work in a way that aligns with the system’s complexity and business priorities.</p><h2 id="6-use-an-activities-network-not-a-task-list">6. Use an Activities Network, Not a Task List</h2><p>Instead of linear backlogs, I plan around a network of activities and dependencies. This approach improves visibility for all roles—developers, managers, testers—and helps us manage change effectively. It’s a practical tool that supports <strong>predictable delivery</strong> in dynamic environments.</p><h2 id="7-actively-support-project-staffing">7. Actively Support Project Staffing</h2><p>Staffing decisions directly affect technical outcomes. I collaborate with PMs to plan who joins the team, at what point, and with what level of experience. Matching skills to the structure of the system—rather than leaving it to chance—can significantly improve delivery quality and speed.</p><h2 id="8-communicate-progress-early-and-often">8. Communicate Progress Early and Often</h2><p>Early in the project, there's little visible functionality. Still, I make sure to show progress—milestones in setting up tools, making architectural decisions, or validating approaches. 
Keeping stakeholders informed helps build trust and reduces unnecessary pressure on the team.</p><h2 id="9-delay-features-until-the-foundation-is-ready">9. Delay Features Until the Foundation Is Ready</h2><p>Most of the initial work in a serious project is foundational: CI/CD setup, infrastructure, and system design. That’s deliberate. Once enough groundwork is in place, I look for opportunities to start building features in parallel. Done right, this staged delivery helps manage risk without delaying visible outcomes.</p><h2 id="10-estimate-time-and-cost-with-structure">10. Estimate Time and Cost with Structure</h2><p>Reliable estimates require more than guesswork. I correlate the activities network with staffing plans to produce grounded forecasts. This helps manage expectations and leaves room to adapt. <strong>Predictability</strong> comes from this discipline—not just optimism.</p><h2 id="final-thoughts">Final Thoughts</h2><p>These practices help me keep a clear focus on <strong>design quality</strong>, technical clarity, and the ability to deliver predictable results. Whether it’s through clean <strong>code design</strong>, structured estimation, or modular infrastructure that supports <strong>unit testing</strong>, these principles give me the tools to build systems that scale and last.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Project Design in Software Delivery: An Engineering Approach ]]>
            </title>
            <description>
                <![CDATA[ Project Design brings an engineering approach to software projects. It enables Predictability through clear planning, accurate estimation, and strong collaboration between architect and PM. Learn how Code Design and Training improve delivery outcomes. ]]>
            </description>
            <link>https://oncodedesign.com/blog/project-design-in-software-delivery-an-engineering-approach/</link>
            <guid isPermaLink="false">686b96959305ed00017aea01</guid>
            <category>
                <![CDATA[ design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 07 Jul 2025 12:48:25 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/07/KD0502---Project-Design-and-Software-Project-Management-blog.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>I first encountered <strong>Project Design</strong> during my <strong>IDesign</strong> <strong>Training</strong> as an architect. The concept is also thoroughly discussed in Juval Löwy’s book <em>Righting Software</em>.</p><p>Even after applying it on real projects, I still find it hard to explain clearly—and even harder to convince others of its value.</p><p>At its heart, <strong>Project Design</strong> is an engineering approach to planning that aims to deliver on time and within budget. For teams unfamiliar with a truly engineering-led method of planning and running software projects, it often sounds too good to be true.</p><p>Many dismiss it outright, calling it “waterfall” without understanding what it actually is.</p><h2 id="what-does-project-design-mean">What Does Project Design Mean?</h2><p>Just as you <strong>Design</strong> a software system, you also have to <strong>Design</strong> the project itself.</p><p><strong>Project Design</strong> includes:</p><ul><li><strong>Accurate Estimation of Duration and Cost</strong><br>Done like an engineer would: no guesswork or wishful thinking, but clear calculations based on realistic assumptions.</li><li><strong>Building Multiple Executable Options</strong><br>Each option considers staffing levels, team skills, milestones, and business goals. These options will differ in time, cost, and risk—giving leadership the ability to choose deliberately.</li><li><strong>Validating the Plan</strong><br>Can this team really deliver this plan? Would I be able to do it with these people? The plan must match the team’s real capabilities.</li></ul><h2 id="who-is-responsible-for-project-design">Who Is Responsible for Project Design?</h2><p>This is the architect’s job—not the project manager’s.</p><p>The architect works closely with the PM, but takes the lead in making the plan. It's an engineering process with trade-offs, calculations, creative solutions, and constraints. 
Fundamentally, it’s about making <strong>informed decisions</strong>.</p><h2 id="project-design-vs-project-management">Project Design vs. Project Management</h2><p><strong>Project Design</strong> is to project management what <strong>System Design</strong> is to coding: it’s the blueprint.</p><p>Architecture is the design of the system; coding implements it. Similarly, <strong>Project Design</strong> is the plan for how the project will be executed, and <strong>project management</strong> is the actual execution of that plan.</p><h2 id="how-does-project-design-work-in-practice">How Does Project Design Work in Practice?</h2><p><strong>Project Design</strong> builds on <strong>System Design</strong>. The system architecture describes what needs to be built and the technical dependencies. From there:</p><ul><li>The architect and PM identify all activities and their dependencies.</li><li>This is turned into an <strong>Activities Network Diagram</strong>—a visual representation of the plan.</li><li>Analysis and calculations are done on this diagram to support better decisions.</li><li>Staffing requirements are planned in detail.</li><li>The result is a clear, actionable set of “assembly instructions” for the project.</li></ul><h2 id="the-lego-analogy">The Lego Analogy</h2><p>Think about a Lego set. The picture on the box shows the finished model. The bags of bricks give you the parts. But neither is enough.</p><p>What you really need is the <strong>assembly guide</strong>—the instructions telling you the sequence to put pieces together, what can be done in parallel, and how components fit.</p><p><strong>Project Design</strong> delivers these assembly instructions for software projects: defining the order of work, identifying parallel tasks, integration points, and dependencies. 
<strong>Architecture alone isn’t enough.</strong></p><hr><h2 id="tools-and-monitoring">Tools and Monitoring</h2><p>A well-done <strong>Project Design</strong> equips project management with:</p><ul><li>A plan they can actually track.</li><li>Tools to detect when execution is off-course.</li><li>The ability, together with the architect, to take early corrective action and avoid big delays.</li></ul><h2 id="the-value-of-project-design">The Value of Project Design</h2><p><strong>Project Design</strong> takes practice and learning to master. But even partial use of this approach can greatly improve <strong>predictability</strong> in delivery.</p><p>Bringing <strong>Code Design</strong> discipline to project planning makes it far more likely to deliver successfully, on time, and on budget.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Enabling Predictable Delivery with Application Infrastructure ]]>
            </title>
            <description>
                <![CDATA[ Learn how Application Infrastructure enforces architecture in code to support predictable delivery. Improve code design, reduce complexity, and boost team efficiency in long-term software projects. ]]>
            </description>
            <link>https://oncodedesign.com/blog/enabling-predictable-delivery-with-application-infrastructure/</link>
            <guid isPermaLink="false">682c425f9d97f100012b5ec4</guid>
            <category>
                <![CDATA[ enforce consistency ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 20 May 2025 12:05:42 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/05/KD0501---Ensuring-Architectural-Integrity-with-Application-Infrastructure-3.png" medium="image" />
            <content:encoded>
                <![CDATA[ <p>When it comes to <strong>code design</strong> that holds up over time, a solid architecture isn’t enough. You also need the mechanisms in place to make sure it gets implemented correctly. That’s where <strong>Application Infrastructure</strong> plays a key role. It’s a practical and effective way to support and enforce your architectural decisions directly in the codebase.</p><p>For more than 15 years, I’ve relied on this approach while leading the design and implementation of complex systems. It’s helped me build software that remains maintainable and predictable — even as teams evolve, systems scale, and requirements shift.</p><h3 id="what-is-application-infrastructure">What Is Application Infrastructure?</h3><p><strong>Application Infrastructure</strong> refers to a set of technical building blocks that are specific to your application. These components define and enforce the structure of your code, making it easier to follow architectural guidelines and harder to go against them.</p><p>You might wonder: isn’t that what frameworks or foundations are for? They’re close, but not quite. Application Infrastructure is more tailored. It includes structure and guidance like a framework but also provides tooling — a toolkit — that helps reduce complexity and enables developers to move faster while doing the right thing.</p><h3 id="why-it-matters">Why It Matters</h3><p>Even the best <strong>project design</strong> can fall apart in implementation. I’ve seen good architectures fail because the code wasn’t structured to support them. Whether due to time pressure, misunderstandings, or lack of training, the result is the same: a system that doesn’t meet its goals.</p><p>In one of my earlier projects, we followed a 4-tier architecture. The structure was sound, but developers started bypassing the service layer and accessing the database directly from the UI. It worked, feature-wise, but broke the core non-functional requirements. 
When we introduced Application Infrastructure, the rules became enforced through the structure itself — and code that violated them simply couldn’t be written anymore.</p><p>This isn’t limited to traditional architectures. The same principles apply to microservices, cloud-native apps, and modern web or desktop clients. The goal is the same: <strong>predictability</strong> in how code is written, maintained, and evolved.</p><h3 id="how-it-works">How It Works</h3><p>The heart of Application Infrastructure lies in enforcing rules for how code is organized and interacts. For example, in a .NET environment, I usually:</p><ul><li>Separate technical components into an <code>Infrastructure</code> layer, which contains no business logic</li><li>Create <code>Contract</code> assemblies for interfaces and DTOs only — no logic here either</li><li>Place use-case implementations under a <code>Modules</code> folder, organized by functional boundaries</li></ul><p>Clear reference rules govern which parts can depend on others, and these rules are verified through static analysis and CI pipelines.</p><p>To keep Dependency Injection clean and automated, we use a <strong>Type Discovery</strong> mechanism — a lightweight library that handles registration while respecting the defined structure.</p><h3 id="real-benefits-in-real-projects">Real Benefits in Real Projects</h3><p>Application Infrastructure delivers concrete benefits:</p><ul><li><strong>Code consistency</strong>: Patterns emerge naturally, and similar problems are solved in similar ways.</li><li><strong>Resilience to team volatility</strong>: New team members learn faster because the structure is self-explanatory.</li><li><strong>Modular ownership</strong>: Even in a monolith, teams can take ownership of individual modules, because there are no hard build-time dependencies among them.</li><li><strong>Reduced framework coupling</strong>: Wrapping external libraries behind internal APIs aligns with Clean Architecture.</li><li><strong>Lower 
complexity</strong>: Infrastructure handles the plumbing — messaging, security, communication — and exposes clean APIs tailored to your app.</li></ul><h3 id="trade-offs-and-considerations">Trade-offs and Considerations</h3><p>This approach isn’t without effort. You need to build a good portion of the infrastructure early. With experience, though, you learn how to deliver just enough to support early features and evolve it along the way.</p><p>From a team perspective, you can either rotate developers into infrastructure work or dedicate a small team — especially when senior devs are limited. Either way, the return on investment is high.</p><h3 id="why-it-matters-for-learning-and-training">Why It Matters for Learning and Training</h3><p>If you’re doing <strong>software training</strong>, mentoring, or investing in team <strong>learning</strong>, Application Infrastructure becomes even more valuable. It reinforces the lessons you teach about design patterns, separation of concerns, and maintainable architecture — directly in the day-to-day coding process.</p><p>It’s one of the most effective ways I’ve found to bridge the gap between theory and practice in software development.</p><hr><p>If you're looking for more predictability in your software delivery, or if you're teaching teams how to build systems that last, start with your Application Infrastructure. It’s one of the best tools we have for making design real — and keeping it that way.</p> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Designing a Distributed System for Long-Term Development ]]>
            </title>
            <description>
                <![CDATA[ Building and evolving a complex system in production for years demanded high technical quality, even with team volatility. This session shares our project’s story, focusing on strategies for sustainable long-term development. ]]>
            </description>
            <link>https://oncodedesign.com/talks/designing-a-distributed-system-for-long-term-development/</link>
            <guid isPermaLink="false">67b783aaac33d70001151f39</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 20 Feb 2025 21:52:00 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/02/LongTermDev-640x445.png" medium="image" />
            <content:encoded>
                <![CDATA[ <h2 id="Abstract">Abstract</h2>
<blockquote>
<p>Until recently, I served as the Solution Architect for a distributed system in the energy sector—a system critical to grid balancing and enabling energy trading. It was a greenfield project that we built from the ground up.</p>
</blockquote>
<blockquote>
<p>Developing and evolving such a complex system over several years, while keeping it in production, required us to uphold a high level of technical quality—especially in the face of team changes. This project allowed me to put into practice the experience I've accumulated in designing distributed systems with long-term development in mind.</p>
</blockquote>
<blockquote>
<p>In this session, I'll share the story of this project, highlighting design strategies that supported long-term development. Drawing on over 15 years of experience in high-pressure projects that demanded availability, reliability, and precision, I’ll provide practical insights into creating systems that will stand the test of time.</p>
</blockquote>
<h2 id="Resources">Resources</h2>
<ul>
<li>Slides: <a href="https://www.slideshare.net/slideshow/designing-a-distributed-system-for-long-term-development/272885386?ref=oncodedesign.com">Slide Share</a></li>
<li>Recording: <a href="https://youtu.be/hqSQ03vL2sc?ref=oncodedesign.com">Codecamp 2024</a></li>
<li>Referenced code snippets: <a href="https://github.com/onCodeDesign/Code-Design-Training?ref=oncodedesign.com">Code Design Training on GitHub</a></li>
<li>iQuarc.AppBoot: <a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">on GitHub</a></li>
<li>iQuarc.DataAccess: <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">on GitHub</a></li>
</ul>
 ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Implementing Clean Architecture ]]>
            </title>
            <description>
                <![CDATA[ When projects technically fail, the reason is frequently uncontrolled complexity – where Clean Architecture remains a concept on paper, not in code. This talk demonstrates how to achieve predictability by implementing Clean Architecture based on a code structure, not just discipline or code reviews. ]]>
            </description>
            <link>https://oncodedesign.com/talks/implementing-clean-architecture/</link>
            <guid isPermaLink="false">67b78450ac33d70001151f4e</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 20 Feb 2025 21:50:00 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2025/02/why-architecture.png" medium="image" />
            <content:encoded>
                <![CDATA[ <h2 id="Abstract">Abstract</h2>
<blockquote>
<p>Has implementing Clean Architecture become more of an ideal than a reality in your projects? Despite its clear rules and intended separations, the complexity of growing codebases and the crunch of time often render these principles invisible in practice. When projects technically fail, the reason is often <em>uncontrolled complexity</em> – where Clean Architecture remains a concept on paper, not in code.</p>
<p>In this session, I will show how to achieve predictability by implementing Clean Architecture through structure, rather than relying only on discipline and code reviews. I'll show how to create a structure that makes it easy to write code that follows your architecture and, at the same time, makes it difficult to write code that doesn't.</p>
<p>You will walk away with a recipe and building blocks for creating a foundation in code that sustains Clean Architecture, and a level of code quality that can control the complexity of the project.</p>
<p>This structure will enforce separation of concerns and control how dependencies are created. It will deliver predictability by creating a Code Design that is maintainable, extensible, and reusable.</p>
</blockquote>
<h2 id="Resources">Resources</h2>
<ul>
<li>Slides: <a href="https://www.slideshare.net/slideshow/implementing-clean-architecture-conference-talk-41e2/272015963?ref=oncodedesign.com">Slide Share</a></li>
<li>Recording: <a href="https://youtu.be/P8aJlrXo2Yw?ref=oncodedesign.com">Codecamp 2024</a></li>
<li>Referenced code snippets: <a href="https://github.com/iQuarc/Code-Design-Training?ref=oncodedesign.com">Code Design Training on GitHub</a></li>
<li>iQuarc.AppBoot: <a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">on GitHub</a></li>
<li>iQuarc.DataAccess: <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">on GitHub</a></li>
</ul>
 ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Enforce Consistency with Clean Architecture ]]>
            </title>
            <description>
<![CDATA[ When projects fail for reasons that are primarily technical, the cause is often uncontrolled complexity. Complexity gets out of hand when the code lacks structure, when it lacks Clean Architecture.
In this session I will show how we can achieve consistency through structure ]]>
            </description>
            <link>https://oncodedesign.com/talks/enforce-consistency-with-clean-architecture/</link>
            <guid isPermaLink="false">67b7860eac33d70001151f65</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 20 Feb 2025 21:45:51 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <h2 id="Abstract">Abstract</h2>
<blockquote>
<p>When projects fail for reasons that are primarily technical, the cause is often uncontrolled complexity. Complexity gets out of hand when the code lacks structure, when it lacks Clean Architecture. In large software projects, where many developers work on the same code base, one of the biggest challenges is to get consistency in the code and to create development patterns for common problems, so you can control the complexity and size of the system.</p>
</blockquote>
<blockquote>
<p>In this session I will show how we can achieve consistency through structure, rather than relying on discipline alone. We will look at some basic building blocks of an application infrastructure that enforce how dependencies are created, how dependency injection is used, and how separation of the data access concerns is maintained.</p>
</blockquote>
<h2 id="Resources">Resources</h2>
<ul>
<li>Slides: <a href="https://www.slideshare.net/FlorinCoros/enforce-consistentcy-with-clean-architecture?ref=oncodedesign.com">on SlideShare</a></li>
<li>Code snippets on the slides: <a href="https://github.com/iQuarc/Code-Design-Training?ref=oncodedesign.com">Code Design Training on GitHub</a></li>
<li>Code Design Training: <a href="https://oncodedesign.com/training-code-design">description</a></li>
<li>Implementing Clean Architecture Training: <a href="https://oncodedesign.com/training-clean-architecture/">description</a></li>
<li>iQuarc.AppBoot: <a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">on GitHub</a></li>
<li>iQuarc.DataAccess: <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">on GitHub</a></li>
</ul>
 ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Why Programmers Should Challenge the Stakeholders ]]>
            </title>
            <description>
                <![CDATA[ Have you ever worked on a state of the art programming project? And then a new requirement arrives, it&#39;s implemented and suddenly you&#39;re working on a legacy project? ]]>
            </description>
            <link>https://oncodedesign.com/blog/why-programmers-should-challenge-the-stakeholders/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76bb2</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 14 Aug 2018 08:28:07 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/08/challenge-everything-2.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <p>Have you ever worked on a state-of-the-art programming project? And then a new requirement arrives, it gets implemented, and suddenly you're working on a legacy project?</p>
<p>It usually starts like this: the product owner comes and says they need a project to do a task, be it simple or complex. The architect and the team think about the design, about how data should be stored, processed, and displayed. The project starts, and the latest and best tools and frameworks are used. This is going to be a great project. All the team ever hoped for was a new, greenfield project. Everyone is motivated. This time it will be different.</p>
<p>The first version is great. The UI looks very posh, it's the latest fashion, and the database strikes the perfect balance between third normal form and the performance requirements. You even asked the stakeholders what the nonfunctional requirements are and wrote some automated tests around them. You may have some small performance problems in some strange cases, but once you see how the application is used, you will find the best solution for them in v2.</p>
<p>People see the application, they are charmed, and they want more out of it. This is where things get tricky. You built a product for one purpose, and with that purpose and all the best practices in mind you made the design. Now, as things evolve, the purpose changes. It happens almost every time, and numerous books have been written to help us, developers, protect our projects from this. Robert C. Martin is just the most famous author to have built a writing and speaking career on this topic. The Open/Closed Principle is the best example of a good code design practice for minimizing the risk of breaking the project in v2+. But nobody tells you, the developer, how to approach the new requirements, how to understand the core need and model it in order to come up with a solution that is the best compromise between the business and the technical sides.</p>
<p>Let's say, for example, you need to display all the orders created in a store. First, you'll have a table with columns for the order number, the date, the state, the value, and the possibility to see the details. This is great, clear, and straightforward. Then the users realize that they not only want to filter the table, but also want to see new, processing, and closed orders in separate tabs, because it would be easier to just have a tab right there and click on it. Of course, the product owner still wants to see everything aggregated; someone must have gotten used to that. After seeing the orders separately, new functional ideas come to mind. The processing orders should have a new column with a new status, like "ordered from supplier/in transit from supplier/in stock/sent for delivery". These do not make sense for new or closed orders, so the "All" tab will be inconsistent in some way. Either you populate the new column just for processing orders, or you take the column out, or you fill it with "closed" or "new" for those two statuses. Either way, your model will not be 100% compatible with your view. This is where well-written code signals a problem in the specs. You, as a professional, can either challenge the requirements as much as you can, or take them as they are, listen to the product owner, admit that it's just one new column and shouldn't be that much work, and end up, without realizing it, in a legacy project.</p>
<p>There is a trap in trying to align the magnitude of the business requirements with the magnitude of the changes needed in code. It's "just" a new column; it's "just" a new view like the old one with minor changes. What we as developers should do is detect the moments when the functionality evolves even a little bit away from the old one. In that moment we should make the point that the purpose of the application is no longer the same, or that we need to make structural changes, probably even in the requirements.</p>
<p>We lack the concept of "functional refactoring". Wouldn't it be nice, from time to time, to analyse the capabilities of our application and reorganize or delete the legacy ones? An application that has lived more than a few years and has grown during that time is usually a monster, functionally speaking: users can do stuff nobody remembers is possible, old functionalities are still there even if nobody uses them, and users sometimes discover flows no one thought about and use them day by day. It looks a little bit like Istanbul: it used to be magnificent, it's still magnificent here and there, but its growth is out of control and these days it looks like an unpredictable, amazing monster.</p>
<p>So that's the problem. What's the solution?</p>
<p>1.&nbsp;<strong>Listen to your code.</strong>&nbsp;If you think you must add a forced conditional, whether it's an if, some kind of polymorphism, or dynamically composed screens, no matter how cool it is from the technical point of view, take a closer look. Is it a natural evolution of the code and the functional behavior? If not, you should challenge it.</p>
<p>2.&nbsp;<strong>Know your business domain.</strong>&nbsp;You should know what every stakeholder is gaining from the product. You should know how the users behave inside the app, how the roadmap for the product is aligned with the current requirements, and why and when they are needed. Try to find out all these things. Log flows in the app to see what the users are doing, ask around the organization if people are familiar with the app, ask the decision makers what they think about it, what it's missing, and what they think its strengths are. Listen carefully to the boring organizational status meetings when products are presented. Look at your competitors. These are all sources of information that we developers constantly ignore, because we think it would be too much to fit in our heads. But by knowing these things and paying attention to these details, we can add real value, because this is where you learn how the product should evolve. Everyone can write code these days; the internet is helping, IDEs are helping, and there are a lot of good books and talks about good programming practices. What a great developer should be able to do is identify the real needs of their project and make them happen.</p>
<p>3.&nbsp;<strong>Always ask the stakeholders what the basic need is.</strong> When you are required to make some modifications, ask why, if it's not obvious. Maybe the need can be satisfied better. Maybe, going back to our example, a new screen would be better than adding a new column. The user might later need to update the processing orders in ways that would not apply to new and closed orders.</p>
<p>4.&nbsp;<strong>Challenge requirements.</strong>&nbsp;The idea of adding a new column on the tab with all orders is a bad one. Not uncommon, just bad. The users will get mixed information and will need to do classifications in their own minds in order to understand what is presented. You should ask the product owner why that tab is needed. If it's because aggregating all the information is what every app does, that is a bad answer. If it's because the user is accustomed to seeing this as a first screen, that's also bad. If it's because the aggregated data provides important insights, like how many orders are received in a day, then you have a better answer. But then displaying all the information available for all order types is useless; users will, most probably, switch tabs for the detailed and filtered information. In my experience, inconsistencies in the requirements, which of course are reflected in the code, are the most fragile and volatile parts of an application. If you sense that your design is being unnaturally modified in order to implement a shaky solution, raise the problem. Try to find a better solution for the basic need.</p>
<p>5.&nbsp;<strong>Do not be afraid to provide out-of-the-box solutions.</strong>&nbsp;Your most important advantage is that you see things from a different perspective. The PO can be stuck on one solution, and the user can be stuck on one face of the problem. You, with your knowledge of the code, and without the pressure of being expected to come up with the perfect solution, are in the best position to provide a new perspective: a new way to approach the problem, a new way to solve it.</p>
<p>6.&nbsp;<strong>Do not over-reuse UI code.</strong> I&nbsp;am not providing technical solutions here, in this article. On the contrary, I'm saying that for once we should trust our code when it screams that the functionality is bad, and help it stay clean by changing the requirements. That being said... The UI will change often; it's the most volatile part of the project. And I'm not talking about CSS or the overall style, which of course should be uniform and shared. I'm talking about the two screens that seem almost identical in v1 and become totally different in v2. It's usually stated like this: "We have these two screens, which should look the same, but the second one will have some small additions, like being able to edit, delete, listen for updates, nothing much". Never, and I mean it, never reuse the UI code between these two, because it will come back to haunt you, and this is what you will be talking about in the project's lessons-learned meeting.</p>
<p>7.&nbsp;<strong>Do not forget that "Thinking is always a best practice".</strong>&nbsp;All our processes, all the measurements, all the paradigms, all the patterns are meant to make our work easier, i.e. to help us increase productivity without increasing brain effort. It's easy, then, to follow the good or bad practices blindly, because everybody does it. I dare you to stop for a moment and think whether what you do every day makes sense to you, your project, your organization. If not, challenge it.</p>
<p>In my experience, the worst code decisions come from the worst business decisions. In those moments when what was done in the code just didn't feel right, a wrong feature was being implemented. Almost always in these cases, after releasing, there was another stakeholder who was under the impression that a feature should look or behave differently. And that is normal. For a good user experience, a product should be very clear in its intentions. Tweaking functionalities in order to add abilities in unorthodox places destroys this clarity, and the first place where you can see that happening is in the code. This is why it is the responsibility of every developer to challenge their tasks when those don't seem right. Remember that, at the end of the day, your work's purpose is to produce an excellent product, and not to be able to say: I was able to do what I was told to.</p>
<h6 id="featured-image-from-let-go-and-live">Featured image from <a href="http://letgoandlive.me/challenge-everything-2017/?ref=oncodedesign.com">Let Go and LIVE</a></h6>
 ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Migrate to Ghost 1.x on Azure ]]>
            </title>
            <description>
                <![CDATA[ You might remember that I am running this blog on Ghost self hosted on Azure. I wrote about how I&#39;ve migrated from Wordpress in an older post. Now, I migrated to Ghost 1.x, and in the process I configured a better dev environment for my theme and auto deployment to an App Service from GitHub ]]>
            </description>
            <link>https://oncodedesign.com/blog/migrate-to-ghost-1-x-on-azure/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76bae</guid>
            <category>
                <![CDATA[ ghost ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 03 Apr 2018 08:45:00 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/azure-appservice-ghost.png" medium="image" />
            <content:encoded>
<![CDATA[ <p>You might remember that I am running this blog on <a href="https://ghost.org/?ref=oncodedesign.com">Ghost</a>, self-hosted on Azure. I've written about how I migrated from Wordpress <a href="https://oncodedesign.com/my-wordpress-to-ghost-journey">here</a>.</p>
<p>It turned out that upgrading to newer versions of Ghost wasn't as easy as I thought. I had installed Ghost in an Azure App Service from <a href="https://github.com/felixrieseberg/Ghost-Azure?ref=oncodedesign.com#running-locally">this</a> Github repository by <a href="https://felixrieseberg.com/?ref=oncodedesign.com">Felix Rieseberg</a>, and I was relying on his <a href="https://github.com/felixrieseberg/Ghost-Updater-Azure?ref=oncodedesign.com">Ghost-Updater-Azure tool</a> for the updates. The update tool worked only once for me, taking me to Ghost v0.78. For the next versions it didn't work anymore; it simply did nothing, without any error message. Each attempt to upgrade manually to a newer version took more than a couple of hours, so I gave up each time.</p>
<p>Last year, Ghost announced a <a href="https://blog.ghost.org/1-0/?ref=oncodedesign.com">major release</a>, version 1.0, and I was still on v0.78, so I had to migrate. I started with three migration goals in mind:</p>
<ol>
<li>easy to do</li>
<li>have an easy path for future upgrades</li>
<li>host on Azure using PaaS</li>
</ol>
<p>I started from the official <a href="https://docs.ghost.org/v1.0.0/docs/migrating-to-ghost-1-0-0?ref=oncodedesign.com">Migrating to 1.0.0</a> guide, which made it clear that I would need a fresh installation into which I would copy my theme and import my content.</p>
<h2 id="ghost-cli">Ghost CLI</h2>
<p>Given my previous experiences with upgrading Ghost, I was happy to see that they now provide the <a href="https://docs.ghost.org/v1.0.0/docs/ghost-cli?ref=oncodedesign.com">Ghost-CLI</a> tool, which is supposed to install, configure, and upgrade Ghost through simple command-line commands. Unfortunately, it doesn't work well on a Windows-based Azure Web App, so I didn't succeed in using it.</p>
<h2 id="ghost-v1x-on-azure-app-service-linux">Ghost v1.x on Azure App Service Linux</h2>
<p>Trying to get to a more reliable way of upgrading in the future, I persisted with the Ghost-CLI and tried to install it on an Azure App Service running Linux, following this tutorial: <a href="https://ourwayoflyf.com/ghost-v1-0-on-app-service-linux/?ref=oncodedesign.com">https://ourwayoflyf.com/ghost-v1-0-on-app-service-linux/</a>.</p>
<p>This may be a good path for someone who is familiar with the NodeJS, MySQL, Linux stack. That's not me :( I got stuck at the Knex migration error that is also mentioned in the blog post above.</p>
<p>After the deployment, which I did with the provided template, I had Ghost v1.22, so it was a good opportunity to try an update with the <code>ghost update</code> command (part of the Ghost-CLI tool). It failed with this error:</p>
<pre><code>root@a8fbdc1b2630:/var/lib/ghost# ghost update
Checking for latest Ghost version
Downloading and updating Ghost to v1.22.0 &gt; Installing dependencies &gt; info
...
Running database migrations
An error occurred.
Message: 'Command failed: knex-migrator-migrate --init --mgpath /var/lib/ghost/current
[2018-04-02 07:36:00] ERROR

NAME: MigrationScript
MESSAGE: task.execute is not a function

level:normal

Error occurred while executing the following migration: 1-add-webhooks-table.js
MigrationScript: task.execute is not a function
    at MigrationScript.KnexMigrateError (/usr/local/lib/node_modules/ghost-cli/node_modules/knex-migrator/lib/errors.js:7:26)
    at new MigrationScript (/usr/local/lib/node_modules/ghost-cli/node_modules/knex-migrator/lib/errors.js:26:26)
    at /usr/local/lib/node_modules/ghost-cli/node_modules/knex-migrator/lib/index.js:353:19
...

TypeError: task.execute is not a function
    at /usr/local/lib/node_modules/ghost-cli/node_modules/knex-migrator/lib/index.js:308:25
    at tryCatcher (/usr/local/lib/node_modules/ghost-cli/node_modules/bluebird/js/release/util.js:16:23)
...

</code></pre>
<p>A database migration error. It might have been a bug in the Ghost-CLI, a bug in this Ghost version, or simply something wrong with the Docker image I was using. Either way, it didn't seem like a good or simple path to continue on, so I dropped it.</p>
<h2 id="deploy-from-github-to-azure-app-service">Deploy from GitHub to Azure App Service</h2>
<p>Not having success with the Ghost-CLI, I decided to deploy from GitHub to an Azure App Service using the <a href="https://github.com/solvsoftware/Ghost-Azure?ref=oncodedesign.com">Ghost-Azure repository</a> made by <a href="https://github.com/RadoslavGatev?ref=oncodedesign.com">Radoslav Gatev</a>, which configures Ghost for deployment in an Azure App Service. This worked, and all the migration steps I'll detail next follow this approach.</p>
<p>First, I made a <a href="https://github.com/florinc/onCodeDesign.com?ref=oncodedesign.com">fork</a>, so I could make the customizations needed for my blog.</p>
<h3 id="the-upgrading-plan">The Upgrading Plan</h3>
<p>For upgrading to the next versions of Ghost, my plan is to keep this fork up to date and sync it with the Azure App Service.</p>
<p>This means getting the latest version of Ghost from the upstream <a href="https://github.com/solvsoftware/Ghost-Azure?ref=oncodedesign.com">Ghost-Azure repo</a> (<a href="https://help.github.com/articles/syncing-a-fork/?ref=oncodedesign.com">here</a> is how to sync a fork). If that repo is not updated, I could get the latest Ghost version from its original <a href="https://github.com/TryGhost/Ghost?ref=oncodedesign.com">repo</a> and then make a pull request back to the Ghost-Azure repo.</p>
<p>Once the latest Ghost version is merged into my fork, the next step is to test that it works for my blog. Having the repo cloned locally, I can easily run the blog on my machine and do the checks. If I want, I can also copy the latest content (DB and images) from the server through FTP, to verify that the DB gets migrated to the latest version and that everything works as it should.</p>
<p>Then, if all is good, I push the changes to the <code>onCode</code> branch, which is the branch connected to my Azure Web App. The only thing left is to go into the Azure portal and click <em>Sync</em> under <em>Deployment Options</em>.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/azure-portal-deployment-sync.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/azure-portal-deployment-sync.png" alt="azure-portal-deployment-sync" loading="lazy"></a></p>
<p>This is the plan :) I'll let you know how it works when I do the first upgrade.</p>
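<p>The fork-sync part of the plan boils down to a few git commands. A minimal sketch, under assumptions: the remote name <code>upstream</code> and its <code>master</code> branch are my naming guesses, while <code>onCode</code> is the branch my Azure Web App actually deploys from:</p>
<pre><code># One-time setup: register the original Ghost-Azure repo as a remote
git remote add upstream https://github.com/solvsoftware/Ghost-Azure.git

# For each upgrade: bring the latest Ghost release into the fork
git fetch upstream
git checkout onCode
git merge upstream/master

# Test locally, then publish; Azure picks it up on the next Sync
git push origin onCode
</code></pre>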
<h3 id="the-deployment">The Deployment</h3>
<p>I did the initial deployment using the Azure deployment template from the repository, following <a href="https://www.gatevnotes.com/ghost-on-azure-app-service/amp/?ref=oncodedesign.com">Radoslav Gatev's</a> blog post.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/Azure-deployment-template.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/Azure-deployment-template.png" alt="Azure-deployment-template" loading="lazy"></a></p>
<p>It did create all the resources, but it didn't set the application settings, which I had to configure afterwards in the portal.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/ghost-application-settings.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/ghost-application-settings.png" alt="ghost-application-settings" loading="lazy"></a></p>
<p>This wasn't a problem, except for <code>WEBSITE_NODE_DEFAULT_VERSION</code>. After the deployment it was the only setting present, and its value was <code>6.9.1</code>. This setting tells the web app which Node version to use.</p>
<p>When I went into the <em><a href="https://github.com/projectkudu/kudu/wiki/App-Service-Editor?ref=oncodedesign.com">App Service Editor</a></em> (which is very cool, by the way) and executed <code>node db.js</code> in the console, everything went well and it created the database. However, when I executed <code>npm rebuild</code>, it ended with some errors:</p>
<pre><code>D:\home\site\wwwroot\node_modules\dtrace-provider&gt;if not defined npm_config_node_gyp (node "D:\Program Files (x86)\npm\3.10.8\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild )  else (node "" rebuild ) 
Building the projects in this solution one at a time. To enable parallel build, please add the "/m" switch.
D:\home\site\wwwroot\node_modules\dtrace-provider\build\DTraceProviderStub.vcxproj(20,3): error MSB4019: The imported project "D:\Microsoft.Cpp.Default.props" was not found. Confirm that the path in the &lt;Import&gt; declaration is correct, and that the file exists on disk.
gyp ERR! build error 
gyp ERR! stack Error: `D:\Program Files (x86)\MSBuild\14.0\bin\msbuild.exe` failed with exit code: 1
gyp ERR! stack     at ChildProcess.onExit (D:\Program Files (x86)\npm\3.10.8\node_modules\npm\node_modules\node-gyp\lib\build.js:276:23)
gyp ERR! stack     at emitTwo (events.js:106:13)
gyp ERR! stack     at ChildProcess.emit (events.js:191:7)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:215:12)
gyp ERR! System Windows_NT 10.0.14393
gyp ERR! command "D:\\Program Files (x86)\\nodejs\\6.9.1\\node.exe" "D:\\Program Files (x86)\\npm\\3.10.8\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
gyp ERR! cwd D:\home\site\wwwroot\node_modules\dtrace-provider
gyp ERR! node -v v6.9.1
gyp ERR! node-gyp -v v3.4.0
gyp ERR! not ok 
</code></pre>
<p>Also, the web site wasn't starting... Ghost's console output showed <code>Error: Cannot find module</code> for <code>sqlite3</code>:</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/ghost-sqlite3-error.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/ghost-sqlite3-error.png" alt="ghost-sqlite3-error" loading="lazy"></a></p>
<p>After some googling, I realized that the problem was the Node version, so I set <code>WEBSITE_NODE_DEFAULT_VERSION</code> to <code>8.9.0</code>, reran <code>npm rebuild</code>, and then everything was working. I had a fresh installation of Ghost in an Azure App Service.</p>
<h3 id="migrate-the-content-from-ghost-v078">Migrate the Content from Ghost v0.78</h3>
<p>Following the <a href="https://docs.ghost.org/docs/migrating-to-ghost-1-0-0?ref=oncodedesign.com#section-3-use-the-ghost-1-0-0-importer">migration guide</a>, this should have been an easy two-step process:</p>
<ol>
<li>Export the content from the old site</li>
<li>Import the content in the new one</li>
</ol>
<p>My problem was that the export from the old site didn't work. It didn't give any error on the <em>Settings</em> page, but when I ran the site locally with verbose logging, I saw the error in the console:</p>
<pre><code>ERROR: Cannot read property 'client' of undefined 
</code></pre>
<p>There was a bug somewhere in Ghost: either in the DB migration from a previous version to 0.78, or in the export functionality.</p>
<p>The trick I used to get my content out was the following:</p>
<ul>
<li>install a new Ghost site with version 0.11, locally</li>
<li>copy the <code>/content/data/ghost.db</code> from the old site to this v0.11 installation</li>
<li>run the v0.11 site
<ul>
<li>when it started it migrated the database to its version</li>
</ul>
</li>
<li>export the content from the <em>Settings\Labs</em> screen from the v0.11 site</li>
</ul>
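<p>Sketched as commands, the trick looks roughly like this. This is an illustration, not a verified script: the exact 0.11.x release URL and the local paths are assumptions, and the final export still happens in the admin UI:</p>
<pre><code># Get a local Ghost 0.11.x installation (release URL is illustrative)
curl -L -o ghost-0.11.zip https://github.com/TryGhost/Ghost/releases/download/0.11.13/Ghost-0.11.13.zip
unzip ghost-0.11.zip -d ghost-0.11 &amp;&amp; cd ghost-0.11
npm install --production

# Drop in the old site's database, downloaded over FTP
cp ~/Downloads/ghost.db content/data/ghost.db

# Starting Ghost migrates the DB to the 0.11 schema
npm start
# Then export the content from Settings &gt; Labs in the admin UI
</code></pre>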
<h3 id="add-the-theme-as-a-git-submodule">Add the Theme as a Git Submodule</h3>
<p>The next step was to add the onCodeDesign theme to my new Ghost 1.19 installation. There are usually two ways to do this:</p>
<ul>
<li>upload the theme using the <em>Settings\Design</em> screen as shown in the <a href="https://docs.ghost.org/docs/migrating-to-ghost-1-0-0?ref=oncodedesign.com#section-5-upload-your-theme">migration guide</a></li>
<li>commit the theme to git, in the <code>content\themes</code> folder, and redeploy from GitHub</li>
</ul>
<p>I didn't choose either of these. The reason is that I already keep my theme in a git repository on visualstudio.com in <a href="https://www.visualstudio.com/team-services/?ref=oncodedesign.com">VSTS</a>, and I don't want to lose all the history of how it was developed and modified. Neither of the above was a good option for easily deploying future theme changes while also keeping the history in git.</p>
<p>My solution was to add the theme as a <a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules?ref=oncodedesign.com">git submodule</a> to the GitHub repository of my blog. The submodule went into the <code>content\themes\oncode</code> folder and pointed to the same remote in VSTS. This not only preserved the history, but also let me keep the same repo for the theme, which saved me some work.</p>
<p>The difficulty here was setting up authentication to the theme repository on VSTS, which is not a public repo. When Azure fetches the changes from the GitHub repo, it also needs to update the submodule, so it needs access to it. One solution would have been to use an SSH git URL for the submodule. That wasn't an option in my case because I use <a href="https://git-lfs.github.com/?ref=oncodedesign.com">git-lfs</a> (the theme repo has many images) and VSTS does not support git-lfs over SSH. Therefore, I had to configure a <a href="https://docs.microsoft.com/en-us/vsts/accounts/use-personal-access-tokens-to-authenticate?view=vsts&ref=oncodedesign.com">Personal Access Token</a> for a new user in VSTS and keep an HTTPS git URL.</p>
<p>The biggest advantage of this setup is that I can now make and test any changes to the theme in my local clone of the repo. When I'm done, I just push the changes to the <code>onCode</code> branch, go to the Azure portal, trigger a <em>Sync</em>, and the changes are live.</p>
<h3 id="upgrading-the-theme">Upgrading the Theme</h3>
<p>The next step was to upgrade the theme to work with Ghost 1.x. This was a straightforward process. When I activated the theme, it reported some errors that I had to fix.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/Ghost-theme-errors.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/03/Ghost-theme-errors.png" alt="Ghost-theme-errors" loading="lazy"></a></p>
<p>I just followed the guide <a href="https://themes.ghost.org/docs/migrate-to-ghost-1-0-0?ref=oncodedesign.com">here</a>, and with the environment running locally everything went fast and smoothly.</p>
<h3 id="copy-the-images">Copy the Images</h3>
<p>The next step was to copy the images from the old blog to the new one. This, again, went smoothly. I downloaded the images through FTP from the old site, and then uploaded them through FTP to the new one. The folder structure does not differ between v0.x and v1.x, so no issues here.</p>
<p>Also, most of my images are on <a href="https://cloudinary.com/?ref=oncodedesign.com">Cloudinary</a>, so I didn't have too many to copy from one place to the other.</p>
<h3 id="final-settings">Final Settings</h3>
<p>The last step was to review the settings on the new installation: title &amp; description, publication logo, publication icon (much nicer than in v0.x), publication cover, etc.</p>
<p>To configure email, you could put the settings in the <code>config.production.json</code> as shown in the <a href="https://docs.ghost.org/docs/mail-config?ref=oncodedesign.com">docs</a>. However, I prefer to use the App Service Application settings, so that I don't put my SendGrid credentials in a public repo on GitHub.</p>
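<p>For reference, this is roughly what the <code>mail</code> block in <code>config.production.json</code> looks like per the Ghost docs (the credentials below are placeholders). As far as I know, Ghost also reads nested config keys from environment variables using <code>__</code> as a separator (e.g. <code>mail__options__auth__user</code>), which is what makes the Application settings approach work:</p>
<pre><code class="language-json">{
  "mail": {
    "transport": "SMTP",
    "options": {
      "service": "Sendgrid",
      "auth": {
        "user": "sendgrid-username",
        "pass": "sendgrid-password"
      }
    }
  }
}
</code></pre>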
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/ghost-mail-settings.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/ghost-mail-settings.png" alt="ghost-mail-settings" loading="lazy"></a></p>
<h3 id="redirects">Redirects</h3>
<p>I do my redirects in the <code>web.config</code>, as I detailed in my post about migrating from WordPress <a href="https://oncodedesign.com/my-wordpress-to-ghost-journey/#redirectfromwordpress">here</a>.</p>
<p>In the previous installation I didn't have a nice way to deal with these. I set them up using the <em>App Service Editor</em> and that was it. Luckily, they didn't need changing. Now, with my new setup of deploying from GitHub, I just copied them into the new <code>web.config</code>, which is part of my repo, and redeployed.</p>
<p>Ghost v1.x has support for uploading the redirects in a <code>json</code> file (the <em>Labs</em> screen), but I prefer the above approach because it keeps them in the git history.</p>
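<p>For illustration, a redirect rule in <code>web.config</code> looks like the fragment below (the URLs are placeholders, not my actual redirects); it relies on the IIS URL Rewrite module, which is available on Azure App Service:</p>
<pre><code class="language-xml">&lt;system.webServer&gt;
  &lt;rewrite&gt;
    &lt;rules&gt;
      &lt;rule name="RedirectOldPost" stopProcessing="true"&gt;
        &lt;match url="^old-post-slug/?$" /&gt;
        &lt;action type="Redirect" url="/new-post-slug/" redirectType="Permanent" /&gt;
      &lt;/rule&gt;
    &lt;/rules&gt;
  &lt;/rewrite&gt;
&lt;/system.webServer&gt;
</code></pre>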
<h3 id="switch-the-dns-and-setup-the-ssl">Switch the DNS and Setup the SSL</h3>
<p>I run my site through <a href="https://www.cloudflare.com/?ref=oncodedesign.com">Cloudflare</a> for all the benefits in security (SSL, DDoS, etc) and performance.</p>
<p>The first thing was to go into Cloudflare and change the DNS settings to point to the new Azure App Service. Then, I removed the domains from the old App Service and added them to the new one.</p>
<p>The <code>oncodedesign.com</code> domain got validated and was responding very fast, almost instantaneously. I added the SSL binding using the Cloudflare certificate I had already set up in Azure, and all was good.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/azure-custom-domains.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/azure-custom-domains.png" alt="azure-custom-domains" loading="lazy"></a></p>
<p>The <code>www.oncodedesign.com</code> domain didn't get validated, even though it was changed in the DNS. I waited a day, thinking I should be patient with the DNS propagation, but it still didn't work. After some reading, I switched off the <code>HTTP proxy</code>, leaving <code>DNS only</code> for the <code>www</code> record.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/cloudflare-dns-only.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/cloudflare-dns-only.png" alt="cloudflare-dns-only" loading="lazy"></a></p>
<p>This did the trick. The domain was validated by Azure, and I could set up the SSL binding with the same certificate. Afterwards, I switched it back to <code>DNS and HTTP proxy</code> and everything went well. Probably something was cached, and the new DNS record didn't push through.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/cloudflare-dns-and-cdn.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/04/cloudflare-dns-and-cdn.png" alt="cloudflare-dns-and-cdn" loading="lazy"></a></p>
<h2 id="summary">Summary</h2>
<p>To summarize:</p>
<ul>
<li>I've migrated from Ghost v0.78 to v1.19, keeping the hosting in Azure</li>
<li>I wanted an easy way to do future updates, so I looked into the Ghost CLI, but it doesn't play nice with Azure</li>
<li>I ended up using an automated deployment from GitHub on Azure App Service</li>
<li>I keep my theme development in its own git repo on VSTS, which is a git submodule in the main one</li>
<li>I ended up with a nice environment that allows me to develop &amp; test in my local clone and then easily push to Azure</li>
<li>From now on, upgrades to new versions should be easier, given that I can keep my repo in sync with the Ghost or Ghost-Azure one.</li>
</ul>
<p>If you find yourself doing such a migration and get stuck, don't hesitate to ask me; I might have gone through the same issue.</p>
 ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Decide between In-Process or Inter-Process Communication at Deploy Time - Part 3 ]]>
            </title>
            <description>
                <![CDATA[ This is the last post from the series on how we could implement a design that allows to decide only at deploy time how two services communicate: in process if they are deployed on the same server or inter-process if they are on different machines. The first post shows where ]]>
            </description>
            <link>https://oncodedesign.com/blog/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-3/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba6</guid>
            <category>
                <![CDATA[ AppBoot ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Wed, 07 Mar 2018 09:11:53 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1520361927/InterProc-Communication-part3_eiljta.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>This is the last post in the series on how we could implement a design that allows deciding only at deploy time how two services communicate: in-process if they are deployed on the same server, or inter-process if they are on different machines. The <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time/">first post</a> shows where such a design is useful and what benefits it brings. The <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/">second one</a> takes an example from the world of financial systems and outlines the key design ideas on how to implement it. In this post we'll focus only on code. We'll take the example presented in the previous post and code it from scratch in C#.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/services-communication.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/services-communication.png" alt="financial services" loading="lazy"></a></p>
<p>Dependency Injection is the base technique for implementing all the key design ideas detailed in the <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/">previous post</a>. In this demo, we'll use <a href="https://github.com/iquarc/appboot?ref=oncodedesign.com">iQuarc.AppBoot</a>, which adds, on top of a classic DI Container, support for implementing simple conventions for <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#typediscovery">type discovery</a> and for the <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#proxies">proxies</a>. (More on the <em>iQuarc AppBoot Library</em> <a href="https://oncodedesign.com/app-boot-library/">here</a>)</p>
<p>The entire source code that we are going to walk through is available on GitHub, <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/InterProcessCommunication/TradingApp?ref=oncodedesign.com">here</a>, as part of my <a href="https://oncodedesign.com/training-code-design/">Code Design training</a>. Each implementation step is marked with a tag in the git repo: <code>ipc-step0</code> for the starting point, and so on up to <code>ipc-step10b</code>. You can see how the implementation evolves step by step.</p>
<h2 id="runningthedemo">Running the Demo</h2>
<p>If we check out the final step (<code>ipc-step10b</code>), we have what we wanted in the <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/">previous post</a>: the ability to change the communication between the <code>OrdersService</code> and the <code>QuotationService</code> to be in-process without changing code or recompiling.</p>
<p>To get that running, after you build the <code>TradingApp.sln</code> you can find in the <code>bin\.Deploy</code> folder different deployment configurations (there are some post-build events that copy the binaries into these folders). If you run <code>startAll.bat</code> you will start three console processes, one for each of the three services from the example. They are all self-hosted, and each of them listens for REST calls on a different port (I've mentioned previously that we're implementing the example with REST services). Now, we can open <a href="https://www.getpostman.com/?ref=oncodedesign.com">Postman</a> (or another similar tool) and fire some calls at them:</p>
<pre><code>http://localhost:9002/api/Quotation/GetByExchange?exchange=NYSE&amp;instrument&amp;from=2017-01-01&amp;to=2017-01-01
</code></pre>
<p>This fetches from the <code>QuotationService</code> the quotations for NYSE within a time window. Notice that each console logs the calls it received and the remote that made them, so we can see what happens in our demo app.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520259973/print-screen-1.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520259973/print-screen-1.png" alt="Quotation Service" loading="lazy"></a></p>
<p>If we make a POST to the <code>OrdersService</code>, to place an order:</p>
<pre><code>http://localhost:9003/api/Orders/PlaceSellLimitOrder?securityCode=AAPL.S.NASDAQ&amp;sellingPrice=11.45&amp;validUntil=2017-01-01
</code></pre>
<p>we see the call logged in the console of the process that hosts the <em>OrdersService</em> (the remote is a Postman-Token, outlined in yellow below, as for the previous call)</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520272606/print-screen-2.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520272606/print-screen-2.png" alt="Orders Service Console" loading="lazy"></a></p>
<p>Furthermore, we can see another REST call, made by the <code>OrdersService</code> to the <code>QuotationService</code>. We see it logged in the console of the process that loaded the <em>Quotation Module</em> (the one at the bottom below). The remote is a Service-Proxy with the GUID of the proxies loaded by the process that hosts the <em>Sales Module</em>, as outlined in red below. This means that the caller of the <em>Quotation Module</em> host was the <em>Sales Module</em> host.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520272844/print-screen-3.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520272844/print-screen-3.png" alt="Inter-Process Communication" loading="lazy"></a></p>
<p>So we had an inter-process communication: the <code>SalesService</code> made a REST call to the <code>QuotationService</code> when someone else called it.</p>
<p>Now, let's change the communication. We first close the process with the <em>Sales Module</em>. Then we copy into its output folder (<code>bin\Sales\</code>) the <code>QuotationServices.dll</code>. This is the binary that has the implementation of the <code>QuotationService</code>. Essentially, we've changed the deployment configuration: the same output folder now contains the binaries with the implementations of both the <code>SalesService</code> and the <code>QuotationService</code>. Now, we just restart the host by executing <code>bin\Sales\ConsoleHost.exe</code>. We see that it loaded both the <em>Quotation Module</em> and the <em>Sales Module</em>.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520274502/print-screen-4.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520274502/print-screen-4.png" alt="Qutoation and Sales Modules Console" loading="lazy"></a></p>
<p>The other two console hosts remain running.</p>
<p>Now, if we repeat the same POST from Postman to the <code>SalesService</code></p>
<pre><code>http://localhost:9003/api/Orders/PlaceSellLimitOrder?securityCode=AAPL.S.NASDAQ&amp;sellingPrice=11.45&amp;validUntil=2017-01-01
</code></pre>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520274502/print-screen-5.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520274502/print-screen-5.png" alt="In-Process Communication Console" loading="lazy"></a></p>
<p>we see that no other inter-process call is made. Nothing gets logged in the other consoles. The <code>SalesService</code> did call the <code>QuotationService</code>, since it depends on it, but the call was just a function call inside the process that hosts them both.</p>
<p>So, just by changing the deployment (copying the <code>QuotationsService.dll</code> into the same <code>\bin</code> folder as the <code>SalesService.dll</code> and restarting the host process so that it loads both), we have switched from inter-process communication to in-process communication.</p>
<p>We started with a deployment like below,</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517836658/distributed-deployment.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517836658/distributed-deployment.png" alt="distributed deployment" loading="lazy"></a></p>
<p>where each service was hosted in its own console host and an external call to the <code>OrdersService</code> triggered an inter-process communication with the <code>QuotationService</code>, and we ended with a deployment like below</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517836909/semi-distributed-deployment.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517836909/semi-distributed-deployment.png" alt="partly distributed deployment" loading="lazy"></a></p>
<p>where the <code>OrdersService</code> and the <code>QuotationService</code> are hosted by the same process and an external call to the <code>OrdersService</code> can be satisfied with an in-process communication with the <code>QuotationService</code>.</p>
<h2 id="writingthecode">Writing the Code</h2>
<p>If we checkout the <code>ipc-step0</code> we have the solution structure to start with. At this step all the projects are empty.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520277821/empty-solution_ovjwwo.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_350/empty-solution_ovjwwo.png" alt="Empty Solution" loading="lazy"></a></p>
<p>We have a console project (<code>ConsoleHost</code>), which will be our <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#genericprocesshost">generic host</a>. Then we have class library projects where we'll implement the services, and another one for the contracts.</p>
<p>The solution structure is a key element of our design. I have detailed in the previous post, <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#solutionstructure">here</a>, the basic rules we want to enforce with it.</p>
<h4 id="basiccodeforcontractsandservices">Basic Code for Contracts and Services</h4>
<p>Next, if we checkout <code>ipc-step1</code> we have the contracts of the <code>QuotationService</code>, an interface for the service contract and a DTO for the data contract:</p>
<pre><code class="language-language-csharp">public interface IQuotationService
{
    Quotation[] GetQuotations(string exchange, string instrument, DateTime from, DateTime to);
    Quotation[] GetQuotations(string securityCode, DateTime from, DateTime to);
}

public class Quotation
{
    public DateTime Timestamp { get; set; }
    public decimal BidPrice { get; set; }
    public decimal AskPrice { get; set; }
    public string SecurityCode { get; set; }
}
</code></pre>
<p>Next, we write a simple implementation for it, as a class in the <code>Quotation.Services</code> project (checkout <code>ipc-step2</code>):</p>
<pre><code class="language-language-csharp">class QuotationService : IQuotationService
{
    private readonly Quotation[] array = 
        {
            new Quotation {AskPrice = 10.50m, BidPrice = 10.55m, SecurityCode = &quot;ING.S.NYSE&quot;},
            new Quotation {AskPrice = 12.50m, BidPrice = 12.55m, SecurityCode = &quot;ING.B.NYSE&quot;},
            ...
        };
    
    public Quotation[] GetQuotations(string exchange, string instrument, DateTime @from, DateTime to)
    {
        var result = array.Where(q =&gt; q.SecurityCode.Contains(exchange));

        if (!string.IsNullOrWhiteSpace(instrument))
            result = result.Where(q =&gt; q.SecurityCode.Contains(instrument));

        return result.ToArray();
    }

    public Quotation[] GetQuotations(string securityCode, DateTime @from, DateTime to)
    {
        return array.Where(q =&gt; q.SecurityCode == securityCode).ToArray();
    }
}
</code></pre>
<p>Similarly, we add the contract and a simple implementation for the <code>OrderingService</code> in the <code>Sales.Services</code> project (checkout <code>ipc-step3</code>). Notice here the dependency on the <code>IQuotationService</code>:</p>
<pre><code class="language-language-csharp">public class OrdersService : IOrdersService
{
    private readonly IQuotationService quotationService;
    public OrdersService(IQuotationService quotationService)
    {
        this.quotationService = quotationService;
    }

    public void PlaceSellLimitOrder(string securityCode, decimal sellingPrice, DateTime validUntil)
    {
        var todayQuotations = quotationService.GetQuotations(securityCode, DateTime.Today.AddDays(-1), DateTime.Today);
        foreach (var quotation in todayQuotations)
        {
            if (quotation.AskPrice &gt;= sellingPrice)
                limitOrders.Add(new LimitOrder
                {
                    SecurityCode = securityCode,
                    PlacedAt = DateTime.UtcNow,
                    Type = OrderType.Sell,
                    Price = sellingPrice,
                    ValidUntil = validUntil
                });
        }
    }
...
}
</code></pre>
<p>Similarly, at <code>ipc-step4</code> we have the <code>PortfolioService</code> implementation in the <code>Portfolio.Services</code> project. It also has a dependency on the <code>IQuotationService</code>.</p>
<p>As pointed out in the previous post <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#dependoncontracts">here</a>, it is a critical requirement of this design that the dependencies among the services are on interfaces. So, the <code>OrdersService</code> class depends on the <code>IQuotationService</code> interface and not on the <code>QuotationService</code> class. The same goes for the <code>PortfolioService</code>.</p>
<h4 id="addiquarcappboot">Add iQuarc.AppBoot</h4>
<p>At this point we have service contracts and implementations for them, but they are not linked in any way. If we look at the <em>Project Dependencies Diagram</em> we see that there are no references or strong dependencies among them.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520322565/project-dependency-diagram_fjvxp8.jpg?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520322565/project-dependency-diagram_fjvxp8.jpg" alt="Project Dependency Diagram" loading="lazy"></a></p>
<p>In the previous post <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#solutionstructure">here</a>, we've detailed why this is important.</p>
<p>iQuarc.AppBoot will bring things together. We'll use it for <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#typediscovery">type discovery</a>, to find the deployed service implementations, and then to satisfy the dependencies by configuring the underlying DI Container.</p>
<p>So, the next step is to install and configure it. We can install it as a NuGet package in all the projects under the <code>\Modules</code> folder.</p>
<pre><code>PM&gt; Install-Package iQuarc.AppBoot -Version 2.0.3-disposables002
</code></pre>
<p>The <code>iQuarc.AppBoot</code> package doesn't depend on any DI Container, and it is not a container by itself. It uses a container through an abstraction. It is good to also keep our modules (the place where we implement the business logic of our system) independent of a specific DI framework.</p>
<p>However, the host project needs to use a specific container, so for it only, we will install the <code>iQuarc.AppBoot.Unity</code>, which adapts the <a href="https://github.com/unitycontainer/unity?ref=oncodedesign.com">Unity Dependency Injection Container</a> to AppBoot.</p>
<pre><code>PM&gt; Install-Package iQuarc.AppBoot.Unity -Version 2.0.3
</code></pre>
<p>The <code>ConsoleHost</code> is our generic server process. It should find all the deployed services, publish them so they can be called from other processes, and configure the dependencies. In a real system this would be a web app in IIS, a Cloud Service, a Windows Service, etc. For this demo it is a simple console app.</p>
<p>To use AppBoot to find the deployed services, first we need to decorate all the service implementations with the <code>ServiceAttribute</code>.</p>
<pre><code class="language-language-csharp">[Service(typeof(IQuotationService))]
class QuotationService : IQuotationService
{
...
}
</code></pre>
<p>With this we declare that the <code>QuotationService</code> class is the implementation of the <code>IQuotationService</code> interface. The same goes for the other services we have implemented in the <code>\Modules</code> folder.</p>
<pre><code class="language-language-csharp">[Service(typeof(IPortfolioService))]
public class PortfolioService : IPortfolioService
{
...
}

[Service(typeof(IOrdersService))]
public class OrderingService : IOrdersService
{
...
}
</code></pre>
<p>Now, as a first step, when the <code>ConsoleHost</code> starts, in its <code>main()</code> it calls <code>AppBoot.Bootstrapper.Run()</code>, which scans through reflection all the binaries in the output folder and looks for the types decorated with the <code>ServiceAttribute</code>. The ones that are found are registered in the DI Container as interface - implementation pairs. This configuration is in the helper class <code>AppBootBootstrapper</code>:</p>
<pre><code class="language-language-csharp">public static class AppBootBootstrapper
{
    public static Bootstrapper Run()
    {
        var assemblies = GetApplicationAssemblies().ToArray();  //returns all the assemblies that should be scanned (filters out Microsoft.*, System.* and others)
        Bootstrapper bootstrapper = new Bootstrapper(assemblies);
        bootstrapper.ConfigureWithUnity(); // says to AppBoot to use Unity Container

        // configures the registration convention, by saying to use the ServiceAttribute convention
        bootstrapper.AddRegistrationBehavior(new ServiceRegistrationBehavior()); 

        bootstrapper.Run();
        return bootstrapper;
    }
...
}
</code></pre>
<p>To get this running, we need to make sure that at build time all the binaries are put in the same folder as the <code>ConsoleHost</code>. This helps with debugging, and it is the place where we can come later to define different deployment configurations. The easiest way is to set the build output path for all the projects. Here is how it should look for a <code>*.Services</code> project:</p>
<pre><code>Output path =  ..\..\..\bin\Debug\
</code></pre>
<p>So at the root level, we'll have a <code>\bin</code> folder where all the projects get built.</p>
<h4 id="selfhostaspnetwebapi">Self-Host ASP.NET Web API</h4>
<p>For our demo purposes this is not that important. We just need to make the services available for REST calls, so a simple self-host of the Web API in the console app will do.</p>
<pre><code class="language-language-csharp">static void Main(string[] args)
{
    string baseAddress = &quot;http://localhost:9000/&quot;;
    using (WebApp.Start&lt;Startup&gt;(url: baseAddress))
    {
        Console.WriteLine($&quot;Server runs at: {baseAddress}&quot;);
        Console.WriteLine(&quot;Press ESC to stop the server\n&quot;);

        ConsoleKeyInfo keyInfo;
        do
        {
            keyInfo = Console.ReadKey();
        }
        while (keyInfo.Key != ConsoleKey.Escape);
    }
}

public class Startup
{
    // This code configures Web API. The Startup class is specified as a type
    // parameter in the WebApp.Start method.
    public void Configuration(IAppBuilder appBuilder)
    {
        // Configure Web API for self-host. 
        HttpConfiguration config = new HttpConfiguration();
        config.Routes.MapHttpRoute(name: &quot;DefaultApi&quot;, routeTemplate: &quot;api/{controller}/{action}&quot;);

        // kicks the AppBoot Bootstrapper
        AppBootBootstrapper.Run().ConfigureWebApi(config);

        appBuilder.UseWebApi(config);
    }
}
</code></pre>
<p>We see that at startup we kick off <code>AppBootBootstrapper.Run()</code> and then call the <code>ConfigureWebApi()</code> helper. This configures the WebApi to use the AppBoot DI Container to inject the dependencies into the controllers. Our controllers are trivial:</p>
<pre><code class="language-language-csharp">public class OrdersController : ApiController
{
    private readonly IOrdersService ordersService;
    public OrdersController(IOrdersService ordersService)
    {
        this.ordersService = ordersService;
    }

    public IHttpActionResult PlaceSellLimitOrder(string securityCode, decimal sellingPrice, DateTime validUntil)
    {
        ordersService.PlaceSellLimitOrder(securityCode, sellingPrice, validUntil);
        return Ok();
    }
...
</code></pre>
<p>They receive through constructor injection the service they should expose over REST, and they just forward the calls to it.</p>
<p>To make our <code>ConsoleHost</code> truly generic, we should make a generic <code>ApiController</code> which gets the service through DI and forwards the calls to it, and then customize the WebApi, through some conventions, to use our generic controller. It could definitely be done, but it isn't worth it for our demo, so we just create three dummy controllers, one for each of our services.</p>
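<p>To make the idea concrete, here is a hypothetical sketch (not part of the repo) of the reflection core such a generic controller could use: it takes an action name and the query-string parameters and forwards them to the injected service:</p>
<pre><code class="language-language-csharp">using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

// Hypothetical helper: resolves the action by name on the service and
// converts each query-string value to the parameter type it targets.
static class GenericDispatcher
{
    public static object Invoke(object service, string action, IDictionary&lt;string, string&gt; query)
    {
        var method = service.GetType().GetMethod(action);
        object[] args = method.GetParameters()
            .Select(p =&gt; Convert.ChangeType(query[p.Name], p.ParameterType, CultureInfo.InvariantCulture))
            .ToArray();
        return method.Invoke(service, args);
    }
}

// A stand-in service, just to exercise the dispatcher
class QuotationService
{
    public string GetByExchange(string exchange)
    {
        return "quotations for " + exchange;
    }
}

class Program
{
    static void Main()
    {
        object result = GenericDispatcher.Invoke(
            new QuotationService(),
            "GetByExchange",
            new Dictionary&lt;string, string&gt; { ["exchange"] = "NYSE" });
        Console.WriteLine(result);
    }
}
</code></pre>
<p>A real implementation would also need route conventions to map a URL to the right service, which is exactly the customization that isn't worth doing here.</p>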
<h4 id="afatserver">A Fat Server</h4>
<p>Now if we checkout <code>ipc-step7b</code> we have all of the above implemented. If we execute <code>ConsoleHost.exe</code> we see that it loaded all three modules, because they were all copied into this <code>\bin</code> folder. If we make the same POST call from Postman to the <code>OrdersService</code>, it works, and it does an in-process communication with the <code>QuotationService</code>.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520329819/print-screen-6_bm8xwb.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520329819/print-screen-6_bm8xwb.png" alt="Fat Server Console" loading="lazy"></a></p>
<p>Everything is hosted in one process, like a fat server, which does not scale.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517837066/not-distributed-deployment.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517837066/not-distributed-deployment.png" alt="not distributed deployment" loading="lazy"></a></p>
<p>If we delete the <code>Quotations.Services.dll</code> and run <code>ConsoleHost.exe</code> again, we see that it only loaded the two remaining modules: <em>Portfolio</em> and <em>Sales</em>. Now if we make the same POST call from Postman, it returns a <code>500 Internal Server Error</code>, because the <code>OrdersService</code> did not find an implementation for its dependency on the <code>IQuotationService</code>.</p>
<pre><code>http://localhost:9003/api/Orders/PlaceSellLimitOrder?securityCode=AAPL.S.NASDAQ&amp;sellingPrice=11.45&amp;validUntil=2017-01-01
</code></pre>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1520330631/print-screen-7_ckipie.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1520330631/print-screen-7_ckipie.png" alt="QuotationService Missing Console" loading="lazy"></a></p>
<h4 id="implementtheproxies">Implement the Proxies</h4>
<p>What we wanted above is that, when a dependency is not found in the same process, a REST call is made to another process where that dependency is hosted. To make this happen, we need to create other implementations of the interfaces describing our contracts, which know how to forward the call over REST. These are the <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/#proxies">Proxies</a>.</p>
<p>The proxies are part of the infrastructure. They are just plumbing code, with no business logic, and we want them deployed on all our <code>ConsoleHost</code> instances, to be the fallback when a real service implementation is not deployed. Therefore, we put them in the <code>\Infrastructure</code> folder.</p>
<p>Their code is not that important for our demo:</p>
<pre><code class="language-language-csharp">class QuotationServiceProxy : IQuotationService
{
    public Quotation[] GetQuotations(string exchange, string instrument, DateTime @from, DateTime to)
    {
        using (HttpClient client = HttpHelpers.CreateNewClient&lt;IQuotationService&gt;())
        {
            string path = HttpHelpers.GetServicePath&lt;IQuotationService&gt;(&quot;GetByExchange&quot;);
            string uri = $&quot;{path}?exchange={exchange}&amp;instrument={instrument}&amp;from={from}&amp;to={to}&quot;;
            HttpResponseMessage response = client.GetAsync(uri).Result;
            if (response.IsSuccessStatusCode)
            {
                Quotation[] value = response.Content.ReadAsAsync&lt;Quotation[]&gt;().Result;
                return value;
            }

            throw new HttpException((int)response.StatusCode, response.Content.ReadAsStringAsync().Result);
        }
    }
...
}
</code></pre>
<p>They use an <code>HttpClient</code>, build the URL to the service they want to call based on some conventions, and make the REST call. The response is then returned to the caller. We can check out <code>ipc-step8b</code> to get the proxies for all our services.</p>
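<p>The conventions behind these helpers can be as simple as deriving the endpoint path from the contract name. Here is a minimal sketch of such a convention; the naming rule and the hardcoded base address are assumptions for illustration, not the actual demo code:</p>
<pre><code class="language-language-csharp">static class HttpHelpersSketch
{
    // Assumption: the base address per contract would normally come from
    // configuration; it is hardcoded here to keep the sketch short.
    public static string GetServicePath&lt;TContract&gt;(string action)
    {
        // e.g. IQuotationService -&gt; &quot;Quotations&quot;
        string name = typeof(TContract).Name.TrimStart('I').Replace(&quot;Service&quot;, &quot;s&quot;);
        return $&quot;http://localhost:9001/api/{name}/{action}&quot;;
    }
}
</code></pre>
<p>With this naming rule, <code>GetServicePath&lt;IQuotationService&gt;(&quot;GetByExchange&quot;)</code> yields <code>http://localhost:9001/api/Quotations/GetByExchange</code>.</p>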
<h4 id="implementtheserviceproxyattribute">Implement the <code>ServiceProxyAttribute</code></h4>
<p>Now, we have two implementations for each of our contracts: the proxy and the real service implementation. The proxies are not yet used by the AppBoot. If we were to also decorate them with the <code>ServiceAttribute</code>, as we did for the service implementations, AppBoot would register both implementations for the same interface, and the last one found would overwrite the first. This is not deterministic, so it is not good. When a service is not deployed, as we've seen <a href="#afatserver">above</a>, sometimes things will work and sometimes not, depending on which type AppBoot found last while scanning the deployed assemblies.</p>
<p>To fix this, we need to extend the conventions AppBoot uses. The <code>ServiceAttribute</code> we've used so far is just a convention that says: register into the DI container the interface-implementation pairs decorated with this attribute. We can create a new attribute, say <code>ServiceProxyAttribute</code>, and register a new convention with AppBoot.</p>
<pre><code class="language-language-csharp">[AttributeUsage(AttributeTargets.Class)]
public sealed class ServiceProxyAttribute : Attribute
{
    public ServiceProxyAttribute(Type exportType)
    {
        ExportType = exportType;
    }
    public Type ExportType { get; private set; }
}

public sealed class ServiceProxyRegistrationBehavior : IRegistrationBehavior
{
    public IEnumerable&lt;ServiceInfo&gt; GetServicesFrom(Type type)
    {
        IEnumerable&lt;ServiceProxyAttribute&gt; attributes = type.GetAttributes&lt;ServiceProxyAttribute&gt;(false);
        return attributes.Select(a =&gt; new ServiceInfo(a.ExportType, type, string.Empty, Lifetime.AlwaysNew));
    }
}
</code></pre>
<p>The convention is implemented as an <code>IRegistrationBehavior</code>, which simply returns what should be registered into DI for a type found during the assembly scan. Check out <code>ipc-step9a</code> to get these implemented and to have all the proxies decorated with the <code>ServiceProxyAttribute</code>.</p>
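<p>With the new attribute in place, each proxy declares the contract it stands in for. A sketch of how the decoration would look on the proxy from the previous section:</p>
<pre><code class="language-language-csharp">[ServiceProxy(typeof(IQuotationService))]
class QuotationServiceProxy : IQuotationService
{
    // ... forwards the calls over REST, as shown earlier
}
</code></pre>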
<p>To make AppBoot use the new convention, we go into the <code>AppBootBootstrapper</code> class and, in the <code>Run()</code> function, add to the <code>bootstrapper</code> an instance of the <code>ServiceProxyRegistrationBehavior</code>:</p>
<pre><code class="language-language-csharp">public static Bootstrapper Run()
{
    var assemblies = GetApplicationAssemblies().ToArray();
    Bootstrapper bootstrapper = new Bootstrapper(assemblies);
    bootstrapper.ConfigureWithUnity();
    bootstrapper.AddRegistrationBehavior(new ServiceProxyRegistrationBehavior());
    bootstrapper.AddRegistrationBehavior(new ServiceRegistrationBehavior());

    bootstrapper.Run();
    return bootstrapper;
}
</code></pre>
<p>The order in which we add the registration behaviors to the bootstrapper matters: later registrations overwrite the ones returned by previous behaviors. So we add the <code>ServiceProxyRegistrationBehavior</code> first, which registers proxies into the DI Container as implementations for all the contracts. If we left it at that, all service calls would be REST calls to other processes, through the proxies. The <code>ServiceRegistrationBehavior</code> comes second, so it overwrites the proxy registrations with the real implementations for the services that were deployed. This means that if the real implementation of a service was deployed, and thus found during the assembly scan, it overwrites the proxy registration, and the call to that service is a direct, in-process call. If it was not found, the proxy registration remains, and the call goes to another process.</p>
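<p>The overwrite semantics can be pictured as a dictionary keyed by the contract type. This is a simplified model of the idea, with stub types, not the actual AppBoot code:</p>
<pre><code class="language-language-csharp">interface IQuotationService { }
class QuotationServiceProxy : IQuotationService { }
class QuotationService : IQuotationService { }

static class RegistrationModel
{
    public static Type Resolve()
    {
        var registrations = new Dictionary&lt;Type, Type&gt;();

        // 1. The proxy convention runs first: a proxy for every contract.
        registrations[typeof(IQuotationService)] = typeof(QuotationServiceProxy);

        // 2. The service convention runs second: if QuotationService was
        //    deployed and found by the scan, it overwrites the proxy.
        registrations[typeof(IQuotationService)] = typeof(QuotationService);

        // The resolved implementation is the real, in-process service.
        return registrations[typeof(IQuotationService)];
    }
}
</code></pre>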
<p>With these done (check out <code>ipc-step10</code>), we have the design completely implemented. Depending on which <code>*.Services.dll</code> are copied at deploy time into the <code>\bin</code> of a <code>ConsoleHost.exe</code>, we have in-process or inter-process communication. Check out <code>ipc-step10a</code> to get all the configurations tweaked for an easy run, and then you can re-run the demo as we did at the <a href="#RunningtheDemo">beginning of the post</a>. You can play with different deployment configurations by copying the <code>*.Services.dll</code> into different <code>ConsoleHost.exe</code> instances. Here's another deployment configuration:</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_700,h_665,c_pad,g_north/l_text:PT%20Sans_18:QuotationService%20co-hosted%20with%20services%20called%20from%20UI,g_south,co_rgb:333333/quotation-service-cohosted_bd4166.png" alt="UI Console makes Calls to a Distributed Deployment" loading="lazy"></p>
<h5 id="thisdesignispartofmycodedesigntrainingwhereyouwilllearnhowtoimplementitinyourcontextandtomaximizethebenefitsforyourrequirements">This design is part of my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a> where you will learn how to implement it in your context and to maximize the benefits for your requirements</h5>
<h6 id="featuredimagecreditdenysrudyivia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_deniskot?ref=oncodedesign.com">DENYS Rudyi via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Decide between In-Process or Inter-Process Communication at Deploy Time - Part 2 ]]>
            </title>
            <description>
<![CDATA[ This post continues the previous by giving an example of how we could implement a design that allows us to decide only at deploy time how services communicate: in process if they are deployed on the same server or inter-process if they are on different machines. If you haven&#39;t ]]>
            </description>
            <link>https://oncodedesign.com/blog/decide-between-in-process-or-inter-process-communication-at-deploy-time-part-2/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba5</guid>
            <category>
                <![CDATA[ AppBoot ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 13 Feb 2018 08:46:11 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/02/40542540_m.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>This post continues <a href="https://oncodedesign.com/decide-between-in-process-or-inter-process-communication-at-deploy-time/">the previous one</a> by giving an example of how we could implement a design that allows us to decide only at deploy time how services communicate: in process if they are deployed on the same server, or inter-process if they are on different machines. If you haven't read the previous post, you should go through it to get the context and to see where such a design is useful. Now, we are going to implement it for a simple example.</p>
<p>As stated previously there are three key design ideas we're going to put into practice:</p>
<ol>
<li><strong>Depend</strong> only <strong>on Contracts</strong>, which are expressed by abstract types (interfaces)</li>
<li>Use <strong>Proxies</strong> to forward the call to a contract to the actual implementation</li>
<li>Use <strong>Type Discovery</strong> to determine what implementations were deployed on each process</li>
</ol>
<p>Let's pick an example inspired by a financial system. We need some services that depend on one another, so we have some communication to play with. Say we have a <code>PortfolioService</code> which can get the current value of a portfolio, then an <code>OrdersService</code> to place buy or sell orders, and a <code>QuotationService</code> for getting quotations. The <code>PortfolioService</code> and the <code>OrdersService</code> call (depend on) the <code>QuotationService</code>, as in the diagram below</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/services-communication.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/services-communication.png" alt="financial services" loading="lazy"></a></p>
<p>To keep things simple, we say that each of these services is implemented by one class. Notice that each class in the diagram implements a corresponding interface, which represents the contract of the service.</p>
<p>In one deployment we would want the <code>OrdersService</code> to directly call (in the same process, as a function call) the <code>QuotationService</code> (the blue arrow), and the <code>PortfolioService</code> to make an inter-process call to the <code>QuotationService</code> (the green arrow). Then, without changing any code, we'd want to change this and have them all use inter-process communication, or all in-process communication.</p>
<h3 id="dependoncontracts">Depend on Contracts</h3>
<p>Another important thing to notice is that the <code>PortfolioService</code> and the <code>OrdersService</code> depend on the interface <code>IQuotationService</code> and not on the implementation class. This is an important constraint that our design employs.</p>
<p>To enforce this constraint we will place all the public contracts in a separate assembly (<code>Contracts</code>), which will only contain contracts and which will be available on all the servers where we deploy. This <code>Contracts</code> assembly may be referenced by any service implementation from any module. This allows any service to consume any other service, while depending on its contract and not on its implementation. In fact, it will not know whether it talks to the actual implementation or to a proxy which forwards the call to another process on a different server. The <code>IQuotationService</code> contract looks like:</p>
<pre><code class="language-language-csharp">public interface IQuotationService
{
    Quotation[] GetQuotations(string exchange, string instrument, DateTime from, DateTime to);
    Quotation[] GetQuotations(string securityCode, DateTime from, DateTime to);
}

public class Quotation
{
    public DateTime Timestamp { get; set; }
    public decimal BidPrice { get; set; }
    public decimal AskPrice { get; set; }
    public string SecurityCode { get; set; }
}
</code></pre>
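<p>A consumer such as the <code>OrdersService</code> then takes the contract as a constructor dependency and never references the implementing class. A minimal sketch; the exact class shape is an assumption for illustration:</p>
<pre><code class="language-language-csharp">public class OrdersService
{
    private readonly IQuotationService quotationService;

    // The DI container injects either the real QuotationService or a proxy;
    // OrdersService cannot tell the difference.
    public OrdersService(IQuotationService quotationService)
    {
        this.quotationService = quotationService;
    }
}
</code></pre>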
<h3 id="genericprocesshost">Generic Process Host</h3>
<p>An important building block of this design is to have a process which is able to host any of the services we may have. It should be able to:</p>
<ul>
<li>host one or more services</li>
<li>publish these services to be consumed from other processes on other servers</li>
<li>discover at startup which services were deployed, host and publish them</li>
</ul>
<p>In our simple example we will build it as a console app, and we'll publish the services as REST endpoints. In a real distributed application on Azure, this may be a Cloud Service, or it could leverage the benefits of Azure Service Fabric. On premises, it may be a Windows Service.</p>
<p>Now in a deployment like this</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517836658/distributed-deployment.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517836658/distributed-deployment.png" alt="distributed deployment" loading="lazy"></a></p>
<p>where each service is hosted alone, in its own process, they will all do inter-process communication. In a real system, such a deployment where everything is self-hosted may be the first one we try out. It maximizes scalability. Then, based on collected metrics, we may group things to reduce the communication overhead.</p>
<p>For our example, let's assume that we observe that the <code>OrdersService</code> and the <code>QuotationService</code> communicate very intensively and that the communication overhead is significant. We can deploy a copy of the <code>QuotationService</code> alongside the <code>OrdersService</code> and load them in the same host, like this</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517836909/semi-distributed-deployment.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517836909/semi-distributed-deployment.png" alt="partly distributed deployment" loading="lazy"></a></p>
<p>Now these two will communicate in the same process, through simple function calls, and the <code>PortfolioService</code> will continue to use inter-process calls to a <code>QuotationService</code> instance that remains hosted individually.</p>
<p>Even more, for testing purposes maybe, we could deploy all the services in the same place and load them all in the same process like below.</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517837066/not-distributed-deployment.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517837066/not-distributed-deployment.png" alt="not distributed deployment" loading="lazy"></a></p>
<p>We can make all these new deployments, without making any code change.</p>
<h3 id="solutionstructure">Solution Structure</h3>
<p>The way we organize the code in a Visual Studio solution (or into folders), and how we allow references to be created, is a critical aspect of this design. We want to gain great flexibility at deployment, so we need loose, well-managed and controlled dependencies. The folder structure should lay this out. We should also have clear rules on which assembly can be referenced by whom.</p>
<p>Here is a view of the structure for our example.</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_350/solution-structure.png" alt="solution structure" loading="lazy"></p>
<p>The first level of separation is outlined by the root folders: <code>Infrastructure</code> and <code>Modules</code>.</p>
<p>The <code>Infrastructure</code> will contain code that has nothing to do with the functional use-cases and the business logic of the application. The code here is to implement the non-functional requirements and to support the implementation of the business logic. Here we have the <code>Proxies</code>, the utilities for the host process and some extensions for the <code>AppBoot</code>.</p>
<p>The <code>Modules</code> folder, on the other hand, contains the implementation of the functional use cases. Here is where the business logic is. Within it, we have a second level of separation: the functional modules, each represented by its own folder. We also have here the <code>Contracts</code> assembly, which holds the functional contracts among the modules, again separated into their own folders and namespaces.</p>
<p>If we look at the references or dependencies in below diagram,</p>
<p><a href="https://res.cloudinary.com/oncodedesign/image/upload/v1517842703/references.png?ref=oncodedesign.com"><img src="https://res.cloudinary.com/oncodedesign/image/upload/v1517842703/references.png" alt="assemblies references" loading="lazy"></a></p>
<p>we see that <strong>we don't have nor allow references between the modules</strong>. This is important if we want to be able to deploy them in separate processes. They all depend on the <code>Contracts</code>, and they may also depend on the <code>Infrastructure</code>. The <code>Contracts</code> and the <code>Infrastructure</code> binaries will be deployed and available on all the servers where we deploy.</p>
<h3 id="proxies">Proxies</h3>
<p>The proxies are another key element of this design. Each interface from the <code>Contracts</code> assembly will have at least two types of implementations: the real implementation and the proxy implementations. The proxies just forward the call to the real implementation.</p>
<p>In our example we use REST for inter-process communication, so the proxy will create an <code>HttpClient</code>, call the REST endpoint and return the result. Here's an example for the <em>QuotationService REST Proxy</em>:</p>
<pre><code class="language-language-csharp">class QuotationServiceProxy : IQuotationService
{
    public Quotation[] GetQuotations(string exchange, string instrument, DateTime @from, DateTime to)
    {
        using (HttpClient client = HttpHelpers.CreateNewClient&lt;IQuotationService&gt;())
        {
            string path = HttpHelpers.GetServicePath&lt;IQuotationService&gt;(&quot;GetByExchange&quot;);
            string uri = $&quot;{path}?exchange={exchange}&amp;instrument={instrument}&amp;from={from}&amp;to={to}&quot;;
            HttpResponseMessage response = client.GetAsync(uri).Result;
            if (response.IsSuccessStatusCode)
            {
                Quotation[] value = response.Content.ReadAsAsync&lt;Quotation[]&gt;().Result;
                return value;
            }

            throw new HttpException((int)response.StatusCode, response.Content.ReadAsStringAsync().Result);
        }
    }
...
}
</code></pre>
<p>We should have at least two types of proxies:</p>
<ul>
<li><em>Inter-Process Proxies</em>, which forward the call to another process using an inter-process communication protocol (like the one in the above example), and</li>
<li><em>In-Process Proxies</em>, which forward the call to the real implementation in the same process (basically this is just a wrapper over the real implementation)</li>
</ul>
<p>In our implementation example, we will skip the <em>In-Process Proxies</em> and use the real implementation directly. However, in a real application they are needed, because we want a consistent contract implementation from the caller's perspective, no matter whether it calls in process or inter-process.</p>
<p>For example, let's assume that the real implementation of <code>IQuotationService.GetQuotations()</code> throws, under certain conditions (a bug maybe), an <code>IndexOutOfRangeException</code>. If it is called directly, in the same process, the caller will get this exception. If, instead, it is called through the <em>Inter-Process Proxy</em>, the caller will get an <code>HttpException</code>. This is not good. The caller calls an interface, and we want it not to care whether the call is in-process or inter-process. In this example, the fault contract is not consistent.</p>
<p>So the proxies should wrap the implementation and make sure that the public contracts in the <code>Contracts</code> assembly are consistently implemented for both in-process and inter-process calls.</p>
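<p>An <em>In-Process Proxy</em> that keeps the fault contract consistent could look like the sketch below. The <code>ServiceFaultException</code> type is an assumption here; a real application would define its own fault contract and map both local exceptions and HTTP faults to it:</p>
<pre><code class="language-language-csharp">class QuotationServiceInProcProxy : IQuotationService
{
    private readonly IQuotationService implementation;

    public QuotationServiceInProcProxy(IQuotationService implementation)
    {
        this.implementation = implementation;
    }

    public Quotation[] GetQuotations(string exchange, string instrument, DateTime from, DateTime to)
    {
        try
        {
            return implementation.GetQuotations(exchange, instrument, from, to);
        }
        catch (Exception e)
        {
            // Translate any implementation failure into the same fault type
            // the Inter-Process Proxy throws, so callers see one contract.
            throw new ServiceFaultException(e.Message, e);
        }
    }
}
</code></pre>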
<h3 id="typediscovery">Type Discovery</h3>
<p>With dependencies only on interfaces, Dependency Injection is a handy technique for choosing between an <em>In-Process Proxy</em> and an <em>Inter-Process Proxy</em> to reach the real implementation of a service we depend on.</p>
<p>At startup, before it is ready to receive requests, the host process (the <code>ConsoleHost</code> in our example) will scan all the deployed binaries in its output folder to determine which services were deployed and which of them should be published to be called from other processes. Using conventions, it will do the proper configuration of the Dependency Injection Container.</p>
<p>For example, if it has the real implementation of a contract deployed, it will register into the container an <em>In-Process Proxy</em> that will call it. If it doesn't have the real implementation, it will register the <em>Inter-Process Proxy</em>, which knows how to forward the call, through HTTP in our case, to the service implementation in another process.</p>
<p>Having discovered which services are deployed, the host process may use other conventions to decide which of them to publish, so they can receive calls from other processes. The easiest convention is to publish all deployed services.</p>
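<p>Put together, the startup convention boils down to: for each contract, register the <em>Inter-Process Proxy</em> unless a real implementation was found among the scanned types. A simplified sketch of that decision; <code>IsProxy</code>, <code>GetProxyFor</code> and the <code>container</code> API are assumptions, not the actual AppBoot internals:</p>
<pre><code class="language-language-csharp">foreach (Type contract in contracts)
{
    Type implementation = scannedTypes
        .FirstOrDefault(t =&gt; contract.IsAssignableFrom(t) &amp;&amp; !IsProxy(t));

    if (implementation != null)
        container.Register(contract, implementation);        // in-process call
    else
        container.Register(contract, GetProxyFor(contract)); // HTTP call
}
</code></pre>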
<h3 id="summary">Summary</h3>
<p>As a continuation of the previous post, we have defined an example to show how to implement this design. We have detailed its key building blocks: <a href="#dependoncontracts">Depend on Contracts</a>, <a href="#genericprocesshost">Generic Process Host</a>, <a href="#solutionstructure">Solution Structure</a>, <a href="#proxies">Proxies</a> and <a href="#typediscovery">Type Discovery</a>. Now we should have a good idea of how to implement it.</p>
<p>In the next post, we will continue with the implementation, focusing on the code of the <code>ConsoleHost</code> and the <code>Proxies</code> to get to an example that we can run and with which we can have different deployments for these three services without changing their code.</p>
<h5 id="byattendingmycodedesigntrainingyouwilllearnhowtoimplementthisdesigninyourcontexttomaximizethebenefitsforyourrequirements">By attending my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a> you will learn how to implement this design in your context, to maximize the benefits for your requirements</h5>
<h6 id="featuredimagecreditbluebayvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_stuartphoto?ref=oncodedesign.com">bluebay via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Decide between In-Process or Inter-Process Communication at Deploy Time ]]>
            </title>
            <description>
                <![CDATA[ After a long vacation in October last year, followed by some intensive work at MIRA and InfiniSwiss, now I can make some time to share some more design ideas that I have implemented along the years.


I thought to resume blogging with showing how we could design for something which ]]>
            </description>
            <link>https://oncodedesign.com/blog/decide-between-in-process-or-inter-process-communication-at-deploy-time/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba4</guid>
            <category>
                <![CDATA[ AppBoot ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 23 Jan 2018 08:45:18 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2018/01/90940658_m.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>After a long vacation in October last year, followed by some intensive work at <a href="http://www.mirarehab.com/?ref=oncodedesign.com">MIRA</a> and <a href="http://www.infiniswiss.com/?ref=oncodedesign.com">InfiniSwiss</a>, now I can make some time to share a few more design ideas that I have implemented over the years.</p>
<p>I thought to resume blogging with showing how we could design for something which sounds quite amazing:</p>
<blockquote>
<p>Without changing any code nor any configuration file, without recompiling, just by copying binaries on the same or on different servers, we can change how same two classes communicate: in the same process or inter-process communication.</p>
</blockquote>
<blockquote>
<p>In other words, at Deploy Time we can decide if the same two classes communicate through simple function calls (in-process communication) or through an inter-process communication protocol (like <code>HTTP</code>), without recompiling nor changing anything else.</p>
</blockquote>
<p>I presented this last year at some conferences and got very good feedback, so I think it is worth a few posts.</p>
<p>A good example where this design brings an important advantage is a financial system. There, performance is critical, and at the same time the system deals with large loads (in data and transactions), so it needs to scale well. With this technique we can decompose the system into many micro-services. This ensures good scalability: each micro-service may be deployed multiple times, on multiple servers. Then, after we get some metrics from running it in production, if we see that two services communicate very intensively and the inter-process communication affects performance, we may redeploy those two services on the same server, load them in the same process, and have them communicate directly, without any overhead. And we can do all this without changing any code, without recompiling.</p>
<p>To explain it, we can simplify everything down to the communication between a <em>Client</em> and a <em>Service</em>. If they are hosted in the same process, as shown below, they communicate through function calls, as we'd expect</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_577,h_250,c_pad,g_north/l_text:PT%20Sans_18:In-Process%20Communication,g_south,co_rgb:333333/client-service-in-proccess.jpg" alt="Client - Service - In-Process Communication" loading="lazy"></p>
<p>Then, if we deploy the same two classes on different servers (different processes), without changes or recompilation they will do an inter-process communication.</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_577,h_250,c_pad,g_north/l_text:PT%20Sans_18:Inter-Process%20Communication,g_south,co_rgb:333333/client-service-inter-proccess.jpg" alt="Client - Service - Inter-process Communication" loading="lazy"></p>
<p>This may be useful in any Enterprise Architecture, because we can scale out, or on the contrary bring more components onto the same box, without changing code. If we don't change code, we have no risk of breaking things, so we don't need extensive regression testing. Even more, we don't need to bother the development team for this deployment optimization. This flexibility, gained at Deployment Time, may be very cost-effective.</p>
<p>The first time I implemented this design was with a client that had just gone through a migration from a Monolith Architecture like below (one big Windows Service for the backend, on top of a database, and one fat client),</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_186,h_280,c_pad,g_north/l_text:PT%20Sans_18:Monolith%20Architecture,g_south,co_rgb:333333/monolith-architecure.jpg" alt="Monolith Architecture" loading="lazy"></p>
<p>to a Distributed Architecture, which had more sub-systems that composed the backend, more databases, a Service Bus for orchestrating the communication and different clients offering the user interface. A modern architecture.</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_750,h_350,c_pad,g_north/l_text:PT%20Sans_18:Distributed%20Architecture,g_south,co_rgb:333333/distributed-architecture.jpg" alt="Distributed Architecture" loading="lazy"></p>
<p>With this architecture they got many benefits in scalability, availability, reliability and even better security. More important for them were the benefits in maintainability and testability. They moved from one big thing to more, smaller things. Each of the resulting sub-systems was now developed, changed and tested on its own, in isolation. They went from one release per year, which they had with the monolith, to one release per month.</p>
<p>However, all of this came with a major drawback: performance problems due to communication overhead. They had cases in which satisfying a business flow, say <em>Place Order</em>, took a dozen service call hops in the backend before the response could be given to the user. The overhead of serializing the request, putting it on the wire to send it to the service provider, which has to deserialize it, then serializing the response and sending it back over the wire to the client, was significant. And this had to be done for each call within that business request. They had split the monolith, but they were having performance problems due to the communication between the resulting pieces.</p>
<p>One could argue that the decomposition wasn't right. Probably that was the case, but in a large enterprise system it is very hard to get it right from the start. Every refactoring, migration and new feature has to be added gradually, so you can continuously perform and deliver business value. So they couldn't redo the decomposition all at once. Another idea was to merge back some sub-systems, but then they would lose the benefits they'd gained in maintainability, testability and frequent releases, and they risked going back to a monolith.</p>
<p>So in this context a design that takes the best from both cases was the solution.</p>
<p>We wanted to:</p>
<ul>
<li>continue to develop, test and maintain each sub-system in isolation, as if it were hosted individually in its own process, and</li>
<li>be able to load more sub-systems in the same process and have them communicate through simple function calls, to</li>
<li>be able to think about the decomposition of the system regardless of the deployment and communication concerns (primary focus on volatility and sources of change, rather than on communication, when making the decomposition)</li>
<li>decide only at Deploy Time (configuration only) which sub-systems are loaded in the same process to communicate directly, and which are loaded on different servers to scale and do inter-process communication</li>
</ul>
<p>There are three key design ideas to achieve all these:</p>
<ol>
<li><strong>Depend</strong> only <strong>on Contracts</strong>, which are expressed by abstract types (interfaces). This means that:</li>
</ol>
<ul>
<li>the contracts between the sub-systems are written as interfaces, DTOs and Exceptions only</li>
<li>they do not contain logic</li>
<li>they are the only types that have business knowledge and are shared by (referenced from) all the sub-systems</li>
<li>there are no references (hard dependencies) among the sub-systems implementations nor binaries</li>
</ul>
<ol start="2">
<li>Use <strong>Proxies</strong> to forward the call to a contract to the actual implementation. The communication between a client and a service will be materialized through proxies. By convention:</li>
</ol>
<ul>
<li>if the implementation is available in the same process, a proxy that forwards the call in the same process will be used</li>
<li>if the implementation is not available in the same process, a proxy that can forward the call through an inter-process communication protocol (<code>HTTP</code>) will be used</li>
</ul>
<ol start="3">
<li>Use <strong>Type Discovery</strong> to determine what implementations were deployed on each process</li>
</ol>
<ul>
<li>at startup each process will discover (with reflection or other similar means) the implementations deployed and their dependencies</li>
<li>based on conventions it will configure the Dependency Injection container on what proxies to use as implementation for the contracts that are implemented by other sub-systems</li>
</ul>
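<p>To make these three ideas concrete, here is a minimal, self-contained C# sketch (all names are hypothetical, not taken from a real code base): a contract shared by the sub-systems, an in-process proxy and an inter-process proxy that both implement it, and a stand-in for the convention that picks one of them at startup, depending on what was deployed:</p>

```csharp
using System;

// Contract: interface only, no logic; shared by all sub-systems.
public interface IOrderService
{
    decimal GetTotal(int orderId);
}

// Implementation, which may or may not be deployed in this process.
public class OrderService : IOrderService
{
    public decimal GetTotal(int orderId) => orderId * 10m; // dummy logic
}

// Proxy used when the implementation is available in the same process:
// it simply forwards the call.
public class LocalProxy : IOrderService
{
    private readonly IOrderService impl;
    public LocalProxy(IOrderService impl) => this.impl = impl;
    public decimal GetTotal(int orderId) => impl.GetTotal(orderId);
}

// Proxy used when the implementation runs in another process.
// A real one would serialize the call and send it over HTTP.
public class HttpProxy : IOrderService
{
    public decimal GetTotal(int orderId) =>
        throw new NotImplementedException("would issue an HTTP call here");
}

public static class Startup
{
    // Stand-in for the deploy-time decision: at startup, type discovery
    // would tell us whether the implementation was deployed locally.
    public static IOrderService Resolve(bool implementationDeployedLocally) =>
        implementationDeployedLocally
            ? new LocalProxy(new OrderService())
            : new HttpProxy();
}
```

<p>The client code depends only on <code>IOrderService</code>; whether the call stays in-process or crosses a process boundary is decided by which proxy the container hands out.</p>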
<p>In the next posts we'll take a simple example with some classes that depend on one another, and we'll demo how this design can be implemented to achieve all the benefits outlined here. The demo will start from a high-level overview and will go deep into code until we get to a runnable solution, in which, just by copying binaries from one output folder to another, we change the way communication happens.</p>
<p>To implement this I use <a href="https://github.com/iquarc/appboot?ref=oncodedesign.com"><code>iQuarc.AppBoot</code></a>. It offers all the features needed to implement such a design in C#/.NET. I wrote about it in my <a href="https://oncodedesign.com/app-boot-library/">last post</a>. The demo will be in C#/.NET, but the same ideas can be applied in other technologies; you just need some form of Dependency Injection and Reflection.</p>
<h5 id="byattendingmycodedesigntrainingyouwilllearnhowtoimplementthisdesignideainyourcontexttomaximizethebenefitsforyourrequirements">By attending my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a> you will learn how to implement this design idea in your context, to maximize the benefits for your requirements</h5>
<h6 id="featuredimagecreditbluebayvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_bluebay?ref=oncodedesign.com">bluebay via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ AppBoot Library ]]>
            </title>
            <description>
<![CDATA[ AppBoot is a generic .NET application bootstrapper that we at iQuarc put on GitHub a while ago.


It started a few years back when we were about to begin developing a large enterprise application. Back then, we wanted to use the Unity Container for dependency injection, but we liked the way ]]>
            </description>
            <link>https://oncodedesign.com/blog/app-boot-library/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba3</guid>
            <category>
                <![CDATA[ Dependency Injection ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 10 Aug 2017 08:52:46 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/08/AppBoot-Library.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p><a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">AppBoot</a> is a generic .NET application bootstrapper that we at <a href="http://www.iquarc.com/?ref=oncodedesign.com">iQuarc</a> put on GitHub a while ago.</p>
<p>It started a few years back when we were about to begin developing a large enterprise application. Back then, we wanted to use the <a href="https://msdn.microsoft.com/en-us/library/ff647202.aspx?ref=oncodedesign.com">Unity Container</a> for dependency injection, but we liked the way <a href="https://docs.microsoft.com/en-us/dotnet/framework/mef/?ref=oncodedesign.com">MEF</a> declared, with attributes, the interfaces and their implementations. The next day my colleague <a href="https://www.linkedin.com/in/cristianodea/?ref=oncodedesign.com">Cristian</a> put in some code which did exactly that: bring the MEF-like attributes for configuring dependency injection to Unity. Since then, it has been refined and we've started to see many other benefits from using the AppBoot.</p>
<p>Now, most of our new projects start with the AppBoot as the first piece of infrastructure.</p>
<h2 id="whatisiquarcappboot">What Is iQuarc.AppBoot</h2>
<p>A lightweight library that handles the startup of any .NET application. It also abstracts the composite application as a concept.</p>
<p>AppBoot is a reusable implementation of the <em>Separate Configuration and Construction from Use</em> principle.</p>
<p>The two most important steps performed at application startup are:</p>
<ul>
<li>configure the Dependency Injection Container, and</li>
<li>initialize the Composite Application</li>
</ul>
<p>AppBoot is not a dependency injection container by itself. It uses one. It abstracts and hides the underlying container.</p>
<p>One of the most common questions I get when introducing it is:</p>
<blockquote>
<p>Why use the AppBoot and not the Dependency Injection framework directly?</p>
</blockquote>
<p>My answer is that by hiding the dependency injection framework from the rest of your code you gain <strong>consistency</strong>. You have consistency in how DI is done (only through constructors, or only against interfaces, etc.) and you know that DI is used across all of your code. It is the same technique that I describe in my <em><a href="https://oncodedesign.com/enforce-consistency-with-assembly-references/">Enforce Consistency with Assembly References</a></em> post.</p>
<p>There are also other benefits that arise from using the AppBoot. I will detail the most relevant in the rest of the post. Some add missing features to the DI framework and provide a central place where such features can be added; others support applying and following design patterns and programming principles.</p>
<h2 id="whereisappbootavailable">Where Is AppBoot Available</h2>
<p>You can find all the source code on <strong>GitHub</strong> <a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">here</a>. If you go through the <code>README</code> file you will get the basics of how to use it. You'll see examples for the <em>Annotation Based Configuration</em>, for the <em>Convention Based Configuration</em>, for the <em>Instances Lifetime</em> and for the <em>Composite Application</em> support. I won't repeat them here.</p>
<p>The AppBoot design is extensible. You can bring in the Dependency Injection framework that you like, by implementing the <code>IDependencyContainer</code> interface with an adapter for it. You can also implement new conventions on how to configure the dependency injection by adding more implementations of the <code>IRegistrationBehavior</code> interface.</p>
<p>It is also available as <strong>NuGet</strong> Packages:</p>
<ul>
<li><a href="https://www.nuget.org/packages/iQuarc.AppBoot/?ref=oncodedesign.com">iQuarc.AppBoot</a> - the core library, which does not depend on any type of .NET application, nor on a specific dependency injection framework</li>
<li><a href="https://www.nuget.org/packages/iQuarc.AppBoot.Unity/?ref=oncodedesign.com">iQuarc.AppBoot.Unity</a> - the adapters to use it with Unity Container</li>
<li><a href="https://www.nuget.org/packages/iQuarc.AppBoot.WebApi/?ref=oncodedesign.com">iQuarc.AppBoot.WebApi</a> - the helpers to configure it for an ASP.NET Web API project</li>
</ul>
<h2 id="whatappbootprovides">What AppBoot Provides</h2>
<p>Even though the AppBoot is a lightweight library with very simple code, when combined with some principles it can be very powerful in some scenarios. It supports the implementation of good design principles like <em>Separation of Concerns</em>, <em>Modularity</em> and <em>Loose Coupled Implementations</em>. Its most valuable benefits, detailed below, are closely connected and derive from one another.</p>
<h3 id="dependenciesdiscovery">Dependencies Discovery</h3>
<p>Instead of explicit registration of the pairs between <em>Service Contract</em> (<em>interface</em>) and <em>Service Implementation</em> (<em>class</em>), AppBoot promotes a way to discover them at application startup.</p>
<p>These pairs can be specified in a declarative manner, with attributes like:</p>
<pre><code class="language-language-csharp">[Service(typeof (IPriceCalculator), Lifetime.AlwaysNew)]
public class PriceCalculator : IPriceCalculator
{
	...
}
</code></pre>
<p>The <code>ServiceAttribute</code> says that the class it decorates implements the interface passed in the attribute's constructor. Above, the <code>PriceCalculator</code> class implements the <code>IPriceCalculator</code> interface.</p>
<p>It also specifies that the lifetime is <code>AlwaysNew</code>, meaning that a new instance is created each time it is injected as a dependency into another implementation. At the other end is <code>Lifetime.Application</code>, which is equivalent to a <em>Singleton</em>.</p>
<p>By making this kind of declaration on the implementation class, not only do we get rid of the long registration config files, but the lifetime specification sits very close to the implementation. This may help avoid implementation mistakes. For example, if we say that an implementation is <code>Lifetime.Application</code> (<em>Singleton</em>) and it is stateful, we'd better synchronize it for a multi-threaded environment.</p>
<p>We could also specify such a pair through conventions, like:</p>
<pre><code class="language-language-csharp">    conventions.ForTypesDerivedFrom&lt;Repository&gt;()
			.ExportInterfaces(i =&gt; i.Name.EndsWith(&quot;Repository&quot;))
			.WithLifetime(Lifetime.AlwaysNew);
</code></pre>
<p>This says that all the classes that inherit the base <code>Repository</code> class should be registered as implementations of the specific interfaces they implement. For example, for <code>PersonsRepository : Repository, IPersonsRepository</code>, the (<code>IPersonsRepository</code>, <code>PersonsRepository</code>) pair should be registered.</p>
<p>At application startup, on <code>Bootstrapper.Run()</code>, AppBoot will scan with reflection all the types from all the assemblies that make up our application, will look for the types that have the <code>ServiceAttribute</code> or match a convention, and will make the registrations into the DI Container.</p>
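<p>As a rough illustration of what this discovery step does (a sketch with a simplified stand-in attribute, not AppBoot's actual code), the scan boils down to collecting (contract, implementation) pairs from the attributed types:</p>

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Simplified stand-in for AppBoot's ServiceAttribute.
[AttributeUsage(AttributeTargets.Class)]
public class ServiceAttribute : Attribute
{
    public Type Contract { get; }
    public ServiceAttribute(Type contract) => Contract = contract;
}

public interface IPriceCalculator
{
    decimal Calculate(decimal price);
}

[Service(typeof(IPriceCalculator))]
public class PriceCalculator : IPriceCalculator
{
    public decimal Calculate(decimal price) => price * 1.19m;
}

public static class Discovery
{
    // Scan an assembly and collect the (contract, implementation) pairs,
    // which would then be registered into the DI container.
    public static Dictionary<Type, Type> Scan(Assembly assembly) =>
        assembly.GetTypes()
            .Select(t => (Type: t, Attr: t.GetCustomAttribute<ServiceAttribute>()))
            .Where(x => x.Attr != null)
            .ToDictionary(x => x.Attr.Contract, x => x.Type);
}
```

<p>A convention-based scan works the same way; only the predicate that selects the types and their contracts changes.</p>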
<h3 id="modularapplication">Modular Application</h3>
<p>AppBoot defines an application as being composed by a set of modules. The modules are defined by a simple interface:</p>
<pre><code class="language-language-csharp">public interface IModule
{
	void Initialize();
}
</code></pre>
<p>We can create any number of modules as implementations of the <code>IModule</code> interface and declare them as such. For example:</p>
<pre><code class="language-language-csharp">[Service(nameof(MyModule), typeof(IModule))]
public class MyModule : IModule
{
	public void Initialize()
	{
		// some configuration code that runs at app startup
	}
}
</code></pre>
<p>At application startup, on <code>Bootstrapper.Run()</code>, AppBoot will find all the modules and execute each one's <code>Initialize()</code> method. Therefore, if we have some (configuration) code that needs to run at application startup, AppBoot defines where that code should live and takes care of running it.</p>
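<p>Conceptually (a simplified sketch, not AppBoot's actual code), that startup step amounts to finding the concrete <code>IModule</code> implementations with reflection, instantiating them, and calling <code>Initialize()</code> on each:</p>

```csharp
using System;
using System.Linq;
using System.Reflection;

public interface IModule
{
    void Initialize();
}

// Example module: records that its configuration code ran.
public class LoggingModule : IModule
{
    public static bool Initialized; // for illustration only
    public void Initialize() => Initialized = true;
}

public static class ModuleRunner
{
    // Find all concrete IModule implementations in the assembly,
    // create them, and run their Initialize() methods.
    public static void RunAll(Assembly assembly)
    {
        var modules = assembly.GetTypes()
            .Where(t => typeof(IModule).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IModule)Activator.CreateInstance(t));

        foreach (var module in modules)
            module.Initialize();
    }
}
```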
<h3 id="separationbetweenconfigurationandconstructionfromuse">Separation between Configuration and Construction from Use</h3>
<p>AppBoot delegates everything related to constructing object instances to the underlying Dependency Injection framework. By hiding the framework from the rest of the code, it ensures a separation between its configuration and its use.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/07/image-4.png" alt="" loading="lazy"></p>
<p>For example, the application code cannot call functions of the Unity Container to do additional registrations mixed with business logic, because it doesn't reference it. It can only declare interface - implementation pairs, in a declarative manner, through the AppBoot.</p>
<p>Moreover, with the modules support the AppBoot brings, we can isolate our application code from the .NET host process. This means that the application could be hosted by a <em>ConsoleApp</em> or by a <em>WindowsService</em>, or by some other kind of .NET process because it does not depend on it.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/07/Separate-HostProcess-1.png" alt="" loading="lazy"></p>
<p>The dependency has been inverted by the <code>IModule</code> abstraction, and all the configuration code that should execute on the <code>main()</code> function of the host process will be executed as part of <code>Bootstrapper.Run() -&gt; IModule.Initialize()</code>.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/08/Separate-HostProcess-2.png" alt="" loading="lazy"></p>
<p>With these two pieces, <em>Dependencies Discovery</em> and <em>Modular Application</em> support, the AppBoot completes the implementation of the <em>Separation between Configuration and Construction from Use</em> principle. After <code>Bootstrapper.Run()</code> the application is ready to use: the configuration has been made and the DI can construct instances.</p>
<h3 id="separatesthecontractsfromimplementations">Separates the Contracts from Implementations</h3>
<p>The <em>Dependencies Discovery</em> that the AppBoot does opens many design possibilities and is a powerful technique in some scenarios. For example, we could separate the contracts from their implementations into different assemblies. At deploy time, we could decide which implementations we want in each deployment, and the AppBoot will do the registrations depending on what types it finds with reflection.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/07/Separate-Contracts.png" alt="" loading="lazy"></p>
<p>Even more, we could have multiple functional modules isolated in different assemblies. We can make these functional module assemblies have no references among them (as the above diagram shows). Even if they depend on one another functionally, they don't need direct dependencies on one another, and this can be enforced by not allowing references among them. So we won't have direct dependencies between the implementations; they will only depend through abstractions, which are the contracts. (The above diagram shows that the two <em>Application Module</em> assemblies reference the <em>Contracts</em> assembly and the <em>AppBoot</em> assembly, but they don't reference each other.)</p>
<p>This leads to a loosely coupled system and opens the possibility to change the behavior by having different deployment configurations.</p>
<p>This approach puts a lot of power and responsibility at deploy time. We could make decisions that influence the system behavior at deploy time only, without changing any code or any configuration. I will show in future posts how we can change whether some services communicate in the same process, through simple function calls, or across processes, through a REST API, just by doing a different deployment, without changing any code.</p>
<h3 id="facilitatesparalleldevelopmentwithmultipleteams">Facilitates Parallel Development with Multiple Teams</h3>
<p>With these separations we could also benefit in organizing parallel development with different teams for very large projects.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/08/AppBoot-parallel-development.png" alt="" loading="lazy"></p>
<p>If we have several modules that are developed by team A and other modules developed by team B, we could structure them in different assemblies with no references between each team's assemblies. They could only depend through the <em>Contracts</em> assembly, as the above diagram shows.</p>
<p>Once the contracts have been decided and written into the <em>Contracts</em> assembly, each team can work on their implementation without even needing to load in Visual Studio the projects belonging to the other team. They just need the other team's compiled assemblies in <code>\bin\Debug\</code>, or they could use some fake implementations.</p>
<p>This may bring important flexibility in planning and monitoring the work on very large projects.</p>
<h3 id="promotesconstructordependencyinjection">Promotes Constructor Dependency Injection</h3>
<p>By hiding the dependency injection framework, the AppBoot controls how DI is done. In the GitHub implementation, DI is limited to the constructor. Besides gaining consistency, this makes the dependencies of a class visible in its constructor. This means that we can easily:</p>
<ul>
<li>see classes with more than 5±2 dependencies and be critical about them</li>
<li>easily stub or mock external dependencies in tests</li>
<li>prevent circular dependencies (when all the classes use constructor dependency injection, most likely an error will occur if circular dependencies exist. Depending on the DI framework this may be further enforced with extensions to AppBoot)</li>
</ul>
<p>Moreover, we could add additional constraints in the AppBoot to allow only dependencies on interfaces, promoting the <em>programming against interfaces</em> practice.</p>
<h3 id="createspatternsinthecode">Creates Patterns in the Code</h3>
<p>AppBoot also encourages patterns in the way code is written in a project.</p>
<p>For example, it inherits from the Unity Container the notion of having a <em>default implementation</em> for an interface and also multiple <em>named implementations</em> of the same interface.</p>
<p>This means that for the interface:</p>
<pre><code class="language-language-csharp">interface IApprovalService
{
    bool Approve(ApproveRequest approveRequest);
}
</code></pre>
<p>we could have more implementations, which we declare by specifying a <em>name</em> with the <code>ServiceAttribute</code>, like:</p>
<pre><code class="language-language-csharp">[Service(nameof(BannedCustomer), typeof(IApprovalService))]
class BannedCustomer : IApprovalService
{
    public bool Approve(ApproveRequest approveRequest)
    {
        if (IsBanned(approveRequest.Customer))
            return false;
        return true;
    }

    private bool IsBanned(Customer customer)
    {
        // maybe check this in the DB or in a cache
    }
}

[Service(nameof(PriceForCustomer), typeof(IApprovalService))]
class PriceForCustomer : IApprovalService
{
    public bool Approve(ApproveRequest approveRequest)
    {
        // check if the order price is too high for the trust we have in this customer
        return true;
    }
}
</code></pre>
<p>At the same time we could also have the <em>default implementation</em> of the same interface:</p>
<pre><code class="language-language-csharp">[Service(typeof(IApprovalService))]
class ApprovalService : IApprovalService
{
        ...
}

</code></pre>
<p>When some other class asks in its constructor for (depends on) an array <code>IApprovalService[]</code>, it will get all the <em>named implementations</em>, excluding the <em>default implementation</em>. At the same time, when another class asks for (depends on) <code>IApprovalService</code>, it will receive only the <em>default implementation</em>. The name used to declare <em>named implementations</em> is not otherwise used by the AppBoot; it just needs to be unique. No one can ask for a specific <em>named implementation</em>; it is not meant for that.</p>
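<p>The rule is easier to see in a tiny stand-alone sketch (a hypothetical registry, not the AppBoot or Unity code): resolving the interface yields the default registration, while resolving an array yields only the named registrations:</p>

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IApproval { bool Approve(); }
public class DefaultApproval : IApproval { public bool Approve() => true; }
public class BannedApproval : IApproval { public bool Approve() => false; }
public class PriceApproval : IApproval { public bool Approve() => true; }

// Minimal registry illustrating the resolution rules.
public class Registry
{
    private readonly Dictionary<Type, Func<object>> defaults =
        new Dictionary<Type, Func<object>>();
    private readonly Dictionary<Type, Dictionary<string, Func<object>>> named =
        new Dictionary<Type, Dictionary<string, Func<object>>>();

    public void RegisterDefault<T>(Func<T> factory) where T : class =>
        defaults[typeof(T)] = factory;

    public void RegisterNamed<T>(string name, Func<T> factory) where T : class
    {
        if (!named.TryGetValue(typeof(T), out var map))
            named[typeof(T)] = map = new Dictionary<string, Func<object>>();
        map.Add(name, factory); // the name only has to be unique
    }

    // Resolving T gives the default implementation only.
    public T Resolve<T>() where T : class => (T)defaults[typeof(T)]();

    // Resolving T[] gives the named implementations, not the default one.
    public T[] ResolveAll<T>() where T : class =>
        named.TryGetValue(typeof(T), out var map)
            ? map.Values.Select(f => (T)f()).ToArray()
            : Array.Empty<T>();
}
```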
<p>With this in mind, we could make the <em>default implementation</em> of the <code>IApprovalService</code> as a composite of the <em>named implementations</em>, like:</p>
<pre><code class="language-language-csharp">[Service(typeof(IApprovalService))]
class CompositeApprovalService : IApprovalService
{
    private readonly IApprovalService[] approvals;

    public CompositeApprovalService(IApprovalService[] approvals)
    {
        this.approvals = approvals;
    }

    public bool Approve(ApproveRequest approveRequest)
    {
        foreach (var  approval in approvals)
        {
            bool isOk = approval.Approve(approveRequest);
            if (!isOk)
                return false;
        }
        return true;
    }
}
</code></pre>
<p>Now, the client code which only needs an <code>IApprovalService</code> doesn't need to know that the implementation it gets (the <em>default implementation</em>) is the <code>CompositeApprovalService</code> which depends in its turn on the <em>named implementations</em> of the same interface. Moreover, it doesn't need to know that <em>named implementations</em> of the <code>IApprovalService</code> even exist.</p>
<p>The result resembles the <em>Chain of Responsibility</em> design pattern. Separating the <code>IApprovalService</code> implementation into multiple classes, each dealing with a different concern, is a far better design than having them all in one class like:</p>
<pre><code class="language-language-csharp">[Service(typeof(IApprovalService))]
class ApprovalService : IApprovalService
{
    public bool Approve(ApproveRequest approveRequest)
    {
        if (IsBanned(approveRequest.Customer))
        {
            // deal with the banned customer
            return false;
        }
        if (IsOverPrice(approveRequest))
        {
            // deal with over-priced requests
            return false;
        }
        // add other cases
        return true;
    }

    private bool IsBanned(Customer approveRequestCustomer)
    {
        // maybe check this in the DB or in a cache
    }

    private bool IsOverPrice(ApproveRequest approveRequest)
    {
        // check if the price for this order is over the limit for the customer
    }
}
</code></pre>
<h3 id="idisposablesupport"><code>IDisposable</code> Support</h3>
<p>When using <em>Dependency Injection</em> (<em>Inversion of Control</em> at creating object instances), there is the question of who is going to call <code>Dispose()</code> on the <code>IDisposable</code> objects that the DI framework has created, and when. The answer is even trickier when the interface that the client code depends on (like <code>IApprovalService</code> from above) does not inherit <code>IDisposable</code>, but one of its implementations is <code>IDisposable</code>. In that case, the client code doesn't even know that it got injected an <code>IDisposable</code> instance.</p>
<p>One valid expectation is that the DI framework should dispose them, when the operation (a web request handling for example) within which they were created, ends.</p>
<p>Not many DI frameworks offer good support for this. AppBoot adds it. It offers a way to define an operation scope, it creates a child DI container for that scope, and it disposes all the <code>IDisposable</code> instances created within that scope when the operation ends.</p>
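<p>The core of such a scope can be sketched in a few lines (a simplified illustration, not AppBoot's implementation): the scope tracks the <code>IDisposable</code> instances created during the operation and disposes them when the operation ends:</p>

```csharp
using System;
using System.Collections.Generic;

// Sketch of an operation scope. In AppBoot the child DI container plays
// this role; here the scope is handed the instances it should track.
public sealed class OperationScope : IDisposable
{
    private readonly List<IDisposable> created = new List<IDisposable>();

    // Track an instance; non-disposable instances pass through unchanged.
    public T Track<T>(T instance)
    {
        if (instance is IDisposable disposable)
            created.Add(disposable);
        return instance;
    }

    public void Dispose()
    {
        // dispose in reverse creation order, mirroring dependency order
        for (int i = created.Count - 1; i >= 0; i--)
            created[i].Dispose();
        created.Clear();
    }
}

// A disposable dependency the client code is unaware of.
public class ConnectionStub : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}
```

<p>When the operation (for example, handling one web request) finishes, disposing the scope disposes everything created within it, even if the client code never knew those instances were <code>IDisposable</code>.</p>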
<p>In an older post <a href="https://oncodedesign.com/disposing-instances-when-using-inversion-of-control/"><em>Disposing Instances when Using Inversion of Control</em></a> I detail the problem, the possible solutions and how this is implemented in AppBoot. The post is part of a <a href="https://oncodedesign.com/disposable-instances-series/">series</a> on dealing with disposable instances.</p>
<h2 id="whatisnext">What Is Next?</h2>
<p>The AppBoot is a mature library. We have been using it for a while now, in production, in various projects. However, there are still a few things that could be added.</p>
<p>One is to have a version for .NET Core which also works with the new DI from ASP.NET Core.</p>
<p>Another thing would be to add support for <em>Type Containers</em>. At startup, AppBoot already scans all the types from all the assemblies that form the application. Therefore, we could define custom attributes, decorate classes with them, and have the AppBoot initialize containers of types based on the metadata from these attributes. This kind of feature would encourage declarative programming, which would improve <em>Separation of Concerns</em> and reduce complexity.</p>
<p>Other features would be to add support for more design patterns like <em>Factory</em>, <em>Decorator</em>, etc. These could also lead towards a better design of the client app.</p>
<h5 id="mycodedesigntraininggoesindepthonhowtousetheappbootandhowtobuildtheapplicationinfrastructure">My <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a> goes in depth on how to use the AppBoot and how to build the application infrastructure</h5>
<h6 id="featuredimagecreditjura13via123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_jura13?ref=oncodedesign.com">jura13 via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Row Level Authorization with Entity Framework ]]>
            </title>
            <description>
                <![CDATA[ A few posts back I wrote about the benefits of having a well encapsulated data access implementation. One of the benefits outlined there was the advantage it may bring when we need to implement row-level authorization at the data access level. In this post, I will detail this implementation on ]]>
            </description>
            <link>https://oncodedesign.com/blog/row-level-authorization/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba1</guid>
            <category>
                <![CDATA[ data access ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 25 Apr 2017 08:47:33 +0300</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1492955249/row-level-authorization.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>A few <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/">posts back</a> I wrote about the benefits of having a well encapsulated data access implementation. One of the benefits outlined <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/#authorizationondatarecords">there</a> was the advantage it may bring when we need to implement row-level authorization at the data access level. In this post, I will detail this implementation on top of the <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc Data Access</a> library.</p>
<p>By row-level authorization, we mean restricting access to the rows of one or more entities based on the rights or the role of the current user. For example, we want to ensure that users can only access the data rows that belong to their department, or we want to restrict a manager to seeing only the data rows related to the projects she manages.</p>
<p>To implement this at the data access level, it means that we should build a filter based on the current user roles (or claims) and apply it to each query that is issued on the entities for which rows we should restrict the access. If we have a well encapsulated data access it means that we can use it as the central place through which all the queries go and we could extend it to intercept and append the filter to the queries. This assures us a consistent implementation for row-level authorization. Otherwise, we would need to go through all the controllers or services that send queries and append the filter &quot;by hand&quot;, which is error prone.</p>
<p>Let's take a more specific example. Say we have the following entities:</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/04/RowLevelAuthorization-1.png" alt="" loading="lazy"><br>
and we want to restrict access to the <em>Order</em> rows such that the current user, if she is an account manager, sees only the orders of the customers that are in her area. For this case, we need to append the <code>order.Customer.AreaID == currentUserAreaID</code> filter to all the queries that go on the <code>Order</code> entity.</p>
<p>To get this done in a consistent manner, and for all the entities where we want row-level authorization, we need a generic approach. Here, a well encapsulated, Linq-based data access implementation (like <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc.DataAccess</a> is) plays an important role. We can build a Lambda Expression for the authorization filter and add a <code>.Where()</code> with it on the <code>IQueryable&lt;T&gt;</code> that we pass to the caller. Having it implemented on top of Entity Framework (EF), we can add the <code>.Where()</code> onto the <code>DbContext.DbSet&lt;T&gt;</code> property, like this:</p>
<pre><code class="language-language-csharp">public class Repository : IRepository
{
     public IQueryable&lt;T&gt; GetEntities&lt;T&gt;() where T : class
     {
         int currentUserValue = GetFilterValueFromCurrentUser();
         Expression&lt;Func&lt;T, bool&gt;&gt; authFilter = BuildWhereExpression&lt;T&gt;(currentUserValue);

         return Context.Set&lt;T&gt;()
                  .Where(authFilter)
                  .AsNoTracking();
     }
...
}
</code></pre>
<p>This is very similar to what we did in the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part3/#isolatetenantdatathroughdataaccess">previous post</a>, where we built a filter for tenant-specific data based on the <code>.TenantID</code> property. The difference is that here we cannot make all the entities that need row-level authorization implement a common interface, like <code>ITenantEntity</code> was, with a <code>TenantID</code> property to use in the filter. This is a more generic case than the multitenancy example from the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part3/#isolatetenantdatathroughdataaccess">previous post</a>. Here, for the <code>Order</code> we need <code>order.Customer.AreaID</code>, but for the <code>Customer</code> we need <code>customer.AreaID</code>, and maybe for other entities we want to filter based on something else, like <code>OrganizationID</code> or <code>CountryCode</code>.</p>
<p>Therefore, the <code>BuildWhereExpression&lt;T&gt;()</code> function is more complicated here. For each entity it needs to know the two operands of the filter expression:</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/03/filter-operands-1.PNG" alt="" loading="lazy"></p>
<p>For each entity, we can specify these as row-level authorization policies and register them in a container that will be used from the repository. Such a policy should have a Lambda Expression which can select, starting from the current entity type, the property to be used in the authorization filter, and a function that can get the filter value from the current user. In a simplified form, the class that represents such a policy may look like this:</p>
<pre><code class="language-language-csharp">class RowAuthPolicy&lt;TEntity, TProperty&gt; : IRowAuthPolicy&lt;TEntity&gt;
{
    private readonly IRowAuthPoliciesContainer parent;

    public RowAuthPolicy(Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; selector, IRowAuthPoliciesContainer parent)
    {
        Selector = selector;
        FilterValueGetter = () =&gt; default(TProperty);
        EntityType = typeof(TEntity);
        this.parent = parent;
    }

    public Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; Selector { get; private set; }
    public Func&lt;TProperty&gt; FilterValueGetter { get; private set; }
    public Type EntityType { get; private set; }

    // enables the fluent chain: Register(...).Match(...) returns the container
    public IRowAuthPoliciesContainer Match(Func&lt;TProperty&gt; filterValueGetter)
    {
        FilterValueGetter = filterValueGetter;
        return parent;
    }
}
</code></pre>
<p>The class is generic by the type of the entity to which it applies and by the type of the property that will be used in the filter. The property type and the filter value type must match.</p>
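<p>To see how such a policy can turn its two operands into the actual query filter (a sketch of the idea, not the library's exact code), the selector and the filter value can be combined with the Expression API into an <code>entity =&gt; selector(entity) == filterValue</code> lambda:</p>

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical entity used only for this illustration.
public class Customer
{
    public int AreaID { get; set; }
    public string Name { get; set; }
}

public static class AuthFilter
{
    // Combine a property selector with a filter value into a predicate:
    // entity => selector(entity) == filterValue
    public static Expression<Func<TEntity, bool>> BuildWhereExpression<TEntity, TProperty>(
        Expression<Func<TEntity, TProperty>> selector, TProperty filterValue)
    {
        BinaryExpression body = Expression.Equal(
            selector.Body,
            Expression.Constant(filterValue, typeof(TProperty)));

        // reuse the selector's parameter so both expressions refer to
        // the same "entity" variable
        return Expression.Lambda<Func<TEntity, bool>>(body, selector.Parameters);
    }
}
```

<p>The resulting expression can be passed to <code>.Where()</code> on an <code>IQueryable&lt;T&gt;</code>, so Entity Framework can translate it into SQL together with the rest of the query.</p>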
<p>Now, we can create a <code>RowAuthPoliciesContainer</code> with a fluent API that gives a nice and easy way to specify these policies for the entities where we want row-level authorization. We'd like something like this:</p>
<pre><code class="language-language-csharp">public static IRowAuthPoliciesContainer ConfigureRowAuthPolicies()
{
    return new RowAuthPoliciesContainer()
        .Register&lt;Order, int&gt;(o =&gt; o.Customer.AreaID).Match(CurrentUserAreaId)
        .Register&lt;Customer, int&gt;(c =&gt; c.AreaID).Match(CurrentUserAreaId)
        .Register&lt;SalesArea, string&gt;(sa =&gt; sa.CountryCode).Match(CurrentUserCountryCode);
}
</code></pre>
<p>The <code>Register()</code> function receives the Lambda Expression used to select the property to filter by; by then calling the <code>Match()</code> function, we pass the function which gets the filter value from the current user. <code>CurrentUserAreaId()</code> is just a static helper function, quite simple, something like this:</p>
<pre><code class="language-language-csharp">private static int CurrentUserAreaId()
{
    const string areaKeyClaim = &quot;area_key&quot;;
    Claim areaClaim = ClaimsPrincipal.Current.FindFirst(areaKeyClaim);
    int areaId = ClaimsValuesCache.GetArea(areaClaim.Value);
    return areaId;
}
</code></pre>
<p>We may have more such helper functions, which we can reuse across several policies.</p>
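<p>For instance, the <code>CurrentUserCountryCode()</code> helper used in the <em>CountryCode</em> policy above could be built in the same style. Here is a sketch, with a hypothetical <code>country_code</code> claim name; the principal is passed in as a parameter only to keep the sketch easy to test, while the real helper would read <code>ClaimsPrincipal.Current</code>:</p>
<pre><code class="language-language-csharp">using System;
using System.Security.Claims;

static class CurrentUserHelpers
{
    // Hypothetical helper in the same style as CurrentUserAreaId().
    // The principal is a parameter here only to keep the sketch testable;
    // the real helper would read ClaimsPrincipal.Current instead.
    public static string GetCountryCode(ClaimsPrincipal user)
    {
        const string countryClaim = &quot;country_code&quot;; // assumed claim name
        Claim claim = user.FindFirst(countryClaim);
        return claim != null ? claim.Value : string.Empty;
    }
}
</code></pre>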
<p>The <code>RowAuthPoliciesContainer</code> class will create the policy classes when <code>Register()</code> is called and will store them in a dictionary:</p>
<pre><code class="language-language-csharp">class RowAuthPoliciesContainer : IRowAuthPoliciesContainer
{
   readonly Dictionary&lt;Type, object&gt; policies = new Dictionary&lt;Type, object&gt;();

   public RowAuthPolicy&lt;TEntity, TProperty&gt; Register&lt;TEntity, TProperty&gt;(Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; selector)
   {
       var policy = new RowAuthPolicy&lt;TEntity, TProperty&gt;(selector, this);
       policies.Add(policy.EntityType, policy);
       return policy;
   }

   public IRowAuthPolicy&lt;TEntity&gt; GetPolicy&lt;TEntity&gt;()
   {
       return (IRowAuthPolicy&lt;TEntity&gt;) policies[typeof(TEntity)];
   }

   public bool HasPolicy&lt;TEntity&gt;()
   {
       return policies.ContainsKey(typeof(TEntity));
   }
...
}
</code></pre>
<p>Now, let's go back to the <code>Repository</code> and its <code>BuildWhereExpression&lt;T&gt;()</code> function. The first thing to notice is that the filter value is now part of the row-level authorization policy, so it should no longer be passed to the function through the <code>currentUserValue</code> parameter, as in the initial code. It may also be of different types: not only <code>int</code>, but also <code>string</code> or others, as in the <em>CountryCode</em> policy in the above example. Moreover, <code>Repository.BuildWhereExpression&lt;T&gt;()</code> can only be generic by the type of the entity, which means it does not know the type of the property and therefore cannot build the Lambda Expression. To fix all this, we can delegate the building of the Lambda Expression to the policy class. After refactoring, we'll have:</p>
<pre><code class="language-language-csharp">public class Repository : IRepository
{
    private IRowAuthPoliciesContainer container;

    public IQueryable&lt;T&gt; GetEntities&lt;T&gt;() where T : class
    {
         Expression&lt;Func&lt;T, bool&gt;&gt; authFilter = BuildWhereExpression&lt;T&gt;();
         return Context.Set&lt;T&gt;()
                  .Where(authFilter)
                  .AsNoTracking();
    }

    private Expression&lt;Func&lt;T, bool&gt;&gt; BuildWhereExpression&lt;T&gt;()
    {
        if (container.HasPolicy&lt;T&gt;())
        {
            IRowAuthPolicy&lt;T&gt; policy = container.GetPolicy&lt;T&gt;();
            return policy.BuildAuthFilterExpression();
        }
        else
        {
            Expression&lt;Func&lt;T, bool&gt;&gt; trueExpression = entity =&gt; true;
            return trueExpression;
        }
    }
...
}
</code></pre>
<p>The <code>RowAuthPolicy&lt;TEntity, TProperty&gt;</code> class also gets refactored to hide the <code>Selector</code> and the <code>FilterValueGetter</code> and to use them internally to build the Lambda Expression for the authorization filter:</p>
<pre><code class="language-language-csharp">class RowAuthPolicy&lt;TEntity, TProperty&gt; : IRowAuthPolicy&lt;TEntity&gt;
{
    private readonly Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; selector;
    private Func&lt;TProperty&gt; filterValueGetter;
    private readonly IRowAuthPoliciesContainer parent;
    
    public RowAuthPolicy(Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; selector, IRowAuthPoliciesContainer parent)
    {
        this.selector = selector;
        this.parent = parent;
        this.filterValueGetter = () =&gt; default(TProperty);
        EntityType = typeof(TEntity);
    }

    public Type EntityType { get; private set; }

    public Expression&lt;Func&lt;TEntity, bool&gt;&gt; BuildAuthFilterExpression()
    {
        TProperty value = filterValueGetter.Invoke();
        Expression&lt;Func&lt;TProperty&gt;&gt; filterValueParam = () =&gt; value;

        var filterExpression = Expression.Lambda&lt;Func&lt;TEntity, bool&gt;&gt;(
            Expression.MakeBinary(ExpressionType.Equal,
                Expression.Convert(selector.Body, typeof(TProperty)),
                filterValueParam.Body),
            selector.Parameters);

        return filterExpression;
    }

    public IRowAuthPoliciesContainer Match(Func&lt;TProperty&gt; filterValueGetFunc)
    {
        this.filterValueGetter = filterValueGetFunc;
        return parent;
    }
...
}
</code></pre>
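<p>Although in the application EF translates this expression to SQL, the same construction can be exercised in isolation with LINQ to Objects. Here is a standalone sketch, with a hypothetical <code>Customer</code> type, of the <em>selector equals value</em> shape that <code>BuildAuthFilterExpression()</code> produces:</p>
<pre><code class="language-language-csharp">using System;
using System.Linq;
using System.Linq.Expressions;

class Customer { public int AreaID { get; set; } }

static class AuthFilterDemo
{
    // Builds selector.Body == value, reusing the selector's parameter,
    // which is the same shape BuildAuthFilterExpression() produces.
    public static Expression&lt;Func&lt;T, bool&gt;&gt; BuildEquals&lt;T, TProp&gt;(
        Expression&lt;Func&lt;T, TProp&gt;&gt; selector, TProp value)
    {
        Expression&lt;Func&lt;TProp&gt;&gt; valueExpr = () =&gt; value;
        return Expression.Lambda&lt;Func&lt;T, bool&gt;&gt;(
            Expression.MakeBinary(ExpressionType.Equal,
                Expression.Convert(selector.Body, typeof(TProp)),
                valueExpr.Body),
            selector.Parameters);
    }
}
</code></pre>
<p>Compiling the returned expression and applying it to an in-memory list keeps only the customers with the matching <code>AreaID</code>, which mirrors what EF does when it translates the same tree to a <code>WHERE</code> clause.</p>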
<p>And with this we have completed the implementation. We now have row-level authorization consistently implemented for any entity that needs it. We added it only by extending the data access, so we could introduce it with minimal effort even at a late stage of the project, provided we have a well encapsulated data access implementation, which assures that all the queries and commands go through it in a consistent manner. Now, with the policies we have defined, all the screens that show <em>Customers</em>, <em>Orders</em> or <em>SalesAreas</em> will automatically be filtered based on the access rights of the current user. If we later want all the screens that show <em>Products</em> to be filtered as well, we just add a new registration in the policy container:</p>
<pre><code class="language-language-csharp">    .Register&lt;Product, int&gt;(p =&gt; p.Producer.AreaID).Match(CurrentUserAreaId)
</code></pre>
<p>You can find the entire source code of this sample in my <a href="https://oncodedesign.com/training-code-design">Code Design Training</a> GitHub repository <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/LessonsSamples/LessonsSamples/Lesson8/RowLevelAuth?ref=oncodedesign.com">here</a>.</p>
<hr>
<p>This can be further extended:</p>
<ul>
<li>You may have more policies for one entity and use them based on some context, or combine them in different ways. You can also add a <code>.When()</code> function to the <code>RowAuthPolicy</code> class to specify that the rule should apply only under a condition. For example, the registration below says that the rule applies only if the user is a sales person:</li>
</ul>
<pre><code class="language-language-csharp">  .Register&lt;SalesArea, string&gt;(sa =&gt; sa.CountryCode).When(CurrentUserIsSales).Match(CurrentUserCountryCode);
</code></pre>
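<p>One possible way to implement <code>When()</code> is to store the condition as a <code>Func&lt;bool&gt;</code> and fall back to the always-true filter when the condition does not hold. A minimal, hypothetical sketch (a trimmed-down policy class, not the one from the article):</p>
<pre><code class="language-language-csharp">using System;
using System.Linq.Expressions;

class SalesArea { public string CountryCode { get; set; } }

// Hypothetical, trimmed-down policy showing one way When() could work:
// the condition is checked when the filter expression is built.
class ConditionalPolicy&lt;TEntity, TProperty&gt;
{
    private readonly Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; selector;
    private Func&lt;TProperty&gt; filterValueGetter = () =&gt; default(TProperty);
    private Func&lt;bool&gt; condition = () =&gt; true; // applies unconditionally by default

    public ConditionalPolicy(Expression&lt;Func&lt;TEntity, TProperty&gt;&gt; selector)
    {
        this.selector = selector;
    }

    public ConditionalPolicy&lt;TEntity, TProperty&gt; When(Func&lt;bool&gt; condition)
    {
        this.condition = condition;
        return this;
    }

    public ConditionalPolicy&lt;TEntity, TProperty&gt; Match(Func&lt;TProperty&gt; getter)
    {
        this.filterValueGetter = getter;
        return this;
    }

    public Expression&lt;Func&lt;TEntity, bool&gt;&gt; BuildAuthFilterExpression()
    {
        if (!condition())
            return entity =&gt; true; // rule suspended: nothing is filtered

        TProperty value = filterValueGetter();
        Expression&lt;Func&lt;TProperty&gt;&gt; valueExpr = () =&gt; value;
        return Expression.Lambda&lt;Func&lt;TEntity, bool&gt;&gt;(
            Expression.Equal(
                Expression.Convert(selector.Body, typeof(TProperty)),
                valueExpr.Body),
            selector.Parameters);
    }
}
</code></pre>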
<br>
<ul>
<li>Another thing to consider is whether you have use cases where the related entity collections should also be filtered by this mechanism. This sample implementation only takes care of filtering the root queries; if you eager load the related entities, some more work is needed. For example, if you have a scenario where <em>OrderLines</em> have <em>Products</em> that belong to an area the current user cannot access, a rule like:</li>
</ul>
<pre><code class="language-language-csharp">   .Register&lt;OrderLine, int&gt;(ol =&gt; ol.Product.Producer.AreaID).Match(CurrentUserAreaId)
</code></pre>
<p>will get applied when the <em>OrderLines</em> are loaded like:</p>
<pre><code class="language-language-csharp">var q = repository.GetEntities&lt;OrderLine&gt;()
    .Where(ol =&gt; ol.OrderID == orderId);
return q.ToArray();
</code></pre>
<p>However, it will not be applied when they are loaded through the related entity collection like:</p>
<pre><code class="language-language-csharp">var c = repository.GetEntities&lt;Order&gt;()
    .Include(o =&gt; o.OrderLines)
    .Where(o =&gt; o.ID == orderId);
</code></pre>
<p>or like:</p>
<pre><code class="language-language-csharp">var q = repository.GetEntities&lt;Order&gt;()
    .Select(o =&gt; new
    {
        o.ID,
        o.OrderLines,
        o.OrderDate
    })
    .Where(o =&gt; o.ID == orderId);
</code></pre>
<p>For both these examples you need to parse the Lambda Expression and rewrite it to append the authorization filter.</p>
<p>The <code>.Include()</code> is specific to EF, so you will need to build an EF specific parser. EF does not provide a way to filter the rows loaded with <code>Include()</code>, so you could use the <a href="https://github.com/zzzprojects/EntityFramework-Plus?ref=oncodedesign.com">EntityFramework-Plus</a> third-party project, which provides an <code>.IncludeFilter()</code> operator.</p>
<p>For the second example, you would need to rewrite the query into something like:</p>
<pre><code class="language-language-csharp">var q = repository.GetEntities&lt;Order&gt;()
    .Select(o =&gt; new
    {
        o.ID,
        OrderLines = o.OrderLines.Where(ol =&gt; ol.Product.Producer.AreaID == areaId),
        o.OrderDate
    })
    .Where(o =&gt; o.ID == orderId);
</code></pre>
<p>This will filter the related entity collection when it is loaded and projected into the anonymous type.</p>
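<p>Such a projection rewrite can be automated with an <code>ExpressionVisitor</code>. Here is a minimal sketch, with simplified, hypothetical <code>Order</code>/<code>OrderLine</code> types; it matches the collection property by name, whereas a real implementation would look up the registered policies:</p>
<pre><code class="language-language-csharp">using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

class OrderLine { public int AreaID { get; set; } }
class Order { public List&lt;OrderLine&gt; OrderLines { get; set; } }

// Rewrites any access to the member named memberName into
// member.Where(filter), appending the authorization filter.
class CollectionFilterRewriter&lt;TChild&gt; : ExpressionVisitor
{
    private readonly string memberName;
    private readonly Expression&lt;Func&lt;TChild, bool&gt;&gt; filter;

    public CollectionFilterRewriter(string memberName, Expression&lt;Func&lt;TChild, bool&gt;&gt; filter)
    {
        this.memberName = memberName;
        this.filter = filter;
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        Expression visited = base.VisitMember(node);
        if (node.Member.Name == memberName)
            return Expression.Call(typeof(Enumerable), &quot;Where&quot;,
                new[] { typeof(TChild) }, visited, filter);
        return visited;
    }
}
</code></pre>
<p>Visiting the projection <code>o =&gt; o.OrderLines</code> with this rewriter yields the filtered form shown in the rewritten query above.</p>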
<h5 id="morediscussionsondataaccesssecurityarepartofmycodedesigntraining">More discussions on data access security are part of my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecreditmaxkabakovvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_maxkabakov?ref=oncodedesign.com">maxkabakov via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Data Isolation and Sharing in a Multitenant System - Part 3 ]]>
            </title>
            <description>
                <![CDATA[ This is the last part of the &quot;Data Isolation and Sharing in a Multitenant System&quot; article, which is also a continuation of my post that outlines the benefits of a well encapsulated data access.


In this post, we&#39;ll look in detail at the implementation of the ]]>
            </description>
            <link>https://oncodedesign.com/blog/data-isolation-and-sharing-in-multitenant-system-part3/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba0</guid>
            <category>
                <![CDATA[ multitenancy ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 02 Mar 2017 08:52:14 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1488123014/shared-database-multitenancy.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>This is the last part of the &quot;Data Isolation and Sharing in a Multitenant System&quot; article, which is also a continuation of my <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/">post</a> that outlines the benefits of a well encapsulated data access.</p>
<p>In this post, we'll look in detail at the implementation of the <em>Shared Database</em> strategy, explained in the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part1/">first part</a>. We'll see how to <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part3/#refactorthedatabaseschema">refactor the database schema</a> for multitenancy and then how to <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part3/#isolatetenantdatathroughdataaccess">build a Lambda Expression</a> to filter the <em>tenant specific data</em>.</p>
<p>With this strategy we'll have one database for all the tenants, which holds both <em>tenant shared data</em> and <em>tenant specific data</em>:</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_577,h_625,c_pad,g_north/l_text:PT%20Sans_20:Multitenancy%20-%20Shared%20Database,g_south,co_rgb:333333/multitenancy-shared-database.png" alt="Multitenancy - Shared Database" loading="lazy"></p>
<p>This means that all the tables that hold <em>tenant specific data</em> will have <code>TenantID</code> as the discriminant column. Here is a small view of the database diagram for our example, where the tenant specific tables are shown in blue.</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_578,h_761,c_pad,g_north/l_text:PT%20Sans_20:Physio%20Database%20Diagram,g_south,co_rgb:333333/physio-db-diagram.png" alt="Multitenancy - Database Diagram" loading="lazy"></p>
<p>If we're starting the project with multitenancy in mind, we'll add the <code>TenantID</code> from the start, so we can skip the next section and go directly to the data access. Otherwise, if we are adding multitenancy at a later stage of the project, we need to refactor the database to add the <code>TenantID</code> column to the tenant specific tables.</p>
<h2 id="refactorthedatabaseschema">Refactor the Database Schema</h2>
<p>Doing this is not as easy as it sounds, especially if we have data which we want to preserve.</p>
<p>Basically, we need to create the <code>Tenants</code> table and insert into it a row for the first tenant. (We assume that all existing data belongs to this <em>First Tenant</em>.)</p>
<pre><code class="language-language-sql">GO
BEGIN TRAN

CREATE TABLE [dbo].[Tenants] (
    [ID]   INT           IDENTITY (1, 1) NOT NULL,
    [Name] NVARCHAR (50) NOT NULL,
    [Key]  NVARCHAR (50) NOT NULL,
    CONSTRAINT [PK_Tenants] PRIMARY KEY CLUSTERED ([ID] ASC)
);

CREATE UNIQUE NONCLUSTERED INDEX [UK_Tenants_Key]
    ON [dbo].[Tenants]([Key] ASC);

SET IDENTITY_INSERT [dbo].[Tenants] ON
INSERT INTO [dbo].[Tenants] ([ID], [Name], [Key]) 
	VALUES (1, 'First Tenant', 'FirstTenant')
SET IDENTITY_INSERT [dbo].[Tenants] OFF

COMMIT TRAN
</code></pre>
<p>Then, we should alter each table with <em>tenant specific data</em> to add the <code>TenantID</code> column, plus a foreign key (FK) to the <code>Tenants</code> table. Here, existing data complicates things: we should first add the <code>TenantID</code> as nullable, then update all the rows with <code>SET TenantID = 1</code>, and then alter the table again to make the <code>TenantID</code> not nullable. Here is a simplified version of the script for the <code>Patients</code> table:</p>
<pre><code class="language-language-sql">GO
BEGIN TRAN

ALTER TABLE [dbo].[Patients] 
	ADD [TenantID] INT NULL

ALTER TABLE [dbo].[Patients] WITH CHECK
    ADD CONSTRAINT [FK_Patients_Tenants] FOREIGN KEY ([TenantID]) REFERENCES [dbo].[Tenants] ([ID])

GO
UPDATE Patients SET [TenantID] = 1 WHERE [TenantID] IS NULL

ALTER TABLE [dbo].[Patients] 
	ALTER COLUMN [TenantID] INT NOT NULL

COMMIT TRAN
</code></pre>
<p>Based on this, we can create a small tool (it may be just another T-SQL script) which generates such scripts for all the tables that keep <em>tenant specific data</em>.</p>
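<p>A minimal sketch of such a generator, as a hypothetical C# helper that emits, for a given table name, the same refactor pattern used above for <code>Patients</code>:</p>
<pre><code class="language-language-csharp">using System;

static class MultitenancyScriptGenerator
{
    // Hypothetical generator: emits for one table the same refactor
    // pattern used above for the Patients table.
    public static string ForTable(string table)
    {
        return
            &quot;ALTER TABLE [dbo].[&quot; + table + &quot;] ADD [TenantID] INT NULL\n&quot; +
            &quot;ALTER TABLE [dbo].[&quot; + table + &quot;] WITH CHECK\n&quot; +
            &quot;    ADD CONSTRAINT [FK_&quot; + table + &quot;_Tenants] FOREIGN KEY ([TenantID]) REFERENCES [dbo].[Tenants] ([ID])\n&quot; +
            &quot;GO\n&quot; +
            &quot;UPDATE &quot; + table + &quot; SET [TenantID] = 1 WHERE [TenantID] IS NULL\n&quot; +
            &quot;ALTER TABLE [dbo].[&quot; + table + &quot;] ALTER COLUMN [TenantID] INT NOT NULL&quot;;
    }
}
</code></pre>
<p>Running it over the list of tenant specific table names and concatenating the results gives the full refactor script.</p>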
<p>Another option for the database schema refactor is to use an existing schema compare tool, such as the <a href="https://msdn.microsoft.com/en-us/library/hh272686(v=vs.103).aspx?ref=oncodedesign.com">SQL Server Data Tools</a> (aka the Visual Studio Database Project).</p>
<p>Here are the steps we should follow with it (you can track the code changes for all these steps in my <a href="https://github.com/onCodeDesign/Code-Design-Training?ref=oncodedesign.com">Code-Design-Training GitHub repository</a>; I've made a tag for each step, so you can easily follow the progress):</p>
<ol>
<li>Create a database project with the current schema of the database (the one without multitenancy things) [<em>Tag on GitHub:</em> <a href="https://github.com/iQuarc/Code-Design-Training/tree/mt-shared-step1/MultitenancySamples/SharedDbConsoleDemo?ref=oncodedesign.com"><code>mt-shared-step1</code></a>]</li>
<li>Edit the schema in the tool, to add the <code>Tenants</code> table, the <code>TenantID</code> column and the foreign keys [<em>Tag on GitHub:</em> <a href="https://github.com/iQuarc/Code-Design-Training/tree/mt-shared-step2/MultitenancySamples/SharedDbConsoleDemo?ref=oncodedesign.com"><code>mt-shared-step2</code></a>]</li>
</ol>
<ul>
<li>we can do this in Visual Studio, or we could publish the schema to a temporary database (which has no data), make the changes there (with SQL Management Studio) and then update the Database project schema back from it</li>
</ul>
<ol start="3">
<li>Generate the <em>Publish Script</em> from the Database project to our original database [<em>Tag on GitHub:</em> <a href="https://github.com/iQuarc/Code-Design-Training/tree/mt-shared-step3/MultitenancySamples/SharedDbConsoleDemo?ref=oncodedesign.com"><code>mt-shared-step3</code></a>]</li>
</ol>
<ul>
<li>this should be a script that prevents data loss and assures a safe way to refactor the database</li>
</ul>
<ol start="4">
<li>Edit the generated  <em>Publish Script</em> to insert the row for the <em>First Tenant</em>, to set the FKs to it, and to make the FKs not nullable [<em>Tag on GitHub:</em> <a href="https://github.com/iQuarc/Code-Design-Training/tree/mt-shared-step4/MultitenancySamples/SharedDbConsoleDemo?ref=oncodedesign.com"><code>mt-shared-step4</code></a>]</li>
</ol>
<ul>
<li>for each table, the generated script creates a temporary table with the new schema (which contains the <code>TenantID</code> column) and copies the existing data into it. Then it drops the original table and renames the temporary one as the original. So, we do the following:
<ul>
<li>search for all <code>CREATE TABLE</code> statements and make the <code>TenantID NOT NULL</code></li>
<li>add in all <code>INSERT INTO ... SELECT (..)</code> statements the <code>TenantID</code> and its PK value <code>1</code></li>
<li>add the <code>INSERT INTO [dbo].[Tenants] ([ID], [Name], [Key]) VALUES (1, 'First Tenant', 'FirstTenant')</code> after the <code>CREATE TABLE [dbo].[Tenants] (...</code></li>
</ul>
</li>
</ul>
<ol start="5">
<li>Run the <em>Publish Script</em> against the database to execute the refactor [<em>Tag on GitHub:</em> <a href="https://github.com/iQuarc/Code-Design-Training/tree/mt-shared-step1/MultitenancySamples/SharedDbConsoleDemo?ref=oncodedesign.com"><code>mt-shared-step5</code></a>]</li>
</ol>
<ul>
<li>after this we could update the database project with the changes we did to the database with this refactor</li>
</ul>
<h2 id="isolatetenantdatathroughdataaccess">Isolate Tenant Data through Data Access</h2>
<p>The next step is to make sure that tenant data is isolated. This means that each query or command we send to the database must have a <code>WHERE TenantID = ...</code> clause appended. Here, having a well encapsulated data access implementation makes a huge difference, because it assures us that all the queries and commands go through it, so we can intercept them to append the <code>WHERE</code>.</p>
<p>If we are using a LINQ based data access, we can build a Lambda Expression for the <code>TenantID = currentTenantId</code> filter and add a <code>.Where()</code> with it to the <code>IQueryable&lt;T&gt;</code> we're going to pass to the caller. If the data access is built on top of Entity Framework (EF), this simplifies to adding the <code>.Where()</code> to the <code>DbContext.DbSet&lt;T&gt;</code> property.</p>
<p>Let's take the <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc.DataAccess</a> as an example of a data access implementation (for more on this, see my older post <a href="https://oncodedesign.com/separating-data-access-concern/">&quot;Separating Data Access Concern&quot;</a>). We should modify the <code>Repository.GetEntities&lt;T&gt;()</code> and the <code>UnitOfWork.GetEntities&lt;T&gt;()</code> functions, from:</p>
<pre><code class="language-language-csharp">public class Repository : IRepository, IDisposable
{
  public IQueryable&lt;T&gt; GetEntities&lt;T&gt;() where T : class
  {
    return Context.Set&lt;T&gt;().AsNoTracking();
  }
...
}
</code></pre>
<p>into</p>
<pre><code class="language-language-csharp">public class Repository : IRepository
{
     public IQueryable&lt;T&gt; GetEntities&lt;T&gt;() where T : class
     {
         int tenantId = GetCurrentUserTenantId();
         Expression&lt;Func&lt;T, bool&gt;&gt; condition = BuildWhereExpression&lt;T&gt;(tenantId);

         return Context.Set&lt;T&gt;()
                  .Where(condition)
                  .AsNoTracking();
     }
...
}
</code></pre>
<p>Actually, we don't have to modify the existing <code>Repository</code> or <code>UnitOfWork</code> classes. We could apply the <a href="https://oncodedesign.com/training-design-patterns/#Decorator"><em>Decorator</em> pattern</a> and write a <code>MultitenancyRepository</code> which wraps the existing implementation and does the filtering based on the current tenant.</p>
<p>Let's look at the two helper functions we've introduced.</p>
<p>The <code>GetCurrentUserTenantId()</code> is similar to the one from the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part2/">previous post</a>, where we looked at the <em>separate databases</em> strategy implementation.</p>
<pre><code class="language-language-csharp">private int GetCurrentUserTenantId()
{
    const string tenantKeyClaim = &quot;tenant_key&quot;;
    Claim tenantClaim = ClaimsPrincipal.Current.FindFirst(tenantKeyClaim);
    int tenantId = tenantsCache[tenantClaim.Value];
    return tenantId;
}
</code></pre>
<p>It relies on the existing <code>tenant_key</code> claim of the current user, which should be set by the authentication mechanism. Then, it uses a cache built from the <code>Tenants</code> table to return the <code>tenantId</code> that corresponds to the key. Nothing fancy.</p>
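<p>The cache itself can be as simple as a dictionary from the tenant key to the <code>TenantID</code>. A minimal, hypothetical sketch; in the application it would be populated once, from the <code>Tenants</code> table:</p>
<pre><code class="language-language-csharp">using System.Collections.Generic;

// Hypothetical cache: maps the tenant_key claim value to the TenantID.
// In the application it would be populated once, from the Tenants table.
class TenantsCache
{
    private readonly Dictionary&lt;string, int&gt; keyToId = new Dictionary&lt;string, int&gt;();

    public void Add(string tenantKey, int tenantId)
    {
        keyToId[tenantKey] = tenantId;
    }

    public int this[string tenantKey]
    {
        get { return keyToId[tenantKey]; }
    }
}
</code></pre>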
<p>The <code>BuildWhereExpression&lt;T&gt;()</code> is a bit more complex. It needs to build a <a href="https://msdn.microsoft.com/en-us/library/system.linq.expressions.binaryexpression(v=vs.110).aspx?ref=oncodedesign.com">Binary Expression</a> with the <em>equals</em> operator. The <em>left operand</em> should be the <code>TenantID</code> property of the current entity and the <em>right operand</em> the <code>tenantId</code> which is passed as a parameter. For the <code>Patient</code> entity this would be: <code>patient.TenantID == tenantId</code>.</p>
<p>This means that all the <em>tenant specific</em> entities should have the <code>TenantID</code> property mapped to the <code>TenantID</code> column. So we should change the EF code generator to make these entities implement the <code>ITenantEntity</code> interface:</p>
<pre><code class="language-language-csharp">interface ITenantEntity
{
    int TenantID { get; set; }
}
</code></pre>
<p>Having this, we can easily build the left operand, and we can also tell whether the current entity is <em>tenant specific</em> or not. The operands are:</p>
<ul>
<li>left operand: <code>tenantIdSelector = entity =&gt; entity.TenantID;</code></li>
<li>right operand: <code>tenantIdParam = () =&gt; tenantId;</code></li>
</ul>
<p>so the entire function code goes like this:</p>
<pre><code class="language-language-csharp">private Expression&lt;Func&lt;T, bool&gt;&gt; BuildWhereExpression&lt;T&gt;(int tenantId)
{
    if (IsTenantEntity&lt;T&gt;())
    {
        Expression&lt;Func&lt;ITenantEntity, int&gt;&gt; tenantIdSelector = entity =&gt; entity.TenantID;
        Expression&lt;Func&lt;int&gt;&gt; tenantIdParam = () =&gt; tenantId;

        var filterExpression= Expression.Lambda&lt;Func&lt;T, bool&gt;&gt;(
            Expression.MakeBinary(ExpressionType.Equal,
                Expression.Convert(tenantIdSelector.Body, typeof(int)),
                tenantIdParam.Body),
            tenantIdSelector.Parameters);

        return filterExpression;
    }
    else
    {
        Expression&lt;Func&lt;T, bool&gt;&gt; trueExpression = entity =&gt; true;
        return trueExpression;
    }
}
</code></pre>
<p>If the current entity type does not implement the <code>ITenantEntity</code> interface, then on the <code>else</code> branch we just build and return an expression which is always <code>true</code>, so the <code>.Where()</code> we append has no effect for the <em>tenant shared</em> tables.</p>
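<p>The <code>IsTenantEntity&lt;T&gt;()</code> helper is not shown in the snippet above; a straightforward implementation checks whether the entity type implements the interface. A sketch, with sample entity types added only for illustration:</p>
<pre><code class="language-language-csharp">using System;

// The interface is repeated here only to keep the sketch self-contained.
interface ITenantEntity { int TenantID { get; set; } }

class Patient : ITenantEntity { public int TenantID { get; set; } }
class Diagnostic { } // tenant shared: no TenantID

static class TenantEntityCheck
{
    // One straightforward way to implement IsTenantEntity&lt;T&gt;()
    public static bool IsTenantEntity&lt;T&gt;()
    {
        return typeof(ITenantEntity).IsAssignableFrom(typeof(T));
    }
}
</code></pre>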
<p>Now, if we do the same for the <code>UnitOfWork</code> class, we have consistently achieved isolation of each tenant's data at the data access level.</p>
<p>You can see a full running demo of this implementation if you go to my Code Design Training GitHub repository and open the <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/MultitenancySamples/SharedDbConsoleDemo?ref=oncodedesign.com">SharedDbConsoleDemo</a> sample.</p>
<h5 id="morediscussionsmultitenancydesignarepartofmycodedesigntraining">More discussions on multitenancy design are part of my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecreditalinoubighvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_alinoubigh?ref=oncodedesign.com">alinoubigh via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Data Isolation and Sharing in a Multitenant System - Part 2 ]]>
            </title>
            <description>
                <![CDATA[ This post is the second part of the article that shows how a well encapsulated data access implementation can play a key role in implementing a multitenant application. It is also a continuation of my post that outlines the additional benefits such a data access implementation may bring, multitenancy being ]]>
            </description>
            <link>https://oncodedesign.com/blog/data-isolation-and-sharing-in-multitenant-system-part2/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b9e</guid>
            <category>
                <![CDATA[ multitenancy ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 14 Feb 2017 09:09:38 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1485285708/separate-databases-multitenancy.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>This post is the second part of the article that shows how a well encapsulated data access implementation can play a key role in implementing a multitenant application. It is also a continuation of my <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/">post</a> that outlines the additional benefits such a data access implementation may bring, multitenancy being one of them.</p>
<p>The previous post (the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part1">first part</a> of the article) focuses on presenting the main two strategies for addressing <em>Data Isolation and Sharing</em> in a multitenant application:</p>
<ul>
<li><strong>Separate Databases</strong> one for each tenant, or</li>
<li>a <strong>Shared Database</strong> used by all tenants</li>
</ul>
<p>It also shows how we can make an informed decision on which of these strategies fits best in a certain context.</p>
<p>Now, we go into the implementation details of these strategies. Both are in C# and rely on a well encapsulated data access implementation, for example one like the <a href="http://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc Data Access</a>, which I presented a while ago in <a href="https://oncodedesign.com/separating-data-access-concern/">this post</a>. This post covers the first strategy, and the next one covers the second.</p>
<p>We'll use the same example as in the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part1/#dataisolationandsharing">first part</a>: the multitenant application used in clinics for physiotherapy, where the clinics are the tenants, the patient related data represents <em>tenant specific data</em> and the diagnostics or common exercises related data is the <em>tenant shared data</em>.</p>
<h2 id="separatedatabasesimplementation">Separate Databases Implementation</h2>
<p>With this strategy we'll have one database for each tenant: if we have <em>n</em> tenants, we'll have <em>n</em> databases. When a new tenant comes, we have to create a new database for it.</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_650/w_650,h_640,c_pad,g_north/l_text:PT%20Sans_20:Multitenancy%20-%20Separate%20Databases,g_south,co_rgb:333333/multitenancy-separate-databases.png" alt="Multitenancy - Separate Databases" loading="lazy"></p>
<p>Being in a multitenant architecture, we have the same instance of the application serving all the users from all the tenants. Based on the tenant to which the current user belongs, the application should connect to the database that corresponds to that tenant. To maximize the cost savings that multitenancy may bring, we'll have the same database schema (same version) for all the databases of all the tenants.</p>
<p>At the database schema level, we do not need any changes from the design of a non-multitenant application. Each database will have the tables both for the <em>tenant specific data</em> and for the <em>tenant shared data</em>; the <em>tenant shared data</em> will be duplicated in all the databases. The <code>DataModel</code> assembly, which contains the DTOs that EF maps to the tables of the database, will also suffer no changes. Moreover, having the same database schema for all tenants means that we can have the same <code>DataModel</code> for all. (For more details on why the EF DTOs are generated in a <code>DataModel</code> assembly, separate from the <code>DataAccess</code> implementation, see the <a href="https://oncodedesign.com/separating-data-access-concern/">Separating Data Access Concern</a> post.)</p>
<p>The only place where we need to intervene to implement multitenancy with this strategy is in the <code>DataAccess</code>, more precisely where we create a connection to the database. This is the <strong>only</strong> place because we are relying on a well encapsulated data access implementation, which assures us that a connection to the database is made only here. Otherwise, we would have to go through all the services or controllers that might create an Entity Framework <code>DbContext</code> (or a connection to the database by other means) and deal with those places as well.</p>
<p>First we need a configuration of the connection strings which allows us to identify them for each tenant. We'll keep them in the config file.</p>
<p>One option is to have one connection string per tenant and use a convention for the <code>name</code> attribute based on the <code>tenant_key</code>. Something like:</p>
<pre><code class="language-language-markup">&lt;connectionStrings&gt;
  &lt;add name=&quot;Tenant1_PhysioDb&quot; connectionString=&quot;...&quot; /&gt;
  &lt;add name=&quot;Tenant2_PhysioDb&quot; connectionString=&quot;...&quot; /&gt;
  ...
&lt;/connectionStrings&gt;
</code></pre>
<p>Another option is to have a template connection string with a convention for the database name based on the <code>tenant_key</code>. The application will have to replace the <code>&lt;tenant_key&gt;</code> placeholder with the key of the current tenant before the connection string is used. This would look like:</p>
<pre><code class="language-language-markup">&lt;connectionStrings&gt;
  &lt;add name=&quot;PhysioDb&quot; 
     connectionString=&quot;...;initial catalog=&lt;tenant_key&gt;_PhysioDb;...&quot; /&gt;
&lt;/connectionStrings&gt;
</code></pre>
<p>This option works fine if we deploy all the databases on the same server and use the same credentials for the application to connect to any of them. Otherwise, the template gets too complicated.</p>
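<p>The placeholder replacement can be sketched as below. This is a minimal example, not part of any library: the <code>TenantConnectionStrings</code> helper class and the <code>&lt;tenant_key&gt;</code> token are conventions assumed here.</p>
<pre><code class="language-language-csharp">using System.Configuration;

public static class TenantConnectionStrings
{
    // Replaces the &lt;tenant_key&gt; placeholder in the template
    // connection string with the key of the current tenant.
    public static string GetTenantConnectionString(string tenantKey)
    {
        string template = ConfigurationManager
            .ConnectionStrings[&quot;PhysioDb&quot;].ConnectionString;
        return template.Replace(&quot;&lt;tenant_key&gt;&quot;, tenantKey);
    }
}
</code></pre>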
<p>Next, we need to make the application use the connection string corresponding to the tenant of the current user. The Entity Framework (EF) <code>DbContext</code> receives in its constructor the name of the connection string to use. We need to change the code generator to make this constructor public on the class it generates for our specific data model (which inherits from <code>DbContext</code>). It should look like:</p>
<pre><code class="language-language-csharp">public class PhysioEntities : DbContext
{
    public PhysioEntities(string nameOrConnectionString)
        : base(nameOrConnectionString)
    {
    }

    public virtual DbSet&lt;PatientFile&gt; PatientFiles { get; set; }
    public virtual DbSet&lt;RehabSession&gt; RehabSessions { get; set; }
    ...
}
</code></pre>
<p>Having a well encapsulated data access means that the <code>PhysioEntities</code> class is hidden from the rest of the code. It is instantiated inside the data access and used by a <em>Repository</em> or a <em>Unit of Work</em> implementation. If we look into the <em>iQuarc Data Access</em> library <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">code</a>, we see that this is already well separated. The <a href="https://github.com/iQuarc/DataAccess/blob/master/src/iQuarc.DataAccess/IDbContextFactory.cs?ref=oncodedesign.com"><code>IDbContextFactory</code></a> abstraction takes the responsibility of constructing the context, and the <code>Repository</code> or <code>UnitOfWork</code> classes only use it as a <code>DbContext</code> instance. (An application of the <em>Separate construction and configuration from use</em> principle.) Here is a simplified snippet of this code.</p>
<pre><code class="language-language-csharp">public interface IDbContextFactory
{
    DbContext CreateContext();
}

public class Repository : IRepository, IDisposable
{
    private IDbContextFactory contextFactory;    
    public Repository(IDbContextFactory contextFactory, IInterceptorsResolver interceptorsResolver, IDbContextUtilities contextUtilities)
    {
        this.contextFactory = contextFactory;
        ...
    }

    private DbContext context;
    protected DbContext Context
    {
        get 
        { 
            if (context == null)
                context = contextFactory.CreateContext();
            return context; 
        }
    }

    public IQueryable&lt;T&gt; GetEntities&lt;T&gt;() where T : class
    {
        return Context.Set&lt;T&gt;().AsNoTracking();
    }
    ...
}
</code></pre>
<p>If we already have this abstraction in place, we don't even need to change the core code of the data access implementation to add multitenancy. We only need to provide another implementation of the <code>IDbContextFactory</code>. If we don't have it, we just need to create the abstraction as above. It's simple: delegate the <code>new PhysioEntities()</code> call to <code>IDbContextFactory.CreateContext()</code>.</p>
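<p>Extracting that construction could look like the sketch below (the <code>DefaultDbContextFactory</code> name is an assumption for illustration; it simply wraps the existing <code>new PhysioEntities()</code> call behind the abstraction):</p>
<pre><code class="language-language-csharp">// Default, non-multitenant factory: always constructs the context
// with the single configured connection string.
public class DefaultDbContextFactory : IDbContextFactory
{
    public DbContext CreateContext()
    {
        return new PhysioEntities(&quot;PhysioDb&quot;);
    }
}
</code></pre>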
<p>The multitenancy <code>IDbContextFactory</code> implementation will have to take the <code>tenant_key</code> of the current user, determine the connection string to use, and instantiate the <code>PhysioEntities</code> with it. Here is a snippet of the code:</p>
<pre><code class="language-language-csharp">public class MultitenancyDbContextFactory : IDbContextFactory
{
    public DbContext CreateContext()
    {
        string tenantKey = GetTenantKeyFromCurrentUser();
        string connectionName = $&quot;{tenantKey}_PhysioDb&quot;;
        return new PhysioEntities(connectionName);
    }

    private string GetTenantKeyFromCurrentUser()
    {
        const string tenantKeyClaim = &quot;tenant_key&quot;;
        Claim tenantClaim = ClaimsPrincipal.Current.FindFirst(tenantKeyClaim);
        return tenantClaim.Value;
    }
}
</code></pre>
<p>Next, we only need to register the <code>MultitenancyDbContextFactory</code> class in the DI container as the default implementation of the <code>IDbContextFactory</code>, so that the <code>Repository</code> and <code>UnitOfWork</code> use it, and we're done.</p>
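<p>As a sketch, here is how this registration could look with Unity as the container. Any DI container works; the <code>RegisterType</code> call shown is Unity's API, and the choice of Unity here is just an example, not a requirement of the library.</p>
<pre><code class="language-language-csharp">var container = new UnityContainer();

// Register the multitenant factory as the default IDbContextFactory,
// so Repository and UnitOfWork receive it through constructor injection.
container.RegisterType&lt;IDbContextFactory, MultitenancyDbContextFactory&gt;();
</code></pre>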
<p>One thing to notice is that we take the <code>tenant_key</code> from the claims of the current user. We expect that an authenticated user will have this claim, with the corresponding value for the tenant she belongs to. This should be assured by the authentication part, which is another important aspect of a multitenant application. We should have a separate <em>Identity Provider Service</em> which handles identity management and user authentication. For each authenticated user, it should add this <code>tenant_key</code> claim so our application can use it.</p>
<p><em>Identity Management</em> and <em>Authentication</em> in a multitenant application are out of the scope of these posts, but <a href="https://docs.microsoft.com/en-us/azure/guidance/guidance-multitenant-identity?ref=oncodedesign.com">here</a> is good guidance on how to do this using <a href="http://openid.net/?ref=oncodedesign.com">OpenID Connect</a> and <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-whatis?ref=oncodedesign.com">Azure AD</a> as the <em>Identity Provider Service</em>. <a href="https://github.com/IdentityServer?ref=oncodedesign.com">Identity Server</a> is an alternative to Azure AD, which you can host yourself.</p>
<p>Another important aspect of this <em>Data Isolation and Sharing</em> strategy is keeping the database schemas in sync across the databases of all the tenants. The best way to do this is to rely on a tool which can generate the SQL scripts that create the schema of the database. These should include the tables, views, stored procedures, system data and everything else. <a href="https://msdn.microsoft.com/en-us/library/hh272686(v=vs.103).aspx?ref=oncodedesign.com">SQL Server Data Tools</a> (also known as the Visual Studio Database Project) is a good tool for this. Once we generate the database schema, we can add it to git and use it as the single source of truth for the state of the database. With this, when we have a new tenant it is easy to create its database. Even more, such a tool can also generate a diff T-SQL script by comparing the schema in git with a deployed database. This diff T-SQL can update the schema of the deployed database. If we integrate this into the tools that do the deployments in all our environments (testing, acceptance, production), the overhead of having more databases and keeping their schemas in sync can be greatly reduced.</p>
<h2 id="closing">Closing</h2>
<p>This covers the implementation of the <em>Separate Databases</em> strategy for data isolation and sharing in a multitenant application. In the next post, which is the third part of this article, I will detail the implementation of the other strategy we've discussed in the <a href="https://oncodedesign.com/data-isolation-and-sharing-in-multitenant-system-part1">first part</a>, the <em>Shared Database</em> strategy.</p>
<h5 id="morediscussionsonimplementationsformultitenancyarepartofmycodedesigntraining">More discussions on implementations for multitenancy are part of my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecreditalinoubighvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_alinoubigh?ref=oncodedesign.com">alinoubigh via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Comments Opened! ]]>
            </title>
            <description>
                <![CDATA[ When I&#39;ve migrated my blog to Ghost (My Wordpress to Ghost Journey) I&#39;ve left a few things not done. Quite a few actually. Integrating a comments service was one of them.


Last year, I didn&#39;t have much time for the blog, and the little ]]>
            </description>
            <link>https://oncodedesign.com/blog/comments-opened/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b9f</guid>
            <category>
                <![CDATA[ ghost ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 31 Jan 2017 16:49:53 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1485873722/Blog-Comments.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>When I've migrated my blog to Ghost (<a href="https://oncodedesign.com/my-wordpress-to-ghost-journey/">My Wordpress to Ghost Journey</a>) I've left a few things not done. Quite <a href="https://oncodedesign.com/my-wordpress-to-ghost-journey/#todos">a few</a> actually. Integrating a comments service was one of them.</p>
<p>Last year, I didn't have much time for the blog, and the little time I had went into writing posts, so the TODOs list remained quite the same. Now, it's time to take them one by one, so expect some more improvements soon.</p>
<p>Today I've integrated <a href="https://disqus.com/?ref=oncodedesign.com">Disqus</a>, and <strong>I invite you to comment on my posts!</strong></p>
<p>Below are a few technical details on how this went in my case.</p>
<h2 id="integratingdisqusintoaghostblog">Integrating Disqus into a Ghost blog</h2>
<p>As expected, it was very simple to add Disqus. I just followed the steps from <a href="http://academy.ghost.org/adding-disqus-to-your-ghost-blog/?ref=oncodedesign.com">this tutorial</a>, which, even if it is a bit out of date, describes the needed steps very well.</p>
<p>After I've created the account and accepted the policy, at the <em>Select Platform</em> step, I had Ghost as an option,</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Select-Platform.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Select-Platform.png" alt="Disqus Select Platform" loading="lazy"></a></p>
<p>so I didn't go for the <em>Universal Code</em>, like the tutorial says, even if I think it would have been pretty much the same.</p>
<p>Next, at the <em>Install Instructions</em> Disqus tells the exact installation steps:<br>
<a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Installation-Instructions.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Installation-Instructions.png" alt="Disqus Install Instructions" loading="lazy"></a></p>
<p>However, step 2 is not very accurate. It should say to insert the code anywhere between <code>{{#post}}</code> and its closing tag <code>{{/post}}</code>.</p>
<p>Depending on your theme, you might have a footer section, like I have, and you'd probably want the Disqus comments to go in there and not at the end of the article. In my case, I wanted it inside the footer, so it gets its styling, but after the <em>author</em> and the <em>share this post</em> sections, so the code went inside <code>&lt;footer&gt;</code> which is inside <code>{{#post}}</code>.</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Code.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Code.png" alt="Disqus Code" loading="lazy"></a></p>
<p>and the result is<br>
<a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Comments.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Comments.png" alt="Disqus Comments" loading="lazy"></a></p>
<p>For the <em>Configuration Variables</em> from the <code>disqus_config</code> function, I just followed the instructions from the tutorial, so I had:</p>
<pre><code class="language-language-javascript">var disqus_config = function () {
    this.page.url = '{{url absolute=&quot;true&quot;}}';  
    this.page.identifier = 'gh-{{id}}';
};
</code></pre>
<p>The <code>page.identifier</code> might require some thinking. If we look into the <a href="https://help.disqus.com/customer/en/portal/articles/472098-javascript-configuration-variables?ref=oncodedesign.com#thispageidentifier">Disqus documentation</a>, it says that this variable is used to uniquely identify the comments thread that should be shown on the post. So, indeed, the <code>{{post.id}}</code> seems a good idea. However, this <code>id</code> is nothing more than an <code>integer</code> that Ghost increments with each page or post we add. What if I migrate from Ghost to another platform? The alternative would be not to use it and to rely only on the <code>post.url</code>; URL changes are supported by the <a href="https://help.disqus.com/customer/portal/articles/286778-migration-tools?ref=oncodedesign.com">Disqus Migration Tools</a>. On the other hand, I might need to change the URL before a migration for other reasons, like a title rename or a typo fix. In the end, I've concluded that I'll use it with the <code>gh-</code> prefix, because:</p>
<ul>
<li>it is more probable to change the post URL due to a typo than to do a migration ;)</li>
<li>even if I migrate, I could probably keep this ID on the new platform. If not, it will just not be set. If it is not set, Disqus will fall back and use the URL. So, after an eventual migration, I should not change the URLs of the migrated posts.</li>
<li>the <code>gh-</code> prefix makes it explicit that this is an ID generated by Ghost. Plus, I can use another prefix like <code>dev-gh-</code> when I test the comments from my test env, using the same Disqus account.</li>
</ul>
<p>The last thing was to do a quick test and look at the page source to see if these variables are set as expected:</p>
<p><a href="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Page-Source.png?ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/Disqus-Page-Source.png" alt="Disqus Page Source" loading="lazy"></a></p>
<p>And, with this you can comment on my blog, including on the previous posts. If you have questions, comments or suggestions on this, you can, now, directly ask them below.</p>
<h6 id="featuredimagecreditrawpixelvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_rawpixel?ref=oncodedesign.com">rawpixel via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Data Isolation and Sharing in a Multitenant System - Part 1 ]]>
            </title>
            <description>
                <![CDATA[ In the previous post I&#39;ve shown few of the additional benefits a well encapsulated data access implementation brings. Implementing multitenancy is one of them. This article continues the previous, picks up the multitenancy context from there, and shows how we can address one of the most important design ]]>
            </description>
            <link>https://oncodedesign.com/blog/data-isolation-and-sharing-in-multitenant-system-part1/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b9d</guid>
            <category>
                <![CDATA[ multitenancy ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Wed, 25 Jan 2017 09:11:38 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1485285708/data-isolation-and-sharing-multitenancy.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>In the <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/">previous post</a> I've shown a few of the additional benefits a well encapsulated data access implementation brings. Implementing multitenancy is one of them. This article continues the previous one, picks up the <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/#multitenancy">multitenancy context</a> from there, and shows how we can address one of the most important design challenges of multitenant applications: <em>Data Isolation and Sharing</em>.</p>
<p>The article turned out quite long, so I have structured it in three parts. This first part focuses on describing the two strategies for <em>Data Isolation and Sharing</em> and, more importantly, on how to choose between them. The second and the third parts will show the implementation details for each strategy.</p>
<h2 id="multitenancy">Multitenancy</h2>
<p>Before we go into details, let's review what multitenancy is. The <a href="https://en.wikipedia.org/wiki/Multitenancy?ref=oncodedesign.com">Wikipedia</a> definition says:</p>
<blockquote>
<p>The term <strong>software multitenancy</strong> refers to a software architecture in which a single instance of software runs on a server and serves multiple tenants.</p>
</blockquote>
<blockquote>
<p>A tenant is a group of users who share a common access with specific privileges to the software instance.</p>
</blockquote>
<blockquote>
<p>With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance.</p>
</blockquote>
<p>We can think of a tenant as an organization which is a customer of our application. The users that belong to that organization form the group of users that constitutes that tenant.</p>
<p>We build multitenant systems because they allow for cost savings. If we have one instance of the application for all our customers, we may save money on hardware, software licenses and operational costs. Almost any application has a certain overhead on the resources it needs, and if we spread it over more customers of the same app instance, we can reduce it. There is memory and processing which is usually unused or used only at peak times; the effort to monitor and operate one instance of the app does not grow with its size the way it grows with monitoring more instances of the same app; similarly, the number of licenses (servers and tools) depends more on the number of instances than on the size of one instance. All these make a good case for multitenant architectures over multi-instance architectures. A perfect case for SaaS.</p>
<p>The cost savings can be eclipsed by two main things:</p>
<ul>
<li>the difficulty of scaling, and</li>
<li>the difficulty of satisfying each tenant specific needs.</li>
</ul>
<p>The one application instance should be able to scale when demand grows, as we add more tenants. There are many aspects to consider when designing for scalability, but I won't detail them in this post. For the second, we should aim for having the same functionality for all the tenants. If one client asks for a new feature and we implement it, then all clients will get it. Maybe some will not use it, or we configure the system so that they don't have access to it, but it will be there. This also points to another important aspect of a multitenant design, which is configurability.</p>
<h2 id="dataisolationandsharing">Data Isolation and Sharing</h2>
<p>At this point, we should have a good understanding of what multitenancy is and when it makes sense to implement a multitenant architecture. Now, lets focus on the <em>Data Isolation and Sharing</em> aspect, and go back to the scenario from the <a href="https://oncodedesign.com/benefits-of-data-access-encapsulation/">previous post</a>.</p>
<p>To make it simpler, let's take an example. Think of an application used in physiotherapy clinics. The application provides functionality to assist the doctor, the physiotherapist and the patient during recovery from surgery or injury. The doctor may give prescriptions, and the physiotherapist may compose a set of exercises that the patient should do and also monitor the patient's progress. Such an application may be used by large hospitals and at the same time by very small physiotherapy clinics. They would all need similar functionality, so it makes sense to make it a multitenant application, where each clinic or hospital is a tenant. (This is a simplified example of what we do at <a href="http://www.mirarehab.com/?ref=oncodedesign.com">MIRA</a>.)</p>
<p>Here, we'll have data which belongs to each tenant. For example, all the patient related data, which includes the <em>patient profile</em>, the <em>patient file</em>, the <em>progress</em> the patient made during any rehabilitation program at that specific clinic, etc. These are tables like <code>Patient</code>, <code>PatientFile</code>, <code>RehabProgram</code>, <code>PatientHistory</code> etc. This data needs to be isolated for each tenant. Each clinic has to feel like it is the only one on the platform, so a doctor at one clinic cannot see the patients of another clinic and so on. At the same time, there is also data which should be shared by all tenants. Here we may have <em>diagnostics</em>, a common set of <em>exercises</em>, a common set of <em>tools</em>, etc. These are tables like <code>Diagnostics</code>, <code>Exercises</code>, <code>RehabTools</code> etc. This is typically system data, which we want to be used by all tenants, and we want any changes to it to be available to all tenants. Usually it is maintained by system administrators, but there are cases when we want users from any tenant to be able to edit it. For example, we might want any doctor to be able to enrich the diagnostics that the application knows.</p>
<p>We'll find the same in any multitenant application. We'll always have:</p>
<ul>
<li><strong>tenant specific data</strong> that needs to be isolated for each tenant, and</li>
<li><strong>tenant shared data</strong> which needs to be shared between tenants</li>
</ul>
<p>Applications used in the insurance field are another good case for multitenancy. By hosting large insurance companies together with very small ones on the same application instance, we can reduce the operational costs by sharing the resources. <strong>Tenant specific data</strong> would be the <em>insurance products</em>, the <em>policies</em> which belong to each insurance company (tenant), etc., and <strong>tenant shared data</strong> would be the <em>regulations</em> data or the <em>actuarial tables</em> (which all insurance companies use to calculate the premiums), etc.</p>
<h2 id="separatedatabasesorshareddatabase">Separate Databases or Shared Database</h2>
<p>Once we've identified the <em>tenant specific data</em> and the <em>tenant shared data</em>, the next step would be to decide between the two main strategies for storing data:</p>
<ul>
<li><strong>Separate Databases</strong> one for each tenant, or</li>
<li>a <strong>Shared Database</strong> used by all tenants</li>
</ul>
<p>Both of the above strategies have pluses and minuses, and there are many tradeoffs to consider when making this choice.</p>
<p>Having a <strong>Separate Database</strong> for each tenant is easier to implement, especially when we add multitenancy at a later stage of the project (after part of the functionality was already implemented as if this were not a multitenant app). In this case each tenant will have access only to its own database, and the data schema does not need to change from the non-multitenant version.</p>
<p>This strategy also makes isolating the tenants' data simple. Based on the current user, we know the tenant they belong to, and we'll connect to that tenant's database. So there is no risk of mixing data from multiple tenants. (Of course, if we have caching at higher levels, those caches should also be tenant aware, but that's another topic.)</p>
<p>To maximize the cost benefits of the multitenancy app, we should keep identical schemas for each database of each tenant. This means that when we make a schema change (because we add a column for a new feature) this change needs to be applied to all the databases. Therefore, we'll have the same database version (in fact the same app version) for all the tenants. This means that our automated deployment tool will need to upgrade the schemas of all the databases when we deploy a new version in testing, acceptance or production environments.</p>
<p>For the <em>tenant shared data</em>, one approach is to duplicate it in the database of each tenant. This means that we need to synchronize its changes across all the databases. If it is only system data, then this may be done in the same process as updating the database schemas from version to version. Otherwise, if users can change it (operational data), then we need to put a sync mechanism in place. In most cases it does not need to be a real-time sync, because changes done by one tenant on the shared data do not need to be instantaneously available to the others.</p>
<p>Another approach is to have a separate database for the <em>tenant shared data</em> only. So for a tenant we will connect to its own database for its specific data and to the tenant common database for the shared data. We keep the simplicity of isolating the tenant data, and we don't need to do any data sync. However, we would need to take the <em>tenant shared data</em> tables out of the tenant specific database and use some GUIDs as IDs to link to it. This approach makes even more sense if we have a large set of <em>tenant shared data</em> which changes frequently and which is not very interconnected with the tenant data. With this approach we could even consider a non-relational database for the tenant common database.</p>
<p>The other strategy, a <strong>Shared Database</strong> used by all tenants, means that we keep data from all the tenants in the same database. To isolate tenant specific data, we will have to add a discriminator column like <code>TenantID</code> to every table which is tenant specific, and make sure that all the queries and commands filter the data based on it.</p>
<p>With this strategy, dealing with <em>tenant shared data</em> is simple: we just don't filter it. Isolating data is what we need to deal with. For this we need to make sure that <strong>ALL</strong> the queries and commands that deal with <em>tenant specific data</em> get filtered by the <code>TenantID</code>. Here, having a well encapsulated data access through which all the queries and commands go can play a key role in assuring this. The data access would be the place where we intercept each query and command, determine if it deals with tenant specific tables, and if so, append a <code>WHERE</code> clause that filters it based on the current tenant. With LINQ, we would add something like:</p>
<pre><code class="language-language-csharp">int currentTenantId = GetCurrentUserTenantId();
IQueryable&lt;PatientFile&gt; filteredQuery = originalQuery
       .Where(p =&gt; p.TenantID == currentTenantId);
</code></pre>
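<p>As a hint of where this is going, a generic version of the filter could be sketched like this (the <code>ITenantEntity</code> marker interface and the extension method name are assumptions for illustration, not the final implementation):</p>
<pre><code class="language-language-csharp">// Marker interface implemented by all tenant specific entities.
public interface ITenantEntity
{
    int TenantID { get; }
}

public static class TenantQueryExtensions
{
    // Appends the tenant filter to a query over tenant specific data.
    public static IQueryable&lt;T&gt; ForTenant&lt;T&gt;(
        this IQueryable&lt;T&gt; query, int tenantId)
        where T : class, ITenantEntity
    {
        return query.Where(e =&gt; e.TenantID == tenantId);
    }
}
</code></pre>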
<p>In the second part of this article (next post), I will detail this code and show how we can make it generic as part of the data access implementation.</p>
<p>This strategy is easier to implement if we know from the beginning that we should build a multitenant system, because we will consider it while designing the database and it will lead to a better design. Also it will be easier to set and maintain the conventions based on which we can implement the generic code that appends the tenant filter. This doesn't mean that we cannot add it at a later stage of the project, <strong>IF</strong> we can rely on a well encapsulated data access and if we have a good database design in place. After we decide which is <em>tenant specific data</em> and which is <em>tenant shared data</em>, we can build some SQL scripts that add the <code>TenantID</code> column. This database refactoring may be more difficult if we're already in production, because we might also need to migrate production data to the new schema of the database.</p>
<p>Both these strategies have pluses and minuses, and the choice between them depends on many aspects, including the application field and business, the moment when multitenancy is added, and the advantages we want to obtain from the multitenancy model. The tables below summarize the benefits and the liabilities of these two main strategies for isolating and sharing data in a multitenant system:</p>
<p><strong>Separate Databases</strong></p>
<table>
<tr>
  <th>Benefits</th><th>Liabilities</th>
</tr>
<tr> 
    <td>
        Easy to implement (especially at a later stage)<br/><br/>
        High data isolation<br/><br/>
    </td>
    <td>
        <i>Tenant shared data</i> is duplicated across tenant databases<br/><br/>
        Higher costs for maintaining the <i>tenant shared data</i><br/><br/>
        Higher operational costs due to more databases<br/><br/>
        Higher costs for deploying new versions<br/><br/>
    </td>
</tr>
</table>
<p><strong>Shared Database</strong></p>
<table>
<tr>
  <th>Benefits</th><th>Liabilities</th>
</tr>
<tr> 
    <td>
        Higher level of resource sharing, which may lead to lower costs<br/><br/>
        Development, maintainability and operational costs do not depend on the number of tenants<br/><br/>
        <i>Tenant shared data</i> is easy to maintain<br/><br/>
    </td>
    <td>
        Low data isolation - requires a data access layer that intercepts all queries and commands<br/><br/>
        Monitoring tenant data activity is a challenge<br/><br/>
        Backing up and restoring a single tenant's data requires a custom solution<br/><br/>
   </td>
</tr>
</table>
<p>The <em>tenant shared data</em> may play an important role in deciding between these two strategies. For example, an application where the <em>tenant shared data</em> is operational data (users from all tenants may modify it), represents a significant part of the entire application data model, and is highly connected (related) to the <em>tenant specific data</em> makes a good case for the <em>Shared Database</em> strategy. In the same example, if the <em>tenant shared data</em> is operational data and large, but not very related to the <em>tenant specific data</em>, we could go for the <em>Separate Databases</em> strategy, even more so if we're adding multitenancy later, if we have very few tenants with large data sets, or if we cannot assure data isolation through other means.</p>
<p>There are many other metrics we should measure or estimate before making a decision on one of these strategies. Here are a few:</p>
<ul>
<li>the <strong>number of tenants</strong></li>
<li>the number of <strong>users per tenant</strong></li>
<li>amount of data or data <strong>transactions per tenant</strong></li>
<li><strong>frequency of adding tenants</strong> or removing (disabling) tenants</li>
<li>the <strong>ratio between small and big tenants</strong></li>
<li>the <strong>frequency of database schema changes</strong></li>
</ul>
<p>For example, a scenario with many small tenants, where we want the flexibility to add or disable tenants often, would make a good case for the <em>Shared Database</em> strategy. On the other hand, a scenario with a fairly fixed number of tenants, which are big (in data amount and data transactions) and similar in size, makes a better case for the <em>Separate Databases</em> strategy.</p>
<p>The development model may also play a role in this decision. For example, if we want to go into production very fast, with minimal business functionality, and then grow the functionality through Continuous Delivery, the <em>Shared Database</em> strategy may be the better choice, because we'll need to change the database schema quite often. On the other hand, if we go into production after most of the functionality is done and we don't foresee many changes to the database schema, the <em>Separate Databases</em> strategy may be a good choice, because the liability of high costs for updating multiple database schemas won't be paid that often.</p>
<p>So, if we put some numbers on these metrics and balance the benefits and the liabilities of these two strategies, we can make a good decision in our context. We could even end up with a strategy that is a hybrid of the two, if we have external constraints and we try to take the benefits of both or to minimize the liabilities of one of them. For example, in a scenario with one big tenant and many other small tenants, we might have one database for the big tenant and one shared by the small ones. This separation lets us plan activities like backup and restore, or schema changes, differently for the big fish than for the small ones. Another scenario that may lead to such a hybrid is when we go with the <em>Shared Database</em> and at a certain point need to scale at the database level: we can create a new database and distribute the tenants among the two.</p>
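<p>As a rough illustration of such a hybrid (all names below are hypothetical, not from an actual project), the data access could resolve the connection string per tenant, sending the big tenant to its own database and all other tenants to the shared one:</p>

```csharp
using System.Collections.Generic;

// Hypothetical sketch of a hybrid setup: big tenants get a dedicated database,
// all other tenants share one. The data access implementation asks this
// resolver for the connection string before executing any query or command.
public class TenantConnectionResolver
{
    // Tenants with a dedicated database; everyone else falls back to the shared one
    private readonly Dictionary<string, string> dedicatedDatabases = new Dictionary<string, string>
    {
        ["big-tenant"] = "Server=db1;Database=BigTenantDb;Integrated Security=true"
    };

    private const string SharedConnectionString =
        "Server=db0;Database=SharedTenantsDb;Integrated Security=true";

    public string GetConnectionString(string tenantId)
    {
        return dedicatedDatabases.TryGetValue(tenantId, out string connectionString)
            ? connectionString
            : SharedConnectionString;
    }
}
```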
<h2 id="closing">Closing</h2>
<p>At this point we should have a clear picture of the strategies for <em>Data Isolation and Sharing</em> in a multitenant application, and of how to choose the one that best suits a given context. In the second part of the article (the next post) we will go into the details of how to implement each of them, by looking at some code snippets that give an implementation direction based on a well encapsulated data access.</p>
<h5 id="morediscussionsondesigningformultitenancyhighscalabilityandsaasscenariosarepartofmycodedesigntraining">More discussions on designing for multitenancy, high scalability and SaaS scenarios are part of my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecreditalinoubighvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_alinoubigh?ref=oncodedesign.com">alinoubigh via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Benefits of Data Access Encapsulation ]]>
            </title>
            <description>
<![CDATA[ We all know that achieving a good Separation of Concerns increases maintainability and reduces the cost of changing our code. This pays off big time in long-running projects, with large code bases, where changing code is a big part of the coding activity ]]>
            </description>
            <link>https://oncodedesign.com/blog/benefits-of-data-access-encapsulation/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b9c</guid>
            <category>
                <![CDATA[ data access ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 05 Jan 2017 08:40:41 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/BenefitsOfDataAccess.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>We all know that achieving a good Separation of Concerns increases maintainability and reduces the cost of changing our code. This pays off big time in long-running projects, with large code bases, where changing code is a big part of the coding activity and where the predictability of the effects of our changes is very important.</p>
<p>A while ago I wrote <a href="https://oncodedesign.com/separating-data-access-concern/">a post</a> that shows how to achieve a good separation of the <em>Data Access Concern</em>, even when using an ORM like Entity Framework. I outlined there that just using EF for the data access is not enough, because it does not give consistency. I also presented the <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc Data Access</a> library (available on <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">GitHub </a> and on <a href="https://www.nuget.org/packages/iQuarc.DataAccess/?ref=oncodedesign.com">NuGet.org</a>), which implements the <em>Repository</em> and <em>Unit of Work</em> patterns, encapsulating the data access concerns well and creating consistent development patterns for how data access is used from the layers above, across the entire code base. We've used this library in many projects we've started since then, and we've seen many benefits in the cost of adding technical features later, thanks to the well encapsulated data access. In this post I'm going to give some examples.</p>
<p>The context of these examples is that of large projects where, after a few months of developing business functionality (let's say data entry screens), we need to add some technical functionality (let's say auditing each query execution). Without a central place through which all query execution goes, we would need to go back through all of the <em>Controllers</em> and the <em>Services</em> and log the query execution. So the effort to consistently add such a technical feature depends on the size of the project (code) at the moment we add it. The more <em>Controllers</em> and <em>Services</em> we have, the more places we need to modify and test, so the more effort is needed. Even more, the effort/cost is not linear in size, because complexity is not linear in size, and complexity grows when we don't have a consistent way of doing data access. The graphic below shows this:<br>
<img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/12/CostExtendingDataAccess-2.png" alt="" loading="lazy"><br>
The red line represents the case where we need to go screen by screen and add the technical feature, and the blue line represents the case where we have a well encapsulated data access implementation that allows such extensions. We see that in the second case the cost hardly grows with size, once such a data access implementation is in place.</p>
<p>Now let's see the examples one by one. I'll list them here for easier navigation. For some, I might write separate posts, so this one doesn't get way too long :)</p>
<ul>
<li><a href="#writeinanauditlogwhenanentitywasreadmodifiedordeleted">Write in an audit log when an entity was read, modified or deleted</a></li>
<li><a href="#dataaudit">Data Audit</a></li>
<li><a href="#datalocalization">Data Localization</a></li>
<li><a href="#multitenancy">Multitenancy</a></li>
<li><a href="#authorizationondatarecords">Authorization on data records</a></li>
</ul>
<hr>
<h3 id="writeinanauditlogwhenanentitywasreadmodifiedordeleted">Write in an audit log when an entity was read, modified or deleted</h3>
<p><strong>Context:</strong> <em>We need to write in an audit log the name of the user that has read, modified or deleted a <code>Patient</code> entity related data and the date and time when the action happened.</em></p>
<p>Without a centralized place through which all our queries and commands go, we would need to go through all our code where the <code>Patient</code> entity is added, modified or deleted, and make a call to the <code>IAuditLog</code> in a consistent way.</p>
<p>If we use the <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc Data Access</a> library, we can create an <em>entity interceptor</em> for this by implementing the <a href="http://github.com/iQuarc/DataAccess/blob/master/src/iQuarc.DataAccess/IEntityInterceptor.cs?ref=oncodedesign.com"><code>IEntityInterceptor&lt;T&gt;</code></a> interface, and that would be all:</p>
<pre><code class="language-language-csharp">[Service(nameof(PatientAuditLogInterceptor), typeof(IEntityInterceptor&lt;Patient&gt;))]
class PatientAuditLogInterceptor : EntityInterceptor&lt;Patient&gt;
{
    private readonly IAuditLog auditLog;

    public PatientAuditLogInterceptor(IAuditLog auditLog)
    {
        this.auditLog = auditLog;
    }

    public void OnLoad(IEntityEntry&lt;Patient&gt; entry, IRepository repository)
    {
        User user = GetCurrentUser();
        Patient patient = entry.Entity;
        auditLog.Write(AuditType.Read, $&quot;Patient data was read. Patient Name: {patient.Name}&quot;, user);
    }

    public void OnSave(IEntityEntry&lt;Patient&gt; entry, IUnitOfWork unitOfWork)
    {
        User user = GetCurrentUser();
        Patient patient = entry.Entity;

        if (entry.State == EntityEntryState.Added)
            auditLog.Write(AuditType.Added, $&quot;Patient was added. Patient Name: {patient.Name}&quot;, user);
        else
            auditLog.Write(AuditType.Modified, $&quot;Patient was modified. Patient Name: {patient.Name}&quot;, user);
    }

    public void OnDelete(IEntityEntry&lt;Patient&gt; entry, IUnitOfWork unitOfWork)
    {
        User user = GetCurrentUser();
        Patient patient = entry.Entity;
        auditLog.Write(AuditType.Deleted, $&quot;Patient was deleted. Patient Name: {patient.Name}&quot;, user);
    }
...
}

</code></pre>
<p>The <em>entity interceptors</em> are nothing more than extension points that the <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">iQuarc Data Access</a> library provides (a simple <a href="https://oncodedesign.com/training-solid-principles/#Format">OCP</a> application). They are somewhat similar to database triggers, in the sense that we can write code that executes when an entity is saved, loaded or deleted, but at a higher level, not in the database. Even more, the code we write does not depend on EF, and it can sit in the Business Logic Layer (a <a href="https://oncodedesign.com/training-solid-principles/#Format">DIP</a> application). You can find more details on how they work in my <a href="https://oncodedesign.com/separating-data-access-concern/">older post</a>, or just look at the code on <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">GitHub</a>.</p>
<p>There are many other technical features we can easily add in the same way as in this example. Here we implemented a <em><strong>specific</strong> interceptor</em>, which executes only for entities of type <code>Patient</code>, but we can also have <em><strong>global</strong> interceptors</em> that execute for entities of any type, as in the following example.</p>
<hr>
<h3 id="dataaudit">Data Audit</h3>
<p><strong>Context:</strong> <em>We need to set the <code>CreatedBy</code>, <code>LastModifiedBy</code>, <code>CreatedDate</code> and <code>LastModifiedDate</code> columns in most of the tables from the database. We add this feature at a later stage of the project.</em></p>
<p>To implement this in a generic way for all the entities we need to define an interface. Something like:</p>
<pre><code class="language-language-csharp">public interface IAuditable
{
    DateTime? LastModifiedDate { get; set; }
    DateTime CreatedDate { get; set; }
    string LastModifiedBy { get; set; }
    string CreatedBy { get; set; }
}
</code></pre>
<p>After we create the corresponding columns in the tables of the database (a T-SQL script can add them for all tables), the next step is to make all the DTOs, which the ORM maps to the tables, implement this interface. If we use EF, we can easily modify the code generator so that the entity classes implement the interface and the audit columns are mapped to its properties.</p>
<p>The above needs to be done regardless of whether we add this feature at the beginning of the project or later. It is almost the same effort / cost.</p>
<p>However, to make sure that we implement this feature in a consistent manner, and that these properties are correctly set in all of the <em>Controllers</em> and <em>Services</em> where we modify data, we can again benefit from having an encapsulated data access implementation, which gives us a central place to set this data. Again, by having this, we will not need to go through all the code we've already written if we are at a later stage of the project.</p>
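<p>For example, since EF generates entities as partial classes, attaching the interface can be as simple as a second partial declaration per entity (the <code>Product</code> entity below is hypothetical, for illustration):</p>

```csharp
// Sketch: the modified code generator already emits the CreatedDate, CreatedBy,
// LastModifiedDate and LastModifiedBy properties mapped to the new columns;
// this second partial declaration only attaches the IAuditable interface,
// without touching the generated file.
public partial class Product : IAuditable
{
}
```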
<p>This time we will make a <em><strong>global</strong> interceptor</em>, so it gets executed for all entity types.</p>
<pre><code class="language-language-csharp">[Service(&quot;AuditableInterceptor&quot;, typeof(IEntityInterceptor))]
public sealed class AuditableInterceptor : GlobalEntityInterceptor&lt;IAuditable&gt;
{
    public override void OnSave(IEntityEntryFacade&lt;IAuditable&gt; entity, IRepository repository)
    {
        var systemDate = DateTime.Now;
        var userName = GetUserName();

        if (entity.State == EntityEntryState.Added)
        {
            entity.Entity.CreatedDate = systemDate;
            entity.Entity.CreatedBy = userName;
        }

        entity.Entity.LastModifiedDate = systemDate;
        entity.Entity.LastModifiedBy = userName;
    }
...
}
</code></pre>
<p>The data access implementation finds in the <em>Dependency Injection Container</em> all the implementations of <code>IEntityInterceptor</code>, including the one above, and for each entity that was modified, deleted, added or loaded it calls the corresponding functions of each interceptor found. The <code>GlobalEntityInterceptor&lt;T&gt;</code> is just an implementation helper (the <a href="https://oncodedesign.com/training-design-patterns/#Format">Template Method</a> design pattern applied), which casts each modified entity to <code>IAuditable</code> and, if that succeeds, forwards the call to the specific implementation.</p>
<pre><code class="language-language-csharp">public abstract class GlobalEntityInterceptor&lt;T&gt; : IEntityInterceptor
    where T : class
{
    public abstract void OnLoad(IEntityEntryFacade&lt;T&gt; entry, IRepository repository);
    public abstract void OnSave(IEntityEntryFacade&lt;T&gt; entry, IRepository repository);
    public abstract void OnEntityRemoved(IEntityEntryFacade&lt;T&gt; entry, IRepository repository);

    void IEntityInterceptor.OnLoad(IEntityEntryFacade entry, IRepository repository)
    {
        if (entry.Entity is T)
            this.OnLoad(entry.Convert&lt;T&gt;(), repository);
    }

    void IEntityInterceptor.OnSave(IEntityEntryFacade entry, IRepository repository)
    {
        if (entry.Entity is T)
            this.OnSave(entry.Convert&lt;T&gt;(), repository);
    }
...
}
</code></pre>
<hr>
<h3 id="datalocalization">Data Localization</h3>
<p><strong>Context:</strong> <em>Our application is an e-commerce site which is in production. Now, we need to enter the French market, where most of the products we sell have different names in French than in English. Therefore, we need to add data localization.</em></p>
<p>I have written before about data localization, explaining what it is and how it can be implemented. In this <a href="https://oncodedesign.com/localization-concern/">older post</a> I give the starting point for implementing it as part of an encapsulated data access implementation, and it fits the above context well. The implementation parses each Linq query and recreates the lambda expression with a join to the translation table. Doing this, we will not need to go through all the existing code where <code>Products</code> are read to modify it. Instead, we intercept the existing queries and rewrite them.</p>
<pre><code class="language-language-csharp">public class EfRepository : IRepository
{
...
  public IQueryable&lt;T&gt; GetEntities&lt;T&gt;(bool localized = true) where T : class
  {
     DbSet&lt;T&gt; dbSet = GetContext().Set&lt;T&gt;();
     return localized ? new LocalizedQueryable&lt;T&gt;(dbSet, this.cultureProvider) : dbSet;
  }
...
}
</code></pre>
<p>For more details on how to implement the <code>LocalizedQueryable&lt;T&gt;</code> to rewrite the Linq, you can look into the code of the <a href="https://github.com/iQuarc/DataLocalization?ref=oncodedesign.com">iQuarc Data Localization</a> library that I presented in the previous <a href="https://oncodedesign.com/data-localization-library/">post</a>.</p>
<hr>
<h3 id="multitenancy">Multitenancy</h3>
<p><strong>Context:</strong> <em>Our application is in a late stage of development, or even deployed in production, and we decide that the same instance of the application should be used by more clients. Therefore, we need to add support for the multitenancy scenario.</em></p>
<p>There are several strategies to implement a multitenant application. The most common are to have separate databases, one for each client (tenant), or to have one database shared by all clients and use a discriminator column like <code>TenantID</code> to separate each tenant's data. Both strategies can be implemented later in a project, and having a well encapsulated data access where you can intervene to implement them consistently makes a huge difference. The rest of the code will need minimal changes, given that you want the same functionality for all your tenants and that your application was built with scalability in mind.</p>
<p>If you go with the multiple databases strategy (one for each tenant), the data access implementation will be the place to decide which database to connect to for executing the current query or command, based on the tenant of the current user. If you go with one database for all the tenants, the data access will be the place where you can intercept each Linq query and rewrite it to append a <code>WHERE</code> condition that filters the data by the <code>TenantID</code> of the tenant to which the current user belongs.</p>
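<p>To give a rough idea of the second option, here is a minimal sketch of the filter the data access could append to each query, assuming a hypothetical <code>ITenantEntity</code> marker interface on the tenant specific entities (the names are illustrative, not the library's API):</p>

```csharp
using System.Linq;

// Hypothetical marker interface implemented by all tenant specific entities
public interface ITenantEntity
{
    int TenantId { get; set; }
}

public static class TenantFilterExtensions
{
    // Appends the WHERE TenantId = @current condition to any query over
    // tenant specific entities; the data access implementation would call this
    // in one central place, so no Controller or Service needs to change.
    public static IQueryable<T> FilterByTenant<T>(this IQueryable<T> query, int currentTenantId)
        where T : class, ITenantEntity
    {
        return query.Where(e => e.TenantId == currentTenantId);
    }
}
```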
<p>In a future post I will go into more detail about multitenancy, what we should consider when choosing a strategy, and I will also give some code snippets on how to implement it by taking advantage of an encapsulated data access implementation.</p>
<hr>
<h3 id="authorizationondatarecords">Authorization on data records</h3>
<p><strong>Context:</strong> <em>The authorization rules say that users from certain roles can read, modify or delete only certain records of some entities. This means that data in the list screens, for example, needs to be filtered based on the access rights of the current user. We need to add this, at a later stage of the project.</em></p>
<p>In most of the applications when we implement the authorization (what the current user is allowed to do) we restrict some functionalities for some users. For example a user from the <em>Guest</em> role can only see the list of products. She does not have access to <em>Buy</em> functionality. Only users from the <em>Customer</em> role can buy products. Even more, the <em>Edit Product</em> functionality is only available to users from <em>Sales Manager</em> role.</p>
<p><em>Authorization on data records</em> goes a step further. It says that a <em>Sales Manager</em> user has access to <em>Edit Product</em> functionality, but she can edit ONLY the products that are at sale in the area managed by that user.</p>
<p>Implementing this means we need to go into all the screens that show products for editing and filter them by some data from the current user. Going back through all these screens and mixing this authorization logic with the existing queries might be costly, and may result in hard to maintain code.</p>
<p>However, if we have a well encapsulated data access through which all our queries go, we can intercept them and, based on some conventions, rewrite the Linq to append a <code>WHERE</code> condition that filters the result by some data from the current user. Again, we can rely on the data access to add this functionality later.</p>
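<p>As a minimal sketch of such a rewrite (the <code>Product</code> and <code>User</code> shapes below are hypothetical, chosen only to illustrate the idea), the appended condition could restrict the products to the sales area managed by the current user:</p>

```csharp
using System.Linq;

// Hypothetical entity and user shapes, for illustration only
public class Product
{
    public int Id { get; set; }
    public int SalesAreaId { get; set; }
}

public class User
{
    public int ManagedAreaId { get; set; }
}

public static class ProductAuthorizationExtensions
{
    // The data access appends this condition to every query over products,
    // so list screens only return the records the current user may edit
    public static IQueryable<Product> EditableBy(this IQueryable<Product> products, User currentUser)
    {
        return products.Where(p => p.SalesAreaId == currentUser.ManagedAreaId);
    }
}
```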
<p>I will detail this more and I'll also give some code snippets on how it could be done, in a future post.</p>
<hr>
<p>To summarize, we have seen examples of some of the extra benefits an encapsulated data access brings. In all these cases we could add technical functionality at a later stage of the project, with costs that do not depend on how much business functionality was already implemented.</p>
<p>All of the implementations rely on the fact that all the queries and the commands go through a central place: the data access implementation. This means that we can either use extensibility points like the <em>entity interceptors</em> to execute custom code when data is loaded, modified or deleted, or we can rewrite the Linq to enrich it with the functionality we want. If our data access is implemented with a Linq based framework like EF, the resulting code, even if it may be complex, is testable and maintainable. Even more, this complexity remains separated from the business functionality, somewhere in the infrastructure of our project, rather than being spread all over the code base.</p>
<h5 id="moreabouthowtoimplementanencapsulateddataaccessandhowtobenefitfromitisdiscussedinmycodedesigntraining">More about how to implement an encapsulated data access and how to benefit from it is discussed in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecreditalinoubighvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_alinoubigh?ref=oncodedesign.com">alinoubigh via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Data Localization Library ]]>
            </title>
            <description>
<![CDATA[ A while ago I wrote about the Localization Concern, and one of the topics that I detailed there was Data Localization.


By Data Localization we mean the cases when we need to store some text properties of some business entities in different languages. For example in an e-commerce ]]>
            </description>
            <link>https://oncodedesign.com/blog/data-localization-library/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b9b</guid>
            <category>
                <![CDATA[ localization ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 08 Dec 2016 08:59:35 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/12/Localization.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>A while ago I wrote about the <a href="https://oncodedesign.com/localization-concern/">Localization Concern</a>, and one of the topics that I detailed there was <em>Data Localization</em>.</p>
<p>By <em>Data Localization</em> we mean the cases where we need to store some text properties of business entities in different languages. For example, in an e-commerce app you may need to store the name and the description of the same product in more languages. The name of the mouse product will be <em>Mouse</em> in English, <em>Souris d’ordinateur</em> in French and maybe <em>Computermaus</em> in German.</p>
<p>In that post, I emphasized why it is wise to separate this translation concern from the rest of the query logic and to build a generic mechanism that, based on some conventions, does the translation for all queries. I also gave some code snippets that show the direction in which this can be implemented as part of a generic <code>Repository</code> or <code>UnitOfWork</code> implementation.</p>
<p>Recently my colleague <a href="https://ro.linkedin.com/in/popcatalin?ref=oncodedesign.com">Catalin Pop</a> published on our <a href="https://github.com/iQuarc?ref=oncodedesign.com">iQuarc Github</a> account a library that does just that. To quote him:</p>
<blockquote>
<p>The library provides a set of helper methods for querying localized data split in multiple tables. The library works by rewriting Linq that perform projections on the main table to retrieve data from localization tables when available.</p>
</blockquote>
<p>The library is very easy to use, and it can be plugged into any Linq based data access framework. You install it via <a href="https://www.nuget.org/packages/iQuarc.DataLocalization/?ref=oncodedesign.com">NuGet</a>, add the <code>.Localize()</code> extension method after the <code>.Select()</code>, and boom: your data is translated.</p>
<pre><code class="language-language-csharp">var products = dbContext.Products
                 .Select(p =&gt; new ProductData
                     {
                         Name = p.Name,
                         Description = p.Description,
                         Quantity = p.Quantity,
                         ...
                      })
                  .Localize()
                  .ToList();
</code></pre>
<p>Of course, you need to have the translation tables in the database. Under the hood, the query is rewritten to join with them. You can make this call in your repository implementation, as I was suggesting in my older blog post, or you can just call <code>.Localize()</code> from your controllers or services, depending on your context.</p>
<p>To make it more usable, the library provides the <code>TranslationForAttribute</code>, which can be added to the classes that are mapped through the ORM to the translation tables. In my example this would go on the <code>ProductTrans</code> class that is mapped to the <code>Products_Trans</code> table. This means you don't have to use name based conventions for your tables, because you can use this attribute to tell the library which table it needs to join with.</p>
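<p>Based on the description above, the mapping would look roughly like this (a sketch only: the exact attribute signature and the property names of <code>ProductTrans</code> should be checked against the library's README):</p>

```csharp
// Rough sketch: ProductTrans is the class the ORM maps to the Products_Trans
// table, and the attribute tells the library it holds the Product translations.
[TranslationFor(typeof(Product))]
public class ProductTrans
{
    public int ProductId { get; set; }
    public int LanguageId { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}
```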
<p>To see more usage examples and how to get started, go to the <a href="https://github.com/iQuarc/DataLocalization/blob/master/README.md?ref=oncodedesign.com">README</a> file in the <a href="https://github.com/iQuarc/DataLocalization?ref=oncodedesign.com">Github repository</a>, where <a href="https://github.com/popcatalin81?ref=oncodedesign.com">Catalin</a> has written more details. Also, if you have feedback or questions, you can use the <em>Issues</em> on Github, and I'm sure Catalin will answer or help you get started.</p>
<h5 id="moreaboutimplementinglocalizationisaddressesinmycodedesigntraining">More about implementing localization is addressed in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecrediteyematrixvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_eyematrix?ref=oncodedesign.com">eyematrix via 123RF Stock Photo</a></h6>
<hr>
<p><em>Update, 19.03.2018:</em> <a href="https://danielkvist.net/?ref=oncodedesign.com"><strong>Daniel Kvist</strong></a> has made a detailed step by step tutorial on how to set up the Data Localization Library in a .NET Core MVC app, in a 4 part blog series:</p>
<ul>
<li><a href="https://danielkvist.net/code/simple-data-localisation-with-net-core-and-iquarc-datalocalization-from-scratch-part-1-set-up-basic-net-core-mvc-project?ref=oncodedesign.com">Simple data localisation with .NET core – Part 1 – Set up basic .NET core MVC project</a></li>
<li><a href="https://danielkvist.net/code/simple-data-localisation-with-net-core-and-iquarc-datalocalization-from-scratch-part-2-add-models-database-and-migrations-with-code-first?ref=oncodedesign.com">Simple data localisation with .NET core – Part 2 – Add models, database and migrations with code first</a></li>
<li><a href="https://danielkvist.net/code/simple-data-localisation-with-net-core-and-iquarc-datalocalization-from-scratch-part-3-add-api-controller-to-view-model-data?ref=oncodedesign.com">Simple data localisation with .NET core – Part 3 – Add API Controller to view model data</a></li>
<li><a href="https://danielkvist.net/code/simple-data-localisation-with-net-core-and-iquarc-datalocalization-from-scratch-part-4-add-data-localisation-for-models-using-iquarc-datalocalization?ref=oncodedesign.com">Simple data localisation with .NET core – Part 4 – Add data localisation for models using iQuarc.DataLocalization</a></li>
</ul>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ The Visitor Pattern - A Better Implementation ]]>
            </title>
            <description>
<![CDATA[ It has been a while since my last post (I&#39;ve been very busy lately giving my training), but one piece of feedback I got back then stuck in my head:





You can make a better implementation of it. I remember somewhere a better implementation of ]]>
            </description>
            <link>https://oncodedesign.com/blog/the-visitor-pattern-a-better-implementation/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b9a</guid>
            <category>
                <![CDATA[ design patterns ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 08 Nov 2016 08:46:51 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/11/TheVisitorPattern-Vikings.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>It has been a while since my last post (I've been very busy lately giving my <a href="https://oncodedesign.com/training">training</a>), but one piece of feedback I got back then stuck in my head:</p>
<blockquote>
<p>You can make a better implementation of it. I remember somewhere a better implementation of the Visitor Pattern in C#.</p>
</blockquote>
<p>So, last week, having some dead time while flying, I pulled out my laptop and tried to improve the same <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v3?ref=oncodedesign.com">sample code</a> from the <a href="https://oncodedesign.com/the-visitor-pattern">previous post</a>.</p>
<p>In that post we took as an example a <code>CommandsManager</code> which holds a structure of commands and has to perform different operations on these commands, like <code>PrettyPrint()</code> and <code>Approve()</code>.</p>
<p>The implementation we ended up with has a <code>ReportVisitor</code> which builds a report as each command is visited, and at the end prints the report.</p>
<pre><code class="language-language-csharp">class ReportVisitor : IVisitor
{
	private StringBuilder report = new StringBuilder();

	public void VisitCustomerCommand(CustomerCommand customerCommand)
	{
		report.AppendLine($&quot;VisitCustomerCommand customer command: {customerCommand.Name} in business: {customerCommand.BusinessDomain}&quot;);
	}

	public void VisitSalesOrderCommand(SalesOrderCommand salesOrderCommand)
	{
		report.AppendLine(&quot;Sales order command: &quot;);
		foreach (var line in salesOrderCommand.OrderLines)
		{
			report.AppendLine($&quot;\t Product={line.Product} Quantity={line.Quantity}&quot;);
		}
	}

	public void VisitPurchaseOrderCommand(PurchaseOrderCommand purchaseOrder)
	{
		report.AppendLine($&quot;Purchase order command: Product={purchaseOrder.Product} Quantity={purchaseOrder.Quantity}&quot;);
	}

	public void Print()
	{
		Console.WriteLine(report);
	}
}
</code></pre>
<p>This is a visitor which holds state (the <code>report</code>), and when each command is visited the state accumulates.</p>
<p>For the <code>Approve()</code> operation we ended up with more visitors, one for each type of command. For example, the visitor for the <code>PurchaseOrderCommand</code> looked like this:</p>
<pre><code class="language-language-csharp">class PurchaseOrderCommandApprover : IVisitor
{
	public void VisitPurchaseOrderCommand(PurchaseOrderCommand purchaseOrder)
	{
		// code that approves the command of creating a new purchase order.
		// this code may use external classes or services to do the approval
	}
	public void VisitCustomerCommand(CustomerCommand customerCommand)
	{	// we do nothing here because we only deal with new purchase orders approval
	}

	public void VisitSalesOrderCommand(SalesOrderCommand salesOrderCommand)
	{	// we do nothing here because we only deal with new purchase orders approval
	}
}
</code></pre>
<p>These are visitors without state. When a command is visited it gets approved or not.</p>
<p>The thing which is not nice about this implementation is that we must have functions for all the command types even if we don't want to do anything with some commands when they are visited. Above, the <code>PurchaseOrderCommandApprover</code> class is only interested in the <code>PurchaseOrderCommand</code>, and it has no code in the functions <code>VisitCustomerCommand()</code> or <code>VisitSalesOrderCommand()</code>.</p>
<p>The most common implementation that I've seen for getting rid of the unwanted functions is to have an abstract base class (as in the <em>Template Method</em> pattern) for the visitor, and then override only the methods we want. In our example it would look like this (<a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v4?ref=oncodedesign.com">here in v4</a> is the full code):</p>
<pre><code class="language-language-csharp">abstract class Visitor : IVisitor
{
    public virtual void VisitCustomerCommand(NewCustomerCommand customerCommand)
    {
    }

    public virtual void VisitSalesOrderCommand(NewSalesOrderCommand salesOrderCommand)
    {
    }

    public virtual void VisitPurchaseOrderCommand(NewPurchaseOrderCommand purchaseOrder)
    {
    }
}
</code></pre>
<p>and for a concrete visitor we would have:</p>
<pre><code class="language-language-csharp">class PurchaseOrderCommandApprover : Visitor
{
	public override void VisitPurchaseOrderCommand(NewPurchaseOrderCommand purchaseOrder)
	{
		// code that approves the command of creating a new purchase order.
		// this code may use external classes or services to do the approval
	}
}
</code></pre>
<p>Nice! We have small classes for the specific visitors.</p>
<p>This is a simple and good solution for most cases, especially those where the types of the visited items do not change. It is the one chosen for the Lambda <a href="https://msdn.microsoft.com/en-us/library/bb882521(v=vs.90).aspx?ref=oncodedesign.com">Expression Tree Visitor</a> in .NET. There, you inherit from the <code>ExpressionVisitor</code> class and override the functions that interest you.</p>
<p>Even if this is simple, it doesn't mean it is the nicest solution. I don't like it for two reasons:</p>
<ol>
<li>it uses inheritance, and I have a heavy bias against inheritance</li>
<li>when a new type of command needs to be added, we have to add new functions to the <code>IVisitor</code> interface and to the <code>Visitor</code> class.</li>
</ol>
<p>So, for the sake of the exercise, let's go forward and see if we can make it better.</p>
<p>There are two directions from where we can get to a better implementation. Both have to do with generics (<a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v5?ref=oncodedesign.com">here in v5</a> is the full code for this version).</p>
<p>One direction is to start from the visitors' implementations. Another way to have only the functions we need in a visitor class is to make it implement a generic interface:</p>
<pre><code class="language-language-csharp">public interface IVisitor&lt;TElement&gt;
{
    void Visit(TElement element);
}
</code></pre>
<p>With this, the visitors implement only the functions they want. Moreover, the resulting objects expose only those functions. The <code>PurchaseOrderCommandApprover</code> is as simple as before:</p>
<pre><code class="language-language-csharp">class PurchaseOrderCommandApprover : IVisitor&lt;NewPurchaseOrderCommand&gt;
{
	public void Visit(NewPurchaseOrderCommand purchaseOrder)
	{
		// code that approves the command of creating a new purchase order.
		// this code may use external classes or services to do the approval
	}
}
</code></pre>
<p>and the <code>Report</code> visitor, which is interested in all the commands, implements the generic interface for each command type:</p>
<pre><code class="language-language-csharp">class Report : IVisitor&lt;NewCustomerCommand&gt;,
               IVisitor&lt;NewSalesOrderCommand&gt;,
               IVisitor&lt;NewPurchaseOrderCommand&gt;
{
    readonly StringBuilder report = new StringBuilder();

    public void Print()
    {
        Console.WriteLine(report);
    }

    public void Visit(NewCustomerCommand customerCommand)
    {
        report.AppendLine($&quot;New customer request: {customerCommand.Name} in business: {customerCommand.BusinessDomain}&quot;);
    }

    public void Visit(NewSalesOrderCommand salesOrderCommand)
    {
        report.AppendLine(&quot;Sales order request: &quot;);
        foreach (var line in salesOrderCommand.OrderLines)
        {
            report.AppendLine($&quot;\t - Product={line.Product} Quantity={line.Quantity}&quot;);
        }
    }

    public void Visit(NewPurchaseOrderCommand purchaseOrder)
    {
        report.AppendLine($&quot;Purchase order request: Product={purchaseOrder.Product} Quantity={purchaseOrder.Quantity}&quot;);
    }
}
</code></pre>
<p>The other direction is to start from the <code>IVisitable</code> interface, whose <code>Accept(IVisitor visitor)</code> function takes a parameter of a non-generic interface type. So, the non-generic <code>IVisitor</code> interface is still needed, but that doesn't mean it needs one function for each command type. Instead, it may have a single generic function:</p>
<pre><code class="language-language-csharp">public interface IVisitor
{
    void Visit&lt;TElement&gt;(TElement element);
}
</code></pre>
<p>Now, if we look again at the interfaces we've obtained, we have the non-generic <code>IVisitor</code> above, and the generic <code>IVisitor&lt;TElement&gt;</code> and the <code>IVisitable</code> below:</p>
<pre><code class="language-language-csharp">public interface IVisitor&lt;TElement&gt;
{
    void Visit(TElement element);
}
</code></pre>
<pre><code class="language-language-csharp">public interface IVisitable
{
     void Accept(IVisitor visitor);
}
</code></pre>
<p>Nice! None of the interfaces has any knowledge of the types of the commands (or, more abstractly, the items) being visited. When a new type of command is needed, neither the interfaces nor their implementations must change!</p>
<p>To make all this work, one last piece of the puzzle remains: the link between the two interfaces, <code>IVisitor</code> and <code>IVisitor&lt;TElement&gt;</code>. This is realized by a general implementation of <code>IVisitor</code>, which is very simple: it wraps the specific visitor and delegates the visit to it:</p>
<pre><code class="language-language-csharp">sealed class Visitor : IVisitor
{
    private readonly object specificVisitor;

    public Visitor(object specificVisitor)
    {
        this.specificVisitor = specificVisitor;
    }

    public void Visit&lt;TElement&gt;(TElement element)
    {
        IVisitor&lt;TElement&gt; v = specificVisitor as IVisitor&lt;TElement&gt;;
        v?.Visit(element);
    }
}
</code></pre>
<p>So, when an <code>IVisitable</code> command is visited, its <code>Accept()</code> function calls the above <code>Visitor.Visit&lt;TElement&gt;(TElement element)</code>. This casts the <code>specificVisitor</code> to the generic <code>IVisitor&lt;TElement&gt;</code> and, if the cast succeeds (meaning the specific visitor is interested in this element type), forwards the visit to it.</p>
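<p>It is worth spelling out how the right <code>TElement</code> is chosen on the other side of the call. Here is a minimal sketch (the interfaces are repeated from above; the command's member types are simplified so the snippet is self-contained): because <code>this</code> is statically typed as the concrete command class inside <code>Accept()</code>, the compiler infers the type argument of the generic <code>Visit&lt;TElement&gt;()</code> call:</p>
<pre><code class="language-language-csharp">// the two interfaces, as defined earlier in the post
public interface IVisitor
{
    void Visit&lt;TElement&gt;(TElement element);
}

public interface IVisitable
{
    void Accept(IVisitor visitor);
}

public class NewPurchaseOrderCommand : IVisitable
{
    public string Product { get; set; }  // member types simplified for this sketch
    public int Quantity { get; set; }

    public void Accept(IVisitor visitor)
    {
        // 'this' is statically typed as NewPurchaseOrderCommand, so type
        // inference turns this call into Visit&lt;NewPurchaseOrderCommand&gt;(this)
        visitor.Visit(this);
    }
}
</code></pre>
<p>Note that the dispatch on the element type happens at compile time here, while the dispatch on the visitor's interests happens at run time, via the cast inside <code>Visitor.Visit&lt;TElement&gt;()</code>.</p>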
<p>The client code wires up the visitors, by wrapping the specific visitor into the general one:</p>
<pre><code class="language-language-csharp">public class CommandsManager
{
    private readonly List&lt;IVisitable&gt; items = new List&lt;IVisitable&gt;();

    public void PrettyPrint()
    {
        Report report = new Report(); // the specific visitor
        IVisitor reportVisitor = new Visitor(report); // constructs the general visitor as a wrapper over the specific one.

        foreach (var item in items)
        {
            item.Accept(reportVisitor);
        }

        report.Print();
    }
....
}
</code></pre>
<p>And voilà! Everything works nicely! (The full code for this implementation is <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v5?ref=oncodedesign.com">here in v5</a>, where you can also run the demo and see how everything works together.)</p>
<p>I find this implementation nicer than the previous ones. It is very flexible, we can easily adapt it to changes, and if we used a smart Dependency Injection container, the wire-up of the general visitor and the specific visitor could be even more elegant.</p>
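<p>For example, the wire-up could be hidden behind a small helper (this extension method is my own sketch, not part of the sample code), so that the client code never constructs the wrapper explicitly, and a container could provide the same wrapping when resolving <code>IVisitor</code>:</p>
<pre><code class="language-language-csharp">// 'Visitor' and 'IVisitor' are the classes from the snippets above
public static class VisitorExtensions
{
    // Wraps any specific visitor into the general IVisitor.
    // A DI container could register this wrapping as its factory for IVisitor.
    public static IVisitor AsGeneralVisitor(this object specificVisitor)
    {
        return new Visitor(specificVisitor);
    }
}
</code></pre>
<p>With it, the <code>PrettyPrint()</code> loop would simply call <code>item.Accept(report.AsGeneralVisitor())</code>.</p>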
<p>A small variation of it is to make the link between the two <code>IVisitor</code> interfaces visible in the interfaces themselves. We can do it by adding the <code>AsVisitor()</code> function like this:</p>
<pre><code class="language-language-csharp">public interface IVisitor&lt;TElement&gt;
{
     IVisitor AsVisitor();

     void Visit(TElement element);
}
</code></pre>
<p>Another advantage of this variation is that it makes the client code simpler, because it does not need to know about nor wire up the two <code>IVisitor</code> interfaces:</p>
<pre><code class="language-language-csharp">public class CommandsManager
{
    private readonly List&lt;IVisitable&gt; items = new List&lt;IVisitable&gt;();

    public CommandsManager()
    {
        this.items.AddRange(DemoData.GetItems());
    }

    public void PrettyPrint()
    {
        ReportVisitor reportVisitor = new ReportVisitor(); //we just construct the visitor

        foreach (var item in items)
        {
            item.Accept(reportVisitor.AsVisitor()); // we pass the general visitor
        }

        reportVisitor.Print();
    }
...
}
</code></pre>
<p>On the other hand, it makes the visitor implementations a bit more complicated, because now the wiring happens inside the visitor itself:</p>
<pre><code class="language-language-csharp">class ReportVisitor :   IVisitor&lt;NewCustomerCommand&gt;,
                        IVisitor&lt;NewSalesOrderCommand&gt;,
                        IVisitor&lt;NewPurchaseOrderCommand&gt;
{
    readonly StringBuilder report = new StringBuilder();
    private readonly IVisitor visitor;

    public ReportVisitor()
    {
        visitor = new Visitor(this);
    }

    public IVisitor AsVisitor()
    {
        return visitor;
    }
...
}
</code></pre>
<p>To sum it up, we started from the &quot;by the book&quot; implementation of the Visitor Pattern that we did in the <a href="https://oncodedesign.com/the-visitor-pattern">previous post</a>, and we tried to improve it gradually. We have reached a flexible implementation that uses generics and which <strong>does not hard-code the types of the items being visited</strong>. I think this is one of the greatest differences. It opens the possibility to embed a generic Visitor Pattern implementation somewhere in the infrastructure of a project, but... more about this in a future post.</p>
<p>The entire source code with all the versions is available on GitHub as part of my <a href="https://oncodedesign.com/training-design-patterns">Design Patterns Explained</a> course, structured in one folder per version:</p>
<ul>
<li><a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v3?ref=oncodedesign.com">v3</a> - is the state where we left off in the previous post. A &quot;by the book&quot; implementation</li>
<li><a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v4?ref=oncodedesign.com">v4</a> - uses a base class to let specific visitors override only the methods they are interested in</li>
<li><a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v5?ref=oncodedesign.com">v5</a> - is a better implementation based on generics</li>
<li><a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor/v6?ref=oncodedesign.com">v6</a> - is a small variation of v5, by adding an explicit link between the <code>IVisitor</code> interfaces</li>
</ul>
<h5 id="manymorepatternsareexplainedwithexamplesinmydesignpatternsexplainedcourse">Many more patterns are explained with examples in my <a href="https://oncodedesign.com/training-design-patterns">Design Patterns Explained</a> course</h5>
<h6 id="featuredimagesourceimdbvikings">Featured image source: <a href="http://www.imdb.com/media/rm2334892288/tt2306299?ref=oncodedesign.com">IMDb - Vikings</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ The Visitor Pattern ]]>
            </title>
            <description>
                <![CDATA[ Represents an operation to be performed on the elements of an object structure.

Visitor lets you define a new operation without changing the classes of the elements on which it operates. ]]>
            </description>
            <link>https://oncodedesign.com/blog/the-visitor-pattern/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b99</guid>
            <category>
                <![CDATA[ design patterns ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 28 Jul 2016 10:36:34 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/07/46585186_m.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <p>I have recently given my <em><a href="https://oncodedesign.com/training-design-patterns">Design Patterns Explained</a></em> training, and it felt like the Visitor Pattern discussion created the most <em>aha moments</em> in the audience. It seems that people have a hard time getting this pattern, so I thought I would explain my understanding of it in this post.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Design_Patterns?ref=oncodedesign.com">Gang of Four book</a> says:</p>
<blockquote>
<p>Represents an operation to be performed on the elements of an object structure.</p>
</blockquote>
<blockquote>
<p>Visitor lets you define a new operation without changing the classes of the elements on which it operates.</p>
</blockquote>
<p>Go back and read the last sentence once more. Quite an ambitious desire, to add operations without modifying the classes, isn't it?</p>
<p>Let's see what this actually means. I like to explain this pattern using the following example:</p>
<p>Say we have some classes that represent commands:</p>
<pre><code class="language-language-csharp">class PurchaseOrderCommand
{
	public Product Product { get; set;}
	public int Quantity { get; set;}
}

class SalesOrderCommand
{
	public IEnumerable&lt;OrderLine&gt; OrderLines { get; set;}
	public string CustomerCode { get; set; }
	public DateTime Date { get; set; }
}

class CustomerCommand
{
	public string Name { get; set; }
	public string BusinessDomain { get; set; }
}
</code></pre>
<p>and some client code which keeps these in a data structure, a list let's say:</p>
<pre><code class="language-language-csharp">public class CommandsManager // referred to as the Client code
{
	readonly List&lt;object&gt; commands = new List&lt;object&gt;();

	// The client class has a structure (a list in this case) of the items (commands).
	// The client knows how to iterate through the structure.
	// The client needs to do different operations on the items while iterating the structure.
}
</code></pre>
<p>(A list is one of the simplest data structures we use. Even though the Visitor Pattern addresses the cases when the data structure is complex and difficult to iterate, for the simplicity of this example I use only a list.)</p>
<p>Now, let's add some operations on these items. Say we want to add:</p>
<ul>
<li><code>PrettyPrint()</code> - a function that will produce a nice report with all the commands which are pending</li>
<li><code>Approve()</code> - a function that will approve each command for execution</li>
<li><code>Save()</code> - a function that will persist all the commands, so we don't lose them when the application is restarted</li>
</ul>
<p>These operations belong to different areas of concern. The <code>PrettyPrint()</code> would have presentation concerns, the <code>Approve()</code> would have business logic concerns and the <code>Save()</code> would have data access concerns.</p>
<p>One way to do these operations on all the items we have, is to add these functions to the client code, the <code>CommandsManager</code> class in our example. The result would be:</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/07/VisitorPattern-1.png" alt="" loading="lazy"></p>
<p>and the code would be:</p>
<pre><code class="language-language-csharp">public class CommandsManager
{
	readonly List&lt;object&gt; items = new List&lt;object&gt;();

	public void PrettyPrint()
	{
		foreach (var item in items)
		{
			if (item is PurchaseOrderCommand)
			   Print((PurchaseOrderCommand)item);
			else if (item is SalesOrderCommand)
				Print((SalesOrderCommand)item);
			else if (item is CustomerCommand)
				Print((CustomerCommand)item);
		}
	}

	private void Print(PurchaseOrderCommand item)
	{
		Console.WriteLine($"Purchase order command: Product={item.Product} Quantity={item.Quantity}");
    }

	private void Print(SalesOrderCommand item)
	{
		Console.WriteLine("Sales order command: ");
		foreach (var line in item.OrderLines)
		{
			Console.WriteLine($"\t Product={line.Product} Quantity={line.Quantity}");
		}
	}

	private void Print(CustomerCommand item)
	{
		Console.WriteLine($"New customer command: {item.Name} in business: {item.BusinessDomain}");
	}

	public void ApproveAll()
	{
		foreach (var item in items)
		{
			if (item is PurchaseOrderCommand)
				Approve((PurchaseOrderCommand)item);
			else if (item is SalesOrderCommand)
				Approve((SalesOrderCommand)item);
			else if (item is CustomerCommand)
				Approve((CustomerCommand)item);
		}
	}

	private void Approve(CustomerCommand item)
	{
		// Interact w/ the database and use external services to approve a new customer command
	}

	private void Approve(SalesOrderCommand item)
	{
		// Interact w/ the database and use external services to approve a new sales order command
	}

	private void Approve(PurchaseOrderCommand item)
	{
		// Interact w/ the database and use external services to approve a new purchase order command
	}

	public void Save()
	{
		// This might mix data access concerns w/ UI concerns
	}
}
</code></pre>
<p>This approach is not a good solution in most contexts. The separation of concerns is poor and the cost of change is high. Each time a new type of command appears, we need to change this client class. Even more, changes in the presentation logic may affect the <code>Approve()</code> or the <code>Save()</code> code, because all of these are implemented in the same class for all our commands.</p>
<p>Another way is to add these operations to each of the command classes. This would look like:<br>
<img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/07/VisitorPattern-2.png" alt="" loading="lazy"></p>
<p>and the code like:</p>
<pre><code class="language-language-csharp">public class CommandsManager
{
	readonly List&lt;ICommand&gt; commands = new List&lt;ICommand&gt;();

	public void ApproveAll()
	{
		foreach (var item in commands)
		{
			item.Approve();
		}
	}

	public void PrettyPrint()
	{
		foreach (var item in commands)
		{
			item.PrettyPrint();
		}
	}
}

class PurchaseOrderCommand : ICommand
{
	public void Approve()
	{
		// Interact w/ the database and use external services to process a new purchase order command
	}

	public void PrettyPrint()
	{
		Console.WriteLine($"Purchase order command: Product={Product} Quantity={Quantity}");
	}

	public Product Product { get; }
	public int Quantity { get;  }
}
...

</code></pre>
<p>Here the code that iterates through the data structure (the list of commands) is isolated from the code that implements the commands. This is an advantage over the previous approach. However, now each time we need to add a new operation, we have to change all the existing command classes. This may induce a high cost of change for two reasons: first, we still have a poor separation of concerns (one command class has presentation code mixed with business logic code and data access code); second, changing the interface of these classes triggers changes in all the other classes that use them.</p>
<p>So, can we come up with a better design than these two approaches? Yes, if we apply the Visitor Pattern.</p>
<p>We apply it by evolving our previous designs. We define two interfaces: <code>IVisitable</code> and <code>IVisitor</code>. The class diagram is now like this:</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/07/VisitorPattern-3.png" alt="" loading="lazy"></p>
<p>Let's go through the code and see how the pieces interact.</p>
<p>The <code>IVisitable</code> interface is implemented by all the items in the data structure (by all the command classes in our example). The operations we wanted are not functions of this interface, because the operations may vary. This interface has only one function that will accept a visitor.</p>
<pre><code class="language-language-csharp">public interface IVisitable
{
	void Accept(IVisitor visitor);
}
</code></pre>
<p>By implementing this, each item from the data structure passes itself to the visitor and lets the visitor implement whatever operation it wants on it.</p>
<p>The implementation is trivial. It just calls the appropriate <code>VisitXXX()</code> method of the visitor:</p>
<pre><code class="language-language-csharp">public class PurchaseOrderCommand : IVisitable
{
	public void Accept(IVisitor visitor)
	{
		visitor.VisitPurchaseOrderCommand(this);
	}
	...
}

public class SalesOrderCommand : IVisitable
{
	public void Accept(IVisitor visitor)
	{
		visitor.VisitSalesOrderCommand(this);
	}
	...
}

public class CustomerCommand : IVisitable
{
	public void Accept(IVisitor visitor)
	{
		visitor.VisitCustomerCommand(this);
	}
	...
}
</code></pre>
<p>The client code (<code>CommandsManager</code>) iterates through the data structure, and for each item it calls the <code>Accept(IVisitor)</code> method, passing the visitor that visits that item.</p>
<pre><code class="language-language-csharp">public class CommandsManager
{
	private readonly List&lt;IVisitable&gt; items = new List&lt;IVisitable&gt;();

	public void PrettyPrint()
	{
		ReportVisitor report = new ReportVisitor();
		foreach (var item in items)
		{
			item.Accept(report);
		}

		report.Print();
	}
        ...
}
</code></pre>
<p>Now, the client code does one thing only: it iterates through the data structure. The implementation of any operation on the data structure or on its elements is forwarded to the visitor classes.</p>
<p>The visitors are classes that focus on the behavior only and on one operation only. They implement the operations we want. When we need a new operation we will create a new visitor and so on. For example the <code>ReportVisitor</code> creates a printable report with all the commands. It is used to implement the <code>PrettyPrint()</code> operation.</p>
<pre><code class="language-language-csharp">class ReportVisitor : IVisitor
{
	private readonly StringBuilder report = new StringBuilder();

	public void VisitCustomerCommand(CustomerCommand customerCommand)
	{
		report.AppendLine($"New customer command: {customerCommand.Name} in business: {customerCommand.BusinessDomain}");
	}

	public void VisitSalesOrderCommand(SalesOrderCommand salesOrderCommand)
	{
		report.AppendLine("Sales order command: ");
		foreach (var line in salesOrderCommand.OrderLines)
		{
			report.AppendLine($"\t Product={line.Product} Quantity={line.Quantity}");
		}
	}

	public void VisitPurchaseOrderCommand(PurchaseOrderCommand purchaseOrder)
	{
		report.AppendLine($"Purchase order command: Product={purchaseOrder.Product} Quantity={purchaseOrder.Quantity}");
	}

	public void Print()
	{
		Console.WriteLine(report);
	}
}
</code></pre>
<p>We can design the visitors as we want. We may have one visitor that does one operation (the printing) for all the items (commands), as above, or we can have one visitor for one operation on one type of item, as in the following example.</p>
<pre><code class="language-language-csharp">class PurchaseOrderCommandApprover : IVisitor
{
	public void VisitPurchaseOrderCommand(PurchaseOrderCommand purchaseOrder)
	{
		// code that approves the command of creating a new purchase order.
		// this code may use external classes or services to do the approval
	}
	public void VisitCustomerCommand(CustomerCommand customerCommand)
	{	// we do nothing here because we only deal with new purchase orders approval
	}

	public void VisitSalesOrderCommand(SalesOrderCommand salesOrderCommand)
	{	// we do nothing here because we only deal with new purchase orders approval
	}
}

class CustomerCommandApprover : IVisitor
{
	private ICrmService crmService;
	public CustomerCommandApprover(ICrmService crmService)
	{
		this.crmService = crmService;
	}

	public void VisitCustomerCommand(CustomerCommand customerCommand)
	{
		// code that approves the command of creating a new customer.
		// uses the ICrmService to do the approval
	}

	public void VisitSalesOrderCommand(SalesOrderCommand salesOrderCommand)
	{	// we do nothing here because we only deal with new customer approval
	}

	public void VisitPurchaseOrderCommand(PurchaseOrderCommand purchaseOrder)
	{	// we do nothing here because we only deal with new customer approval
	}
}
</code></pre>
<p>The way we design the visitors depends on the operation we implement. For the <code>ReportVisitor</code> we wanted a report with all the types of commands, so it made sense to have code for when any element from the structure is visited. For the <code>Approve()</code> operation it is the other way around. Here, because the logic for approving a purchase order command is very different from the logic for approving a new customer command, we need a better separation. We created one visitor class for the approval of each element type. It only has code in the <code>VisitXXX()</code> method that corresponds to the element type it is interested in. These classes may have different dependencies: the <code>CustomerCommandApprover</code> needs the <code>ICrmService</code> while the others do not. They will also change and evolve separately, so it makes sense to make this kind of separation.</p>
<p>By applying the Visitor Pattern we have achieved a design with a good separation of concerns. Now the client code that iterates the data structure (<code>CommandsManager</code>) is separated from the implementation of the operations. The items (the command classes) only hold their specific data and do not have behavior. The operations are implemented by the visitors, and we can easily define and add new ones.</p>
<p>The Visitor Pattern brings the most value in contexts where the data structure is complex (a tree or a graph), the types of items it holds (the nodes) are quite fixed, and the operations we want on those items vary a lot. Good examples are <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree?ref=oncodedesign.com">syntax trees</a>, which represent source code in a tree data structure. Here the programming language grammar is fixed, which means that the nodes in the tree are fixed. After the syntax tree is created by parsing the code, we can define different visitors that visit each node, and we can add any operation on the code. This is useful for doing static code analysis, generating code, printing code etc.</p>
<p>A good example of a Visitor implementation in the .NET Framework is the <a href="https://msdn.microsoft.com/en-us/library/system.linq.expressions.expressionvisitor(v=vs.110).aspx?ref=oncodedesign.com"><code>ExpressionVisitor</code></a> class. We can inherit from it to implement different operations on a lambda expression. I have used it many times to parse LINQ queries and alter them before they are sent further.</p>
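<p>To illustrate, here is a minimal <code>ExpressionVisitor</code> subclass (my own sketch, not taken from the sample code) that overrides only the single method it cares about, in order to rewrite the integer constants of an expression tree:</p>
<pre><code class="language-language-csharp">using System;
using System.Linq.Expressions;

// Doubles every int constant found while visiting the expression tree
class DoubleConstantsVisitor : ExpressionVisitor
{
    protected override Expression VisitConstant(ConstantExpression node)
    {
        if (node.Value is int i)
            return Expression.Constant(i * 2);
        return base.VisitConstant(node);
    }
}
</code></pre>
<p>Visiting <code>x =&gt; x + 1</code> with it produces <code>x =&gt; x + 2</code>, without touching the original tree: expression trees are immutable, so the visitor rebuilds only the nodes that change.</p>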
<p>If we go back to our initial example, we can make another important observation: the Visitor Pattern helps us follow the principle that says <em>"Separate Data from Behavior"</em>, a principle I first read about in the <a href="https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882?ref=oncodedesign.com">Clean Code</a> book. We started with our command classes, which were only data: data that represents a command. First, we wanted to keep them clean of behavior, so we added the operations in the client code, resulting in a <code>switch</code> on the object type. Not an OOP design. Then we moved the behavior into the command classes. This did not separate it from data, and we've seen the drawbacks. In the end, by applying the <em>Visitor Pattern</em>, we managed to get a good separation and a reduced cost of change, as pursued by the <em>Separate Data from Behavior</em> principle. I won't go into more detail about this principle here, even though it would make an interesting discussion. I'll do it in a future post.</p>
<p>I believe that the Visitor Pattern is not too complex, and used in the correct context it can lead to a better design, a design that embraces change. I hope the example I've described here is a useful addition to all the other good writings on this pattern. The entire source code, including a running demo, is available on GitHub <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/DesignPatterns/ConsoleDemo/Visitor?ref=oncodedesign.com">here</a> as part of my <a href="https://oncodedesign.com/training-design-patterns">Design Patterns Explained</a> course.</p>
<h5 id="many-more-patterns-are-explained-with-examples-in-my-design-patterns-explained-course">Many more patterns are explained with examples in my <a href="https://oncodedesign.com/training-design-patterns">Design Patterns Explained</a> course</h5>
<h6 id="featured-image-credit-fwsam-via-123rf-stock-photo">Featured image credit: <a href="http://www.123rf.com/profile_FWSAM?ref=oncodedesign.com">FWSAM via 123RF Stock Photo</a></h6>
 ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ My Wordpress to Ghost Journey ]]>
            </title>
            <description>
                <![CDATA[ Migrating my blog from Wordpress.com to a self-hosted blog using Ghost was a lot of work. More than I had anticipated. If I look at my Trello board where I keep track of it, I see that I&#39;ve started to work on this on January the ]]>
            </description>
            <link>https://oncodedesign.com/blog/my-wordpress-to-ghost-journey/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76ba8</guid>
            <category>
                <![CDATA[ ghost ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 05 Jul 2016 08:38:40 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/07/Wordpress-Ghost.PNG" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>Migrating my blog from <a href="http://wordpress.com/?ref=oncodedesign.com">Wordpress.com</a> to a self-hosted blog using <a href="https://ghost.org/?ref=oncodedesign.com">Ghost</a> was a lot of work. More than I had anticipated. If I look at my <a href="http://trello.com/?ref=oncodedesign.com">Trello</a> board where I keep track of it, I see that I started to work on this on January the 20th and I am not yet done. It is up, it looks good, but there are still some items on my TODO list. It took me this long for three main reasons:</p>
<ul>
<li>it is a lot of work</li>
<li>I work on it only on weekends or at night (I'm going through a very busy period with the <em>real work</em>), and</li>
<li>I'm very particular about the result (I can spend hours on a small detail until it looks and works the way I want).</li>
</ul>
<p>But it was worth it! I'm very happy with the result. I like how it looks and how it works, and I've achieved everything <a href="https://oncodedesign.com/a-new-look">I wanted when I started this</a>. However, there are a few things I miss from Wordpress (I'll get to them below).</p>
<p>In all of this work I was guided by the experience of others who went through a similar process. I've learned by reading many blogs, like <a href="https://troyhunt.com/?ref=oncodedesign.com">Troy Hunt's</a>, who not only <a href="https://www.troyhunt.com/creating-blog-for-your-non-techie/?ref=oncodedesign.com">contributed to my choice of going with Ghost</a>, but was also migrating his own blog at the same time as me :)</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/florincoros?ref=oncodedesign.com">@florincoros</a> nice one!</p>&mdash; Troy Hunt (@troyhunt) <a href="https://twitter.com/troyhunt/status/722555586756259844?ref=oncodedesign.com">April 19, 2016</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>So, I've decided to take some time and detail how I did it; maybe it'll be useful to others who are considering such a migration. Troy did the same <a href="https://www.troyhunt.com/its-a-new-blog/?ref=oncodedesign.com">here</a>, and even if I've learned a lot from him and there are similarities, we had different contexts, so there are also differences.</p>
<p>It is going to be a long post, but I have made a list below with all the sections, so you can easily jump to what might be useful for you.</p>
<hr>
<h3 id="sections">Sections</h3>
<ul>
<li><a href="#choosingtheplatform">Choosing the Platform</a></li>
<li><a href="#migratethecontent">Migrate the Content</a></li>
<li><a href="#thetheme">The Theme</a></li>
<li><a href="#hosting">Hosting</a></li>
<li><a href="#codesyntaxhighlighting">Code Syntax Highlighting</a></li>
<li><a href="#redirectfromwordpress">Redirect from Wordpress</a></li>
<li><a href="#https">HTTPS</a></li>
<li><a href="#whatimissfromwordpress">What I Miss from Wordpress</a></li>
<li><a href="#todos">TODOs</a></li>
</ul>
<hr>
<h3 id="choosingtheplatform">Choosing the Platform</h3>
<p>The goal wasn't Ghost. As I said in <a href="https://oncodedesign.com/a-new-look">the post that announced the new look</a> I wanted to get away from a <strong>free</strong> Wordpress.com site, with a free theme and to give the blog a more professional look. The options I had in mind were:</p>
<ul>
<li>build or buy a theme for Wordpress and migrate to a professional Wordpress host provider</li>
<li>Ghost self host or <a href="https://ghost.org/?ref=oncodedesign.com">Ghost Pro</a></li>
<li><a href="https://pages.github.com/?ref=oncodedesign.com">Github Pages</a></li>
</ul>
<p>I was biased against Wordpress, because I had a lot of pain with it. It always felt too complex and rigid for what I wanted. I was still considering it, thinking that a paid service should be better. My biggest pain with it was writing the posts. Some of my posts have a lot of code, and pasting the code from Visual Studio and then formatting it to look good in the post was a cumbersome process. I was wasting a lot of time with that.</p>
<p>Ghost was attractive because of the Markdown. I can paste code from Visual Studio into the editor and it keeps the formatting. No more inserting spaces and line breaks manually. Plus, I can easily mark a code structure using the inline code construct, like <code>this</code>.</p>
<p>I've started by reading how other technical bloggers feel about it. I've read the following posts (at least):</p>
<ul>
<li>Michele Bustamante: <a href="http://michelebusta.com/i-love-ghost/?ref=oncodedesign.com">i. love. ghost.</a></li>
<li>Troy Hunt: <a href="https://www.troyhunt.com/creating-blog-for-your-non-techie/?ref=oncodedesign.com">Creating a blog for your non-techie significant other; the path to Ghost</a></li>
<li>Scott Hanselman: <a href="http://www.hanselman.com/blog/HowToInstallTheNodejsGhostBloggingSoftwareOnAzureWebsites.aspx?ref=oncodedesign.com">How to install the nodejs Ghost blogging software on Azure Websites</a></li>
<li>Ryan Hayes: <a href="http://ryanhayes.net/ghost-vs-wordpress-review-and-how-to-migrate/?ref=oncodedesign.com">Ghost VS WordPress (and Why I Migrated Back to WordPress)<br>
</a></li>
</ul>
<p>After reading these I was pretty much convinced. I know I should have given Wordpress or Github Pages a better chance and made a more informed decision, but... this felt right. Wordpress failed me once, and Github Pages feels more focused on building sites for projects rather than blogs.</p>
<p>Next, to make a final call on this I needed to get some info about:</p>
<ul>
<li>what it means to migrate the content from Wordpress to Ghost</li>
<li>how hard it is to build / buy / make a theme for Ghost</li>
</ul>
<hr>
<h3 id="migratethecontent">Migrate the Content</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>The posts' content is the most valuable thing you want to migrate. This is in fact the point of the migration; otherwise we would be talking about a new blog. For me it was also the most time consuming part.</p>
<p>Before making the final decision of going with Ghost, I've started to search and read about the migration to it. Among others, I've read these:</p>
<ul>
<li>Ghost for Beginners: <a href="https://www.ghostforbeginners.com/how-to-transfer-blog-posts-from-wordpress-to-ghost/?ref=oncodedesign.com">Migrating from WordPress to Ghost</a></li>
<li>All About Ghost: <a href="https://www.allaboutghost.com/migrating-your-wordpress-blog-to-ghost/?ref=oncodedesign.com">Moving your Blog from WordPress To Ghost</a></li>
</ul>
<p>It seemed not only doable, but quite easy. Let's see how it went.</p>
<p>I've had 31 posts and a few pages to migrate. Not too many.</p>
<p>I could not just move the HTML of the old posts over into Ghost markdown, because it was ugly HTML. It had inline CSS: in Wordpress I was using a free theme that was not offering much, so I was colouring some text headers in the editor, and that resulted in HTML mixed with CSS, like this:</p>
<pre><code class="language-html">&lt;span style=&quot;color:#50b4c8;&quot;&gt;Part 1: reviewing few concepts&lt;/span&gt;
</code></pre>
<p>This colour wouldn't look right with the new blog design.</p>
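<p>Had I wanted to clean the HTML directly, stripping the inline styles is almost a one-liner (a rough Python sketch of the idea; a real cleanup would use an HTML parser rather than a regex):</p>

```python
import re

# Rough sketch: strip inline style attributes from exported post HTML.
# A proper cleanup should use an HTML parser; this only shows the idea.
def strip_inline_styles(html):
    return re.sub(r'\s+style="[^"]*"', "", html)

dirty = '<span style="color:#50b4c8;">Part 1: reviewing few concepts</span>'
print(strip_inline_styles(dirty))
# -> <span>Part 1: reviewing few concepts</span>
```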
<p>Another blocker was the code snippets. The code would have been unreadable elsewhere than in that particular Wordpress theme.</p>
<p>So I used the <a href="https://wordpress.org/plugins/ghost/faq/?ref=oncodedesign.com">Wordpress Ghost Plugin</a> to migrate the content, as explained in the articles above. But it wasn't straightforward. First, you cannot install plugins on a free Wordpress.com site. Secondly, the images are not migrated by the plugin.</p>
<p>I've followed the instructions from this <a href="https://www.hughrundle.net/2014/03/02/how-i-moved-from-wordpress-to-ghost-and-what-i-learned-along-the-way/?ref=oncodedesign.com">post</a> by Hugh Rundle, and I have used a self-hosted Wordpress install as an intermediate step, as follows:</p>
<ol>
<li>I've installed a fresh Wordpress on Azure. I found in Azure Marketplace a Wordpress + Mysql machine, published by Docker, which I deployed easily as a standalone VM (any other Wordpress installation will do).</li>
<li>I've exported the content from Wordpress.com using <a href="https://en.support.wordpress.com/export/?ref=oncodedesign.com">this guide</a> and then I've imported it on the installation from Azure.</li>
</ol>
<p>At this point I had my content on the self-hosted Wordpress on Azure. On this one I could install plugins.</p>
<p>The images were still on the Wordpress.com site, so when I opened a post from the self-hosted site, the images were downloaded from <code>florincoros.wordpress.com</code>. If I migrated the posts to Ghost now, the images would remain on <code>florincoros.wordpress.com</code>. I didn't like this. I couldn't download them from Wordpress.com to upload them to Ghost, because Wordpress does not offer such an export, and even if it did, all the URLs would need to be changed and I'd be limited to a self-hosted Ghost.</p>
<p>What I did was to use <a href="http://cloudinary.com/?ref=oncodedesign.com">Cloudinary</a>. Cloudinary is an image management solution in the cloud. You can upload your images there and then easily use them anywhere, with some neat image manipulation features based on the URL format. They also have a useful <a href="https://wordpress.org/plugins/cloudinary-image-management-and-manipulation-in-the-cloud-cdn/?ref=oncodedesign.com">Wordpress plugin</a>.</p>
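<p>The URL-based manipulation is the part I found neat: the transformation is encoded as a path segment of the image URL. For example (sketched in Python; the cloud name and file name below are made-up placeholders, not my real account):</p>

```python
# Cloudinary encodes image transformations in the URL path.
# "demo" and "sample.jpg" are placeholders, not a real cloud name or image.
def cloudinary_url(cloud, public_id, transform=""):
    base = f"https://res.cloudinary.com/{cloud}/image/upload"
    return f"{base}/{transform}/{public_id}" if transform else f"{base}/{public_id}"

# A 300x200, cropped-to-fill variant of the same image, just by changing the URL:
print(cloudinary_url("demo", "sample.jpg", "w_300,h_200,c_fill"))
# -> https://res.cloudinary.com/demo/image/upload/w_300,h_200,c_fill/sample.jpg
```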
<p>Having the blog copied on the self-hosted Wordpress on Azure, I did the following steps to migrate to Ghost:</p>
<ol>
<li>Install the <a href="https://wordpress.org/plugins/cloudinary-image-management-and-manipulation-in-the-cloud-cdn/?ref=oncodedesign.com">Cloudinary plugin for Wordpress</a> and upload all my images to Cloudinary. The plugin also modified all the URLs in my posts, so now all posts take the images from Cloudinary and not from <code>florincoros.wordpress.com</code>.</li>
<li>Install the <a href="https://wordpress.org/plugins/ghost/faq/?ref=oncodedesign.com">Wordpress Ghost Plugin</a> and use it to export the content in Ghost format</li>
<li>Import the content into my new Ghost installation</li>
</ol>
<p>After this I had the bulk of my content available on the new blog. There were still a few things to adjust:</p>
<ul>
<li>Code snippets
<ul>
<li>Wordpress uses <code>[code lang=”csharp”] ... [/code]</code> to mark a code block, and this is what I got, verbatim, in the post text :( . It doesn't look like code.</li>
<li>some snippets had poor formatting due to the spaces I had manually added in the Wordpress editor</li>
</ul>
</li>
<li>Featured images
<ul>
<li>Because I had a poor free theme on Wordpress, for each post I inserted its image as content in the post and also selected the same image as the <em>featured image</em> of the post. Now in Ghost, each post gets the image twice :(</li>
</ul>
</li>
<li>Image URLs
<ul>
<li>The Cloudinary plugin did a good job in most cases, but a few images point to an IP address. I guess somewhere along the import/export plugins there was a bug that caused this (that IP address is the one Azure gave to my Wordpress VM).</li>
<li>all the URLs were rewritten with HTTP, not HTTPS. I am not sure why. Maybe I missed a setting in the plugin, or maybe because they were HTTP on the original Wordpress too.</li>
</ul>
</li>
<li>Words in italics
<ul>
<li>in the Wordpress editor it often happened that when I wanted one word from a phrase in italics, I also made the following space character italic. When this is converted to markdown we get a space between the word and the <code>*</code>. The result looks like this: *word in italics **next word, instead of <em>word in italics</em> next word.</li>
</ul>
</li>
</ul>
<p>All of these I had to fix manually by editing each post. This is the time consuming part. I gained a lot of speed by using a smart editor like <a href="https://code.visualstudio.com/?ref=oncodedesign.com">Visual Studio Code</a>, where I can find &amp; replace all <code>[code lang=”csharp”]</code> with <code>```language-csharp</code>, or I can use the multi-cursor or <a href="https://code.visualstudio.com/Docs/editor/editingevolved?ref=oncodedesign.com">other neat features</a> for fast editing to format the code. However, it's still a lot of manual work. Maybe if I had hundreds of posts I would have built a tool that parses the json the Wordpress Ghost Plugin generates and fixes all of the above there, before importing it into Ghost.</p>
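<p>Such a tool could be quite small. Here is a sketch of what it might have looked like (the export structure is an assumption from memory of the Ghost 0.x format, and it assumes straight quotes in the shortcodes; the fixes mirror the list above):</p>

```python
import json, re

# Sketch of the post-processing tool I would have built for a bigger blog.
# Assumption: the Wordpress Ghost Plugin export is JSON with each post
# carrying a "markdown" field (as in Ghost 0.x exports); adjust to your file.
FENCE = chr(96) * 3  # three backticks, built indirectly so they don't close this block

def fix_post(markdown):
    # [code lang="csharp"] ... [/code]  ->  fenced code block
    markdown = re.sub(r'\[code lang="(\w+)"\]', FENCE + r'language-\1', markdown)
    markdown = markdown.replace("[/code]", FENCE)
    # plain HTTP image URLs -> HTTPS
    markdown = markdown.replace("http://res.cloudinary.com", "https://res.cloudinary.com")
    # move the stray space inside italics: "*word *next" -> "*word* next"
    markdown = re.sub(r'\*(\S[^*]*?) \*', r'*\1* ', markdown)
    return markdown

def fix_export(path_in, path_out):
    with open(path_in) as fin:
        export = json.load(fin)
    for post in export["db"][0]["data"]["posts"]:
        post["markdown"] = fix_post(post["markdown"])
    with open(path_out, "w") as fout:
        json.dump(export, fout, indent=2)
```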
<hr>
<h3 id="thetheme">The Theme</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>I knew from the beginning that I can't build or customise a theme. I don't have web design skills and I'm not good with HTML or CSS. So, even before deciding to go with Ghost, I asked my good friends at <a href="http://www.dalimedia.ro/?ref=oncodedesign.com">Dali Media</a> whether they would prefer building my blog theme for Wordpress or for Ghost. Even though they have a lot of experience building Wordpress sites, I got the feeling that it wasn't all that pleasant. They preferred Ghost even though they didn't know it.</p>
<p>For me the theme was a paid service. I sat down with them, explained what I wanted, and in a few days, after a few iterations, I got it. I'm very happy with the result: it looks good on the phone, on the tablet and on the desktop, and everything I wanted was doable quite fast. I don't know all the details, but I can say that if you are good with HTML, CSS and JavaScript you can figure out quite fast how Ghost works and how to build or customise a theme.</p>
<p>I know that they had to do some custom work to have the dropdown in the menu I wanted for the <em>Training</em> entry, because Ghost does not support this yet. The drawback for me is that I cannot maintain it from the <code>/ghost/settings/navigation/</code> and I'll need to change the theme files when I want something changed in there. Another thing hardcoded in the theme is the bar I have on the right. This also needs to be maintained from the theme's code and not from the editor. The <em>Recent posts</em> box uses the API Ghost offers, so that is not hardcoded, but if I'd want to take out this box, I'd need to change the template for the right bar. I don't mind these. I will change them rarely and the code is clean. I've put it in git, so I have history and from there I can easily deploy updates to these parts.</p>
<p>The entire work for the theme was contained in its folder, so with what I got from the guys at <a href="http://www.dalimedia.ro/?ref=oncodedesign.com">Dali Media</a> I can go either to a self-hosted installation or to Ghost Pro.</p>
<p>Having the content migration figured out, and a theme for Ghost built the next thing was to decide on hosting.</p>
<hr>
<h3 id="hosting">Hosting</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>From the very beginning, I wanted to go with a SaaS offer. After I've decided to go with Ghost, I was convinced to go with Ghost Pro offer. So the entire work on the migration and on the theme went on this assumption. I was using an installation on Azure, but only for development and testing purposes.</p>
<p>When I reached the <em>Migrate to GhostPro</em> item in my TODO list, this changed. The price was 29$ / month. I remembered it being a lot cheaper when I first read about it on <a href="https://www.troyhunt.com/creating-blog-for-your-non-techie/?ref=oncodedesign.com">Troy's blog</a>. I've <a href="http://thenextweb.com/insider/2014/09/30/ghost-introduces-new-plans-pricing-ghostpro-hosted-blogging-service/?ref=oncodedesign.com#gref">searched a bit</a> and indeed the price grew from 5$ / month to 8$ / month, then to 10$ / month and now to 29$ / month for the smallest plan. Troy makes in <a href="https://www.troyhunt.com/its-a-new-blog/?ref=oncodedesign.com">his post</a> a very compelling argument in GhostPro's favour, but 228$/year seems a bit too much for hosting to me. Adding to this that their price more than tripled in about a year... I decided to wait a bit before going to Ghost Pro. So, I turned my Azure installation into the production one, at least for now. The first time I feel that the extra work needed to manage the self-hosted installation on Azure is significant, I'll move to GhostPro.</p>
<p>On Azure, I easily installed it as an App Service on the free plan, using <a href="https://github.com/felixrieseberg/Ghost-Azure?ref=oncodedesign.com#running-locally">this</a> Github repository from Felix Rieseberg. I also made an upgrade of Ghost, to see how complicated that is before making the final call on the self-hosted installation. I used <a href="https://github.com/felixrieseberg/Ghost-Updater-Azure?ref=oncodedesign.com">this</a> upgrade tool from the same Felix Rieseberg. Again, it worked with a simple button click. So it seems that self-hosting is not going to be that difficult. I get the PaaS from Azure, so I don't need to think about the OS, web server etc.; I just need to upgrade Ghost.</p>
<p>For developing and testing, the free plan was more than enough. When I decided to use it for production I upgraded to the basic plan, because I wanted a better service. I used features like AlwaysOn, Custom Domain, etc. This makes sense, money wise, only because I have enough credits in my Azure subscription. A B2 costs 55.21EUR a month, which is a lot more than GhostPro. Maybe the shared plan (D1), at 8.16EUR a month, would also be enough.</p>
<p>To setup my domain I have followed <a href="https://azure.microsoft.com/en-gb/documentation/articles/web-sites-godaddy-custom-domain-name/?ref=oncodedesign.com">this</a> guide. After everything was done, I also had to setup the domain name in the Azure portal in the <em>Application settings</em>, so Ghost knows it.<br>
<img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/05/Azure-WebApp-AppSettings.png" alt="" loading="lazy"></p>
<p>With this done, I had my site migrated and working at <code>http://oncodedesign.com</code>. At this point it wasn't on HTTPS yet. I was using <code>https://oncodedesign.azurewebsites.net/ghost</code> to log in and access the admin page, but the rest was HTTP. I wanted to get to a version I could show to the world before fixing this.</p>
<hr>
<h3 id="codesyntaxhighlighting">Code Syntax Highlighting</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>For code syntax highlighting I have used <a href="http://prismjs.com/?ref=oncodedesign.com">PrismJS</a>. I appreciate the most two key aspects of it:</p>
<ul>
<li><strong>very easy to install</strong> - I've just followed the steps <a href="http://blog.davebalmer.com/adding-syntax-highlighting-to-ghost/?ref=oncodedesign.com">here</a> and I was done in a few minutes</li>
<li><strong>highly extensible</strong> - The CSS is very clean. Even if I'm not good at CSS I could easily change the colours of some C# tokens that did not fit well in my theme design</li>
</ul>
<p>Having the code syntax highlighting done and the theme installed and configured, I next spent some time adjusting the menu and the content of the pages it links to, and fixing the content of most of the posts after the migration. With this done, the first version of my blog on Ghost at <a href="https://oncodedesign.com/?ref=oncodedesign.com">oncodedesign.com</a> was ready to be shown to the world. To make this happen, the next step was to redirect the requests from <code>florincoros.wordpress.com</code> to the new address.</p>
<hr>
<h3 id="redirectfromwordpress">Redirect from Wordpress</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>The whole idea here is that when someone uses a link to a page from my old blog, it still works: they should be redirected to that page on the new blog. This is useful for all the cases when links are saved in bookmarks or referenced on other sites, on twitter, facebook, etc., and, most importantly, for SEO. I want all the Google / Bing ranking that my posts and pages earned on the old blog to be used by the new one as well. So all the redirects need to be 301s.</p>
<p>The first step here is to make <code>florincoros.wordpress.com/&lt;some_url&gt;</code> redirect to <code>oncodedesign.com/&lt;some_url&gt;</code>. This is easy to achieve with some money :) Wordpress.com offers this as a service for 13$/year. To configure it I just had to follow the instructions <a href="https://en.support.wordpress.com/site-redirect/?ref=oncodedesign.com">here</a>.</p>
<p>The next step was to rewrite the Wordpress URLs into Ghost URLs. By default a Wordpress URL looks like <code>florincoros.wordpress.com/2015/05/06/the-post-title</code>. Now this gets redirected to <code>oncodedesign.com/2015/05/06/the-post-title</code> and the request fails, because in Ghost this post is at <code>oncodedesign.com/the-post-title</code>. To fix it I followed the instructions <a href="https://davidzych.com/migrating-from-wordpress-to-ghost-301-urls/?ref=oncodedesign.com">from this blog post</a>, only that I used the <a href="http://www.iis.net/learn/extensions/url-rewrite-module/creating-rewrite-rules-for-the-url-rewrite-module?ref=oncodedesign.com">IIS URL rewrite module</a> and specified the rewrite rules in the <code>web.config</code> of the Azure Web App.</p>
<p>Another thing I did here was to redirect the www subdomain to the naked domain, meaning that www.oncodedesign.com is redirected to oncodedesign.com. <a href="http://ryanhayes.net/redirect-www-non-www-using-web-config/?ref=oncodedesign.com">Here</a> Ryan Hayes gives a good explanation of why this is important and how to do it.</p>
<p>In the end my redirect rules look like this, in the <code>web.config</code>:</p>
<pre><code class="language-markup">
&lt;system.webServer&gt;
    ...
    &lt;rewrite&gt;
      &lt;rules&gt;
               
        &lt;rule name=&quot;Redirect to non-www&quot; stopProcessing=&quot;true&quot;&gt;
            &lt;match url=&quot;(.*)&quot;&gt;&lt;/match&gt;
            &lt;conditions&gt;
                &lt;add input=&quot;{HTTP_HOST}&quot; pattern=&quot;^oncodedesign\.com$&quot; negate=&quot;true&quot;&gt;&lt;/add&gt;
            &lt;/conditions&gt;
            &lt;action type=&quot;Redirect&quot; url=&quot;https://oncodedesign.com/{R:1}&quot;&gt;&lt;/action&gt;            
        &lt;/rule&gt;
                             
        &lt;rule name=&quot;Redirect Wordpress posts&quot; stopProcessing=&quot;true&quot;&gt;
            &lt;match url=&quot;\d{4}\/\d{2}\/\d{2}\/(.*)$&quot;&gt;&lt;/match&gt;
            &lt;action type=&quot;Redirect&quot; url=&quot;{R:1}&quot;&gt;&lt;/action&gt;            
        &lt;/rule&gt;    
        
        &lt;rule name=&quot;Redirect Wordpress training pages&quot; stopProcessing=&quot;true&quot;&gt;
            &lt;match url=&quot;training\/(.*)-training&quot;&gt;&lt;/match&gt;
            &lt;action type=&quot;Redirect&quot; url=&quot;training-{R:1}&quot;&gt;&lt;/action&gt;            
        &lt;/rule&gt;                                                
                             
        &lt;rule name=&quot;StaticContent&quot;&gt;
          &lt;action type=&quot;Rewrite&quot; url=&quot;public{REQUEST_URI}&quot;/&gt;
        &lt;/rule&gt;
        &lt;rule name=&quot;DynamicContent&quot;&gt;
          &lt;conditions&gt;
            &lt;add input=&quot;{REQUEST_FILENAME}&quot; matchType=&quot;IsFile&quot; negate=&quot;True&quot;/&gt;
          &lt;/conditions&gt;
          &lt;action type=&quot;Rewrite&quot; url=&quot;index.js&quot;/&gt;
        &lt;/rule&gt;
      &lt;/rules&gt;
    &lt;/rewrite&gt;
  &lt;/system.webServer&gt;

</code></pre>
<p>The rules I added are <code>Redirect to non-www</code>, <code>Redirect Wordpress posts</code> and <code>Redirect Wordpress training pages</code>. These rules must come before the existing ones (<code>StaticContent</code> and <code>DynamicContent</code>), which are needed by Node / Ghost. It gave me some headaches to figure this out...</p>
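<p>As a quick sanity check outside IIS, the pattern from the <code>Redirect Wordpress posts</code> rule can be exercised with the same regular expression (a Python sketch; the escaped slashes from the IIS rule are simply not needed here):</p>

```python
import re

# The pattern from the "Redirect Wordpress posts" rule above:
# a date-based Wordpress path, capturing the slug that Ghost uses.
pattern = re.compile(r'\d{4}/\d{2}/\d{2}/(.*)$')

old_path = "2015/05/06/the-post-title"
match = pattern.match(old_path)
print(match.group(1))  # the {R:1} back-reference the redirect action uses
# -> the-post-title
```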
<p><a href="http://weblogs.asp.net/jongalloway/a-quick-look-at-the-new-visual-studio-online-quot-monaco-quot-code-editor?ref=oncodedesign.com">Visual Studio Online &quot;Monaco&quot;</a> was of great help to edit the <code>web.config</code> and test the redirects directly on Azure.</p>
<p>If I were on Ghost Pro, I couldn't have written the redirects myself. They also don't yet have an admin console where you could configure them. However, from what I've read, if you email the support with the redirects you want they configure them for you.</p>
<p>With this done, my blog was migrated. From this point on I could publish new posts and add new pages. It was the moment I have announced <a href="https://oncodedesign.com/a-new-look">the new look</a>.</p>
<hr>
<h3 id="https">HTTPS</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>The last thing I did was to set everything to go through HTTPS. For this I rely entirely on <a href="https://www.cloudflare.com/?ref=oncodedesign.com">CloudFlare</a>. Troy Hunt describes it very nicely <a href="https://www.troyhunt.com/its-a-new-blog/?ref=oncodedesign.com">here</a> and I just followed his steps. Indeed, it only took a few minutes to set up and leverage all the benefits. Besides the HTTPS and the potential performance gain, what I like most is the <a href="https://support.cloudflare.com/hc/en-us/articles/200170016-What-is-Email-Address-Obfuscation-?ref=oncodedesign.com">email obfuscation</a> feature.</p>
<hr>
<h3 id="whatimissfromwordpress">What I Miss from Wordpress</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>I am very happy with Ghost and I think it was a good call to leave Wordpress. However, there are a few features that I miss:</p>
<ol>
<li><strong>review functionality</strong></li>
</ol>
<p>I value a lot the reviews I get for my writing, any of my writing: code, articles, documents, e-mails etc. All the posts and pages I published on this blog were reviewed by at least one person. I am lucky that my girlfriend is also a developer; she reviews most of my posts.</p>
<p>Wordpress has a review functionality which I used for this. It is not much, but you could request a review by e-mail, which was convenient. Now, I either make an account on my blog for every person I want to ask for a review, or... send the article by email.</p>
<ol start="2">
<li><strong>social networks integration</strong></li>
</ol>
<p>I want to tweet and post on Facebook, LinkedIn, Google+ and the others when I publish a new post.</p>
<p>Wordpress has many plugins for social networks, which give this out-of-the-box. Now I am posting by hand, and I am planning to set up some <a href="https://ifttt.com/recipes?ref=oncodedesign.com">IFTTT recipes</a> to automate it.</p>
<p>I also like to have my tweets on the blog. I had this easily set up on Wordpress. I don't yet have an idea of how to put this on the new blog.</p>
<ol start="3">
<li><strong>comments</strong></li>
</ol>
<p>I love to get feedback or questions on my posts. Ghost does not have this out of the box. I plan to integrate <a href="https://www.ghostforbeginners.com/how-to-enable-comments-on-a-ghost-blog/?ref=oncodedesign.com">Disqus</a> for this. Until then please use my <a href="https://oncodedesign.com/content">contact info</a> when you want to send me feedback, questions, ideas or thoughts.</p>
<hr>
<h3 id="todos">TODOs</h3>
<p><em><a href="#sections">go up</a></em></p>
<p>At this point my new blog is ready. I can easily publish new posts and everything should work well. However, there are still a few items on the checklist of my <em>Migrate the blog at oncodedesign.com</em> Trello card:<br>
<img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/06/trello-checklist.png" alt="" loading="lazy"></p>
<p>None of these are critical; I think most are nice to have. I will try to do them when I find a few hours here and there. However, I would like to spend the little time I have for my blog writing new technical articles, so it may take a while until I move this card to the <em>Done</em> column.</p>
<hr>
<p>This was pretty much the work I did to redo my blog. I hope you like the result. I like it for sure, and except for the manual fixing of the content, I also enjoyed working on it and learning new things along the way.</p>
<p>If you find yourself doing such a migration and you get stuck, don't hesitate to <a href="https://oncodedesign.com/contact">ask me</a>; I might have gone through the same thing.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ My Code Design Training Reshaped ]]>
            </title>
            <description>
                <![CDATA[ It is already more than one year and a half since I started to give my Code Design training. I have developed this training as I say here, out of the desire to teach others the way I link the learnings from the best practices books to the code I ]]>
            </description>
            <link>https://oncodedesign.com/blog/my-code-design-training-reshaped/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76bab</guid>
            <category>
                <![CDATA[ training ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 18 Apr 2016 22:52:50 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/43146662_m.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>It is already more than one year and a half since I started to give my <a href="https://florincoros.wordpress.com/training/code-design-training?ref=oncodedesign.com">Code Design training</a>. I have developed this training as I say <a href="https://florincoros.wordpress.com/2014/06/26/the-code-design-training?ref=oncodedesign.com">here</a>, out of the desire to teach others the way I link the learnings from the best practices books to the code I write and how this helps to write a better code in general, a code that is inexpensive to change.</p>
<p>Over these months I've had the opportunity to give the training to various audiences in different companies. There were cases when the audience was made up of only really experienced programmers, cases with mixed levels of experience, and also with only juniors. Mainly they were .NET or Java developers, but there were also C++, ObjectiveC, JavaScript or Android devs among them. The training, even if it was the same material, had a different dynamic depending on the audience, and I enjoyed this a lot. My greatest satisfaction is when I see that &quot;Aha&quot; moment. It may be like &quot;<em>Aha, now I finally get how to apply that in my project</em>&quot;, or &quot;<em>Aha, I see now when to apply this technique or to stress this matter</em>&quot;, or &quot;<em>Ah... an interesting solution, I'll try it out</em>&quot; or &quot;<em>Aha, you've just confirmed that I'm not (the only one) crazy</em>&quot;. To my satisfaction, after each instance of the training I have received very good and positive feedback. This is also reflected by the <a href="https://florincoros.wordpress.com/training/code-design-training?ref=oncodedesign.com#Testimonials">testimonials</a> I collect on the training page.</p>
<p>Now is the time to act on the feedback I've collected and to try to improve the training even more. I have spent some time in the last few weeks reflecting and working on this. I wanted to target the most common improvement suggestions I've received. Things like: &quot;<em>More code would have been nice</em>&quot;, &quot;<em>More exercises might have helped</em>&quot;, or &quot;<em>More time on the Cross-Cutting Concerns rather than on Design Patterns</em>&quot;.</p>
<p>I think most of these came either because there were cases when we tried to cover too much material in the allocated time, or because in some cases it wasn't very clear what to expect from one lesson or another. When there is too much material planned, it always feels like there wasn't enough time for discussing code or exercising. Also, from a lesson titled &quot;<em>Design Patterns</em>&quot; it is quite clear what to expect, which may not be the case for &quot;<em>Separation of Concerns</em>&quot; or &quot;<em>From Principles and Patterns to Practices</em>&quot;. Sometimes it is a pity to allocate a full day to something classic like <em>Design Patterns</em> and maybe not enough to the others, when the others may bring more value, since they are practices that I have applied and collected from the projects I have done over the years.</p>
<p>Many of the above can be addressed through a deeper discussion when planning the training for a specific audience, but I think that restructuring the course helps even more, so here is what I did.</p>
<p>I have taken a few lessons out of the main course and spawned new courses from them. This gives more time to the main course and also makes the course descriptions more focused and precise, so it is easier to explain what to expect. The main course remains the four-day &quot;<a href="https://oncodedesign.com/training-code-design"><em>Code Design Practices</em></a>&quot;, but now there is much more time for coding and exercising. We can now start implementing a demo application from day one and exercise on it each day as we advance through the material. If we can plan the training over four weeks, one day each week, then we can have a very nice learning experience by giving people time to reflect between course sessions.</p>
<p>From the lessons I've taken out I now have the &quot;<a href="https://oncodedesign.com/training-design-patterns"><em>Design Patterns Explained</em></a>&quot;, the &quot;<a href="https://oncodedesign.com/training-solid-principles"><em>SOLID Principles Insights</em></a>&quot; and the &quot;<a href="https://oncodedesign.com/training-dependency-injection"><em>Managing Dependencies with Dependency Injection</em></a>&quot; courses. They are all one day long, which gives enough time for discussions, debates and exercises. <em>Dependency Injection</em> also remains a lesson in the main course, but because there are teams who can maybe spend only one day and would benefit a lot from understanding and using <em>Dependency Injection</em> and <em>Service Locator</em>, I also offer it as a stand-alone training.</p>
<p>I have updated the outline for all the courses, so if you go <a href="https://oncodedesign.com/training">here</a> you can see exactly what changed.</p>
<p>In the end I'd like to thank all who attended my courses and helped me improve them through the discussions we had, the suggestions they gave me, and the feedback and <a href="https://oncodedesign.com/training-code-design#Testimonials">testimonials</a> they submitted. Thank you!</p>
<h6 id="featuredimagecreditolegdudkovia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_olegdudko?ref=oncodedesign.com">olegdudko via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ A New Look! ]]>
            </title>
            <description>
                <![CDATA[ I&#39;ve been quiet for a while. I didn&#39;t publish a new post in a few weeks. It wasn&#39;t because I got lazy or because I ran out of ideas or experiences worth sharing. I was really busy.


As you probably know I&#39;ve ]]>
            </description>
            <link>https://oncodedesign.com/blog/a-new-look/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b94</guid>
            <category>
                <![CDATA[ ghost ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 11 Apr 2016 06:33:00 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/OnCodeDesign_logo4.png" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>I've been quiet for a while. I haven't published a new post in a few weeks. It wasn't because I got lazy or because I ran out of ideas or experiences worth sharing. I was really busy.</p>
<p>As you probably know, <a href="https://florincoros.wordpress.com/2015/08/04/ive-quit-my-job-hire-me/?ref=oncodedesign.com">I quit my job</a> last year and since then I have been working as an independent. It couldn't have gone better so far! I've gotten involved in interesting projects, I'm working with smart people and I enjoy it a lot. But... as an independent the workload may vary a LOT, especially at the beginning. So, I've been through some very busy weeks.</p>
<h2 id="theblog">The Blog</h2>
<p>One of the things I've been busy with was <a href="https://oncodedesign.com/">this</a>: giving my blog a new look. I like writing my ideas and my experiences here. I also find it imperative in our job to always stay up to date with technology, and part of that happens through reading blogs. So, I want to continue to give something back by writing about my work and sharing my ideas here. Besides this, I enjoy it a lot when I hear that some of my posts are mentioned and debated in various places. All this motivates me not only to continue writing, but also to invest in making the overall blog better.</p>
<p>The first thing I wanted to do was to give it a proper name and to move it off a free Wordpress account. I settled on <em>On Code Design</em> (oncodedesign.com was also available :) ). I wanted a name that sets the theme of the blog. I'll continue to focus on practical ideas rooted in real-life projects, about how to structure code to maximize the chances of success, so I think the new name fits this theme well. Until now, I've had a lot of influence from building software in large organizations, and now that I'm very much anchored in start-up challenges, this may be complemented by another perspective. In the future, I'd also like to have guest posts on code design from colleagues or friends who share the same values on software quality and good practices, and who can bring different views to this space.</p>
<p>The next thing, after getting rid of the ads that came with the free account, was to give it a more modern look. Because web design isn't my thing, I asked for the help of professionals. Together with my very good friends at <a href="http://www.dalimedia.ro/?ref=oncodedesign.com">Dali Media</a> we came up with this theme, and, as happens when designers get involved, with a logo :).</p>
<p>I've also decided to migrate from <a href="http://www.wordpress.org/?ref=oncodedesign.com">Wordpress</a> to <a href="http://www.ghost.org/?ref=oncodedesign.com">Ghost</a>. I didn't do an in-depth research or comparison, but after reading about some other tech bloggers' experiences, Ghost seemed a good alternative for getting away from Wordpress's issues and complexity. At first I wanted to go with <a href="https://ghost.org/?ref=oncodedesign.com">Ghost Pro</a> and take the SaaS advantages, but after I saw that their price grew from 8$ / month (when I first read about it on <a href="http://www.troyhunt.com/2015/10/creating-blog-for-your-non-techie.html?ref=oncodedesign.com">Troy Hunt's blog</a>) to <a href="http://www.ghost.org/pricing?ref=oncodedesign.com">26$ / month</a> in just a few months, I decided to host it myself on an Azure website.</p>
<p>...And, after some weeks of hard work, this is the end result. I hope you'll like it! (and that the Wordpress redirects from my old blog work well :) ).</p>
<h2 id="mira">MIRA</h2>
<p>Refreshing my blog wasn't the only thing that kept me from writing posts in the last weeks. Another big thing that happened was our (<a href="http://www.iquarc.com/?ref=oncodedesign.com">iQuarc</a>) partnership with <a href="http://www.mirarehab.com/?ref=oncodedesign.com">MIRA</a>. We worked with the MIRA co-founders for a few months last year, and we easily concluded that a partnership between MIRA and iQuarc would be highly beneficial for all parties, so it happened. Therefore, in the past weeks we have all worked hard to re-architect and implement their core product, preparing it for the cloud and for being sold at large scale. I think more writings inspired by this product will come in the future, because it has some nice technical challenges and I enjoy working with the guys at MIRA a lot.</p>
<h2 id="training">Training</h2>
<p>Another thing that kept me busy was restructuring my <a href="https://oncodedesign.com/training">Code Design training</a>. I wanted to put into it all the good feedback I've collected over the years. More about this in a new post. (I promise, not long from now :) )</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Concurrent Unit Tests with Service Locator ]]>
            </title>
            <description>
                <![CDATA[ My talk at Microsoft Summit created a nice discussion with some of the participants about writing isolated unit tests when using the Service Locator.


It started from the part where I was showing how the AppBoot helps in dependencies management by enforcing consistency on how the Dependency Injection is done. ]]>
            </description>
            <link>https://oncodedesign.com/blog/concurrent-unit-tests-with-service-locator/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b93</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Wed, 25 Nov 2015 10:42:06 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1453830337/concurrent-grab-suitcase.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p><a href="https://florincoros.wordpress.com/2015/11/09/speaker-at-microsoft-summit/?ref=oncodedesign.com">My talk at Microsoft Summit</a> created a nice discussion with some of the participants about writing isolated unit tests when using the <em>Service Locator</em>.</p>
<p>It started from the part where I was showing how the <a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">AppBoot</a> helps in dependencies management by enforcing consistency on how the <em>Dependency Injection</em> is done. With the AppBoot we make sure that <em>Dependency Injection</em> is used and that it is done only through the constructor. The question that started the discussion was:</p>
<blockquote>
<p>Does this mean that I will have constructor parameters for everything? Including utility services like <code>ILog</code>? If so, it means that I will pollute the constructors with these details and… things may get overcomplicated</p>
</blockquote>
<p>My answer was that for logging or other similar utilities we could create static helpers that make them easier to call. Such a helper would wrap the <code>ServiceLocator</code>, so we do not take a strong dependency on the logging implementation or library. Something like this:</p>
<pre><code class="language-language-csharp">public static class Logger  
{ 
	public static void Error(string headline, string message)  
	{  
	    ILogSrv log = ServiceLocator.Current.GetInstance&lt;ILogSrv&gt;();  
	    log.WriteTrace(new Trace(headline, message, Severity.Error));  
	}
	public static void Warning(string headline, string message)  
	{  
	    ILogSrv log = ServiceLocator.Current.GetInstance&lt;ILogSrv&gt;();  
	    …  
	}
	public static void Trace(string functionName, string message)  
	{  
	    ILogSrv log = ServiceLocator.Current.GetInstance&lt;ILogSrv&gt;();  
	    …  
	}
	public static void Debug(string message, object[] variables)  
	{   
	    ILogSrv log = ServiceLocator.Current.GetInstance&lt;ILogSrv&gt;();  
	    …  
	}  
}
</code></pre>
<p>This makes my class code depend on some static functions (<code>Logger.Error()</code>), but that seems a good compromise as long as the underlying implementation remains as simple as in the above snippet.</p>
<p>Now, if we are to write some unit tests in isolation, we would like to use a stub for the <code>ILogSrv</code> interface, and we can do that by making a setup like this:</p>
<pre><code class="language-language-csharp"> [TestClass]  
 public class UnitTests  
 {  
   private Mock&lt;IServiceLocator&gt; slStub;

   [TestInitialize]  
   public void TestInitialize()  
   {  
     slStub = new Mock&lt;IServiceLocator&gt;();  
     ServiceLocator.SetLocatorProvider(() =&gt; slStub.Object);  
   }

   [TestMethod]  
   public void PlaceNewOrder_FromPriorityCustomer_AddedOnTopOfTheQueue()  
   {  
      Mock&lt;ILogSrv&gt; dummyLog = new Mock&lt;ILogSrv&gt;();
      slStub.Setup(l =&gt; l.GetInstance&lt;ILogSrv&gt;()).Returns(dummyLog.Object); 
      …  
   }  
    …  
 }  
</code></pre>
<p>This code configures the <code>ServiceLocator.Current</code> to return an instance which gives the dummy <code>ILogSrv</code> when needed. Therefore, the production code will use a dummy <code>ILogSrv</code>, which probably does nothing on <code>WriteTrace()</code>.</p>
<p>For logging this may be just fine. It is unlikely that we would need different stub configurations for <code>ILogSrv</code> in different tests. However, things may not be as easy for other services that are resolved through <code>ServiceLocator.Current</code>. We might want different stubs for different test scenarios. Something like this:</p>
<pre><code class="language-language-csharp">// — production code —
public class UnderTest  
{  
  public bool IsOdd()  
  {  
     IService service = ServiceLocator.Current.GetInstance&lt;IService&gt;();  
     int number = service.Foo();  
     return number%2 == 1;  
  }  
}

// — test code —
private Mock&lt;IServiceLocator&gt; slStub = new Mock&lt;IServiceLocator&gt;();  
ServiceLocator.SetLocatorProvider(() =&gt; slStub.Object);

[TestMethod]  
public void IsOdd_ServiceReturns5_True()  
{  
  Mock&lt;IService&gt; stub = new Mock&lt;IService&gt;();  
  stub.Setup(m =&gt; m.Foo()).Returns(5);

  slStub.Setup(sl =&gt; sl.GetInstance&lt;IService&gt;()).Returns(stub.Object);  
  …  
}

[TestMethod]  
public void IsOdd_ServiceReturns4_False()  
{  
  Mock&lt;IService&gt; stub = new Mock&lt;IService&gt;();  
  stub.Setup(m =&gt; m.Foo()).Returns(4);

  slStub.Setup(sl =&gt; sl.GetInstance&lt;IService&gt;()).Returns(stub.Object);  
 …  
}
</code></pre>
<p>Because our production code depends on statics (it uses <code>ServiceLocator.Current</code> to get its instance), when these tests are run in parallel we will run into trouble. Think of the following scenario: <code>Test1</code> sets up <code>slStub</code> to return its own setup for the <code>IService</code> stub. Then, on a different thread, <code>Test2</code> overwrites this setup and runs. After that, when the code exercised by <code>Test1</code> gets the <code>IService</code> instance through the static <code>ServiceLocator.Current</code>, it will receive the <code>Test2</code> setup, hence the surprising failure.</p>
<p>By default <a href="https://msdn.microsoft.com/en-us/library/ms182489.aspx?ref=oncodedesign.com">MS Test</a> or <a href="https://msdn.microsoft.com/en-us/library/jj155800.aspx?ref=oncodedesign.com">VS Test</a> will run tests from different test classes in parallel, so if we have several test classes which do different setups using <code>ServiceLocator.SetLocatorProvider()</code>, we will run into the nasty situation where <u>sometimes</u> our tests fail on the CI server or on our machine.</p>
<p>So, what should we do?</p>
<p>One option is to avoid the dependencies on the statics and to get the service locator through constructor injection. This would change the above example as below:</p>
<pre><code class="language-language-csharp"> // — production code —

public class UnderTest  
{  
    private IServiceLocator sl;  
    public UnderTest()  
    {  
        sl = ServiceLocator.Current;  
    }

    public UnderTest(IServiceLocator serviceLocator)  
    {  
        this.sl = serviceLocator;  
    }

    public bool IsOdd()  
    {  
        IService service = sl.GetInstance&lt;IService&gt;();  
        int number = service.Foo();  
        return number%2 == 1;  
    }  
 }

// — test code —

[TestMethod]  
 public void IsOdd_ServiceReturns5_True()  
 {  
    Mock&lt;IService&gt; stub = new Mock&lt;IService&gt;();  
    stub.Setup(m =&gt; m.Foo()).Returns(5);

    Mock&lt;IServiceLocator&gt; slStub = new Mock&lt;IServiceLocator&gt;();  
    slStub.Setup(sl =&gt; sl.GetInstance&lt;IService&gt;()).Returns(stub.Object);

    var target = new UnderTest(slStub.Object);  
    …  
 }

[TestMethod]  
 public void IsOdd_ServiceReturns4_False()  
 {  
    Mock&lt;IService&gt; stub = new Mock&lt;IService&gt;();  
    stub.Setup(m =&gt; m.Foo()).Returns(4);
 
    Mock&lt;IServiceLocator&gt; slStub = new Mock&lt;IServiceLocator&gt;();  
    slStub.Setup(sl =&gt; sl.GetInstance&lt;IService&gt;()).Returns(stub.Object);
 
    var target = new UnderTest(slStub.Object);  
    …  
 }  
</code></pre>
<p>This is a good solution and I favour it in most cases. Sometimes, as in the above snippet, I add a parameterless constructor that is used in the production code and one which receives the <code>ServiceLocator</code> as a parameter for my unit test code.</p>
<p>The other option, which is the answer to the question at the start of the post, looks a bit more magical :). It fits the cases when we need and want to keep the simplicity the static caller brings. Here, we keep the production code as is and make the unit tests safe to run in parallel. We can do this by creating one stub of the <code>IServiceLocator</code> for each thread and storing it in a thread-static field. We can do it with a <em>ServiceLocatorDoubleStorage</em> class that wraps the thread-static field and gives the tests a clean way to set it up and access it.</p>
<pre><code class="language-language-csharp"> public static class ServiceLocatorDoubleStorage  
 {  
  [ThreadStatic]  
  private static IServiceLocator current;

   public static IServiceLocator Current  
   {  
      get { return current; }  
   }
  
   public static void SetInstance(IServiceLocator sl)  
   {  
      current = sl;  
   }
  
   public static void Cleanup()  
   {  
      SetInstance(null);  
   }  
}  
</code></pre>
<p>Now, the unit tests will use the <code>ServiceLocatorDoubleStorage.SetInstance()</code> instead of the <code>ServiceLocator.SetLocatorProvider()</code>. So the test code from the above sample transforms into:</p>
<pre><code class="language-language-csharp"> [TestClass]  
 public class UnitTest  
 {  
   [AssemblyInitialize]  
   public static void AssemblyInit(TestContext context)  
   {    
        // the production code will get it through  
        // ServiceLocator.Current, so this is needed  
        ServiceLocator.SetLocatorProvider(  
            () =&gt; ServiceLocatorDoubleStorage.Current);  
   }
  
   private Mock&lt;IServiceLocator&gt; slStub;
  
   [TestInitialize]  
   public void TestInitialize()  
   {  
        slStub = new Mock&lt;IServiceLocator&gt;();  
        ServiceLocatorDoubleStorage.SetInstance(slStub.Object);  
   }
  
   [TestMethod]  
   public void IsOdd_ServiceReturns5_True()  
   {  
        Mock&lt;IService&gt; stub = new Mock&lt;IService&gt;();  
        stub.Setup(m =&gt; m.Foo()).Returns(5);
        
        slStub.Setup(sl =&gt; sl.GetInstance&lt;IService&gt;()).Returns(stub.Object);  
        …  
   }  
   …  
 }  
</code></pre>
<p>With this, each time a new thread is used by the testing framework, the test on it will first set its own stub of the <code>ServiceLocator</code> and then run. This ensures that the <code>ServiceLocator</code> stubs, even though they are static resources, are not shared among tests on different threads. In the code samples from my <a href="https://florincoros.wordpress.com/training/code-design-training/?ref=oncodedesign.com">Code Design Training</a>, on github <a href="https://github.com/iQuarc/Code-Design-Training?ref=oncodedesign.com">here</a>, you can find a fully functional example that shows how this can be used and how it runs in parallel.</p>
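<p>The isolation relies entirely on the <code>[ThreadStatic]</code> attribute: each thread gets its own copy of the field. A minimal sketch of this behaviour (the demo names below are mine, for illustration only, not part of AppBoot):</p>
<pre><code class="language-language-csharp">using System;
using System.Threading;

public static class ThreadStaticDemo
{
    [ThreadStatic]
    private static string current;

    private static void WorkerBody()
    {
        // A new thread sees its own copy of the field, initially null:
        // the main thread's value does not leak in here
        Console.WriteLine(current == null);  // True
        current = "worker stub";             // visible only on this thread
    }

    public static void Run()
    {
        current = "main thread stub";

        var worker = new Thread(WorkerBody);
        worker.Start();
        worker.Join();

        // The worker's assignment did not overwrite this thread's value
        Console.WriteLine(current);          // main thread stub
    }
}
</code></pre>
<p>This is why one test thread overwriting its <code>IServiceLocator</code> stub cannot disturb the stub another test thread has already stored.</p>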
<p>To conclude, I would say that <em>Dependency Injection</em> and <em>Service Locator</em> should be used together. I strongly push towards using <em>Dependency Injection</em> in most cases because it makes the dependencies clearer and easier to manage, but there are definitely cases where <em>Service Locator</em> is needed or makes more sense. In both cases writing isolated unit tests should be easy, and it may be a good check of our design and dependencies.</p>
<h5 id="thisisdiscussedandexplainedindetailinmycodedesigntraining">This is discussed and explained in detail in my <a href="https://oncodedesign.com/training-code-design" title="Code Design Training">Code Design Training</a></h5>
<h6 id="featuredimagecreditnochevia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_noche?ref=oncodedesign.com">noche via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Speaker at Microsoft Summit ]]>
            </title>
            <description>
                <![CDATA[ Last week I had the opportunity to speak for the first time at the Microsoft Summit.


It was a nice and pleasant experience. I have talked about how we could achieve a high quality code design by enforcing consistency with the support of an Application Infrastructure in a large and ]]>
            </description>
            <link>https://oncodedesign.com/blog/speaker-at-microsoft-summit/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b8f</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 09 Nov 2015 10:50:08 +0200</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1453830342/ms-summit-bucharest.png" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>Last week I had the opportunity to speak for the first time at the <a href="https://mssummit.ro/?ref=oncodedesign.com">Microsoft Summit</a>.</p>
<p>It was a nice and pleasant experience. I talked about how we can achieve a high-quality code design by enforcing consistency with the support of an Application Infrastructure in a large and complex enterprise system.</p>
<p>I think I gave one of my best presentations. I placed my story in the context of re-architecting and re-implementing a legacy system using modern techniques and technologies. I focused my talk on the challenge of building, testing and deploying for On-Premises while at the same time being able to easily migrate to Azure and leverage the advantages of PaaS. The focus went to implementing a loosely coupled design, so that certain components can be replaced when migrating to the cloud.</p>
<p>At the end I had some very interesting questions, which turned into good suggestions for me too. One was about Dependency Injection vs Service Locator and how I would unit test when Service Locator is chosen. I will detail this in my next technical blog post. It's a good idea for a technical topic, so thanks!</p>
<p>Another question was whether I am thinking of making the design practices I presented available to a broader audience through other means than my Code Design training, like writing on MSDN for example. I will definitely put some time and thought into this. Besides articles on MSDN, making my course available on an online platform like <a href="https://pluralsight.com/?ref=oncodedesign.com">Pluralsight</a> would also be an idea worth investing in. Thanks for this suggestion as well!</p>
<p>The overall conference was a successful event in my opinion. I liked the mixture of business and technology. I also liked the area with the partners' stands, where you could see demos on how technology can optimise businesses. I was pleasantly surprised to see that our friends from the ConSix startup had a stand presenting their product.</p>
<p>All in all it was a nice week. Thank you Microsoft for inviting me! I'm looking forward to the next editions.</p>
<p>I have uploaded my slides <a href="http://www.slideshare.net/FlorinCoros/cloud-ready-design-through-application-software-infrastructure?ref=oncodedesign.com">here</a> if you'd like to take a look before the recordings are made available.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Crosscutting Concerns ]]>
            </title>
            <description>
                <![CDATA[ The Crosscutting Concerns are the areas in which high-impact mistakes are most often made when designing an application. There are common causes that lead to this and there are common practices that can help to avoid such mistakes. In this post I will talk about the way I usually address ]]>
            </description>
            <link>https://oncodedesign.com/blog/crosscutting-concerns/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b8d</guid>
            <category>
                <![CDATA[ abstraction ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 20 Oct 2015 11:42:13 +0300</pubDate>
            <media:content url="https://res.cloudinary.com/oncodedesign/image/upload/v1453830343/woven-fence.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>The Crosscutting Concerns are the areas in which high-impact mistakes are most often made when designing an application. There are common causes that lead to this, and there are common practices that can help avoid such mistakes. In this post I will talk about the way I usually address most of the crosscutting concerns at the start of a project, when there are usually many other, more urgent things to think about.</p>
<p>The crosscutting concerns are the features of the design that may apply across all layers. This functionality typically supports operations that affect the entire application. If we think of a layered architecture, components from several layers (<em>Presentation Layer</em>, <em>Business Layer</em> or <em>Data Access Layer</em>) usually use components that address a concern in a common way, or which ensure that a critical <em>non-functional requirement</em> is going to be met.</p>
<p>The crosscutting concerns are very much related to the <em>quality requirements</em> (also named non-functional requirements or <em>quality attributes</em>). In fact the crosscutting concerns should derive from these requirements. They are usually implemented in a separate area of the code and specifically address these requirements. For example:</p>
<ul>
<li><strong>Logging</strong> concern derives from <em>Diagnostics</em>, <em>Monitoring</em> or <em>Reliability</em> requirements</li>
<li><strong>Authentication and Authorization</strong> concern derives from <em>Security</em> requirements</li>
<li><strong>Caching</strong> concern derives from <em>Performance</em> requirements</li>
<li><strong>Error Handling</strong> concern derives from <em>Diagnostics, Availability</em> and <em>Robustness</em> requirements</li>
<li><strong>Localization</strong> concern derives from <em>Multi-language</em> and <em>Globalization</em> requirements</li>
<li><strong>Communication</strong> concern derives from <em>Scalability</em>, <em>Availability</em> or <em>Integration</em> requirements</li>
</ul>
<p>One of the most common mistakes with the crosscutting concerns is that we tend to neglect their importance at the start of the project, and we consider them poorly or not at all. Then, when we already have a ton of code written, somewhere in the second half of the project, for various reasons these quality requirements become pressing again. But now, because of all the code we have, the cost of addressing them in a consistent and robust manner becomes very high. To add consistent and useful logging, for example, would now mean going back through all the code and all the tested functionality and changing it to call a logging function. This may be costly. The same goes for <em>authorization</em>, <em>localization</em> and many others.</p>
<p>The challenge comes from the fact that the crosscutting components are like support code for the components that implement the functional requirements. Ideally, they would be done before we start developing functionality. But this is often not a good idea: at the start of the project it is good to start implementing the functionality, so we can get feedback and show progress, not to spend too much time on things that may be postponed.</p>
<p>The key is to address them at the very beginning, but not to implement them. We should just identify them and design only the most important aspects. We should postpone making any time-consuming decision. I'm not saying make uninformed decisions because of lack of time and change them later; I am saying design in a way that allows postponing these decisions.</p>
<p>Let's take <em>Logging</em> for example. We can easily define the logging interface by looking at the <em>monitoring</em> and <em>diagnostics</em> requirements and considering the complexity of the application. It will be something like this:</p>
<pre><code class="language-language-csharp">public static class Log  
{  
 	public static void Error(string headline, string message)  
 	{ }
 
 	public static void Warning(string headline, string message)  
 	{ }
 
 	public static void Trace(string functionName, string message)  
 	{ }
 
 	public static void Debug(string message, object[] variables)  
 	{ }  
}
</code></pre>
<p>At the start the functions can do nothing; they can be left unimplemented. Later, we can come back to this and invest time in deciding which logging framework we should use. We will think about the configurability of the logging, whether we should log to the database or not, whether we should send some logs by email, whether we should integrate with a monitoring tool and which one, at a later time. Until then, all the code that implements the functional use cases has support for logging, and if we define a clear and simple logging strategy then we will have meaningful calls to these functions.</p>
<p>Maybe in a few sprints we’ll come up with a simple implementation that writes logs to a text file, to help us with debugging. Later, we could integrate the framework we know best to write the logs to a CSV file, so we can easily filter and search. By the time we get near deployment to the different test environments, we will know more about the logging the system needs, and it will be easier to make good decisions on how to implement this concern. In all this time the code we are writing calls the logging interface, so we never need to go back and search for meaningful places to insert these calls.</p>
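<p>That first, throwaway implementation might look roughly like this (a sketch; the file name and the message format are arbitrary placeholder choices, not a recommendation):</p>
<pre><code class="language-csharp">using System;
using System.IO;

public static class Log
{
	// Placeholder location; a real implementation would read this from configuration.
	private static readonly string LogFile = "application.log";

	public static void Error(string headline, string message) =&gt;
		Write("ERROR", headline, message);

	public static void Warning(string headline, string message) =&gt;
		Write("WARN", headline, message);

	public static void Trace(string functionName, string message) =&gt;
		Write("TRACE", functionName, message);

	public static void Debug(string message, object[] variables) =&gt;
		Write("DEBUG", message, string.Join(", ", variables ?? new object[0]));

	private static void Write(string level, string headline, string message)
	{
		// Append a timestamped line; callers are unaffected when this body changes.
		File.AppendAllText(LogFile,
			$"{DateTime.UtcNow:O} [{level}] {headline}: {message}{Environment.NewLine}");
	}
}
</code></pre>
<p>Because every use case already calls <code>Log</code>, replacing this body with a real framework later requires no changes in the calling code.</p>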
<p>The same practice can be applied to all of the crosscutting concerns. I also showed it in a previous post, <em><a href="https://florincoros.wordpress.com/2015/07/28/localization-concern/?ref=oncodedesign.com">Localization Concern</a></em>, which covers the localization aspects in detail.</p>
<p>So, the idea is that at the start of the project we should abstract each component that addresses a crosscutting concern. We can do this by thinking about how the other layers will interact with it and defining the abstraction from their perspective. The other layers should depend on this abstraction only. We can ensure this by hiding (or encapsulating) the implementation of the crosscutting concern.<img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/logging.png" alt="" loading="lazy"> By encapsulating the implementation we gain the flexibility to change it, or to replace the frameworks we use, without a significant impact on the rest of the code. In most cases the dependencies will look like this diagram, where the <em>ApplicationComponents</em> do not know about the <em>Log4Net</em> library, which is just an implementation detail.</p>
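<p>Concretely, the hidden implementation might be an adapter that delegates to log4net, referenced only by the logging assembly (a hedged sketch assuming log4net’s standard <code>LogManager</code> / <code>ILog</code> API; the adapter name is invented for illustration):</p>
<pre><code class="language-csharp">using log4net; // this reference exists only here, not in ApplicationComponents

internal static class Log4NetAdapter
{
	private static readonly ILog Logger = LogManager.GetLogger("Application");

	public static void Error(string headline, string message) =&gt;
		Logger.Error($"{headline}: {message}");

	public static void Warning(string headline, string message) =&gt;
		Logger.Warn($"{headline}: {message}");
}
</code></pre>
<p>Swapping log4net for another framework would then touch only this adapter, never the application components.</p>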
<p>As a conclusion: following this approach, we can have the interfaces (abstractions) of the crosscutting-concern components very early in the project and have them called from all the use cases. With this we postpone the real implementation of these concerns and we limit the cost of adding it later. The time needed to define these abstractions is usually short, from a few hours to a few days depending on the size and complexity of the application, but it always pays off compared to tackling these concerns for the first time in the second half of the project.</p>
<p></p>
<h5 id="thistopicisdiscussedinmoredetailinmycodedesigntraining">This topic is discussed in more detail in my <a href="https://oncodedesign.com/training-code-design">Code Design Training</a></h5>
<h6 id="featuredimagecreditnochevia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_noche?ref=oncodedesign.com">noche via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ DRY vs Coupling ]]>
            </title>
            <description>
                <![CDATA[ While reviewing my previous post another great discussion, which may arise from paying attention to your references, came to my mind: Don’t Repeat Yourself (DRY) vs Coupling. Each time you add a new reference it means that you want to call the code from the other assembly, therefore you ]]>
            </description>
            <link>https://oncodedesign.com/blog/dry-vs-coupling/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b8e</guid>
            <category>
                <![CDATA[ abstraction ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Wed, 30 Sep 2015 11:28:25 +0300</pubDate>
            <media:content url="http://res.cloudinary.com/oncodedesign/image/upload/v1453830347/1282145_s_ouiahv.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>While reviewing my <a href="https://florincoros.wordpress.com/2015/09/08/using-resharper-for-assembly-references-diagrams/?ref=oncodedesign.com">previous post</a> another great discussion, which may arise from paying attention to your references, came to my mind: <em>Don’t Repeat Yourself (DRY) vs Coupling</em>. Each time you add a new reference it means that you want to call the code from the other assembly, therefore you are coupling the two assemblies.</p>
<p>We have always been told that we should not tolerate duplication. We should always DRY our code. We have even become obsessed with it: each time we see a few lines of code that resemble one another, we immediately extract them into a routine and call it from both places. What we often don’t realize is that we’ve just coupled those two places. DRY comes at a cost, and that cost is coupling.</p>
<p>A loosely coupled design is another thing we aim for. Loose coupling means that the elements of our code are better isolated from each other and from change. In general, the looser the coupling, the better the design, because it accommodates change more easily. And change is certain!</p>
<p>On the other hand, duplication is of course bad. It is one of the primary enemies of a well-designed system. It usually adds additional work, additional risk and unnecessary complexity. It is even more problematic when the code is copied &amp; pasted and then slightly changed. The code is no longer identical, but has teeny-tiny, hard-to-spot variations, like an <em>equals</em> (<code>==</code>) operator changed to a <em>differs</em> (<code>!=</code>), or a <em>greater or equals</em> (<code>&gt;=</code>) changed to a <em>strict greater</em> (<code>&gt;</code>). The code is not identical, and the commonality is not abstracted. In such a context, abstracting the commonality into an interface and making all the callers depend on the abstraction, not on the implementation (which should be well encapsulated / hidden), is the key to a good design. Here we couple the callers together, we pay the coupling cost, but with clear benefits.</p>
<p><strong>Don’t tolerate coupling without benefits!</strong> Don’t pay a cost if it doesn’t bring you something back. If by DRYing things out you don’t make them simpler but rather more complex, then something may be wrong in your design. In this example, if we have created a correct abstraction, one that truly represents the commonality, then changes to its implementation should not trigger changes in its callers, and changes in the callers will not trigger changes in the implementation.</p>
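<p>To make the tradeoff concrete, here is a hypothetical sketch (the names are invented for illustration): callers share a rule through an abstraction, so each depends on the contract rather than on duplicated code:</p>
<pre><code class="language-csharp">// The abstracted commonality; all callers depend on this contract only.
public interface IDiscountRule
{
	bool Qualifies(decimal orderTotal);
}

// The encapsulated implementation. Changing the threshold or the comparison
// (e.g. &gt;= vs &gt;) now happens in exactly one place.
internal sealed class MinimumTotalDiscountRule : IDiscountRule
{
	public bool Qualifies(decimal orderTotal) =&gt; orderTotal &gt;= 100m;
}
</code></pre>
<p>If a change to this rule keeps forcing changes in its callers, the abstraction probably does not capture a true commonality, and the coupling may not be worth its cost.</p>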
<p>Coming back to my <a href="https://florincoros.wordpress.com/2015/09/08/using-resharper-for-assembly-references-diagrams/?ref=oncodedesign.com">previous post</a> where I’ve talked about the benefits that come from monitoring our references, which is just another way of managing the dependencies in our code, we can have this discussion on the example there, too.</p>
<p>The application there is well isolated from the frameworks it uses. It has several types of UI: one was a WPF desktop app and one was a console app; it may have had a web UI as well. Another thing I emphasised there was that we might have rules that say which references are allowed and which aren’t. Here is the diagram of the references:</p>
<p><a href="http://res.cloudinary.com/oncodedesign/image/upload/v1453830364/dependenciesgraph1_ivsox8.png?ref=oncodedesign.com"><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830364/dependenciesgraph1_ivsox8.png?w=640" alt="DependenciesGraph1" loading="lazy"></a></p>
<p>When we are working on the <em>WpfApplication</em> we might find ourselves rewriting some code we had already written in the <em>ConsoleApplication</em>. The first thing that comes to mind is to reference the assembly and reuse the code. But we can’t. It’s against the rules, because we want the different UIs to be independent. Making references among them would mean that the WPF UI needs the console UI or, even more strangely, that the web UI would need the desktop UI. So, we are left with two options:</p>
<ol>
<li>duplicate the code in both assemblies</li>
<li>create a new assembly (<em>CommonUI)</em> and put the code there</li>
</ol>
<p>Option 2 reduces duplication, but it still creates coupling. Now all the UI assemblies will reference this common assembly, and if it is not well abstracted we may end up with indirect dependencies among the UIs: a change in the WPF UI may trigger a change in the common assembly, which in turn triggers a change in the console or the web UI. Tricky! We should check whether it pays off. If it is just about some helpers, it might be better to tolerate the duplication and not increase the coupling. On the other hand, if it is something that has to be presented consistently in all the UIs of our application, then abstracting and encapsulating it in a common assembly may make our design better.</p>
<p>This is also a good example of why we should DRY as much as possible within a bounded context, but should not DRY across different contexts, because we would couple them together with little benefit.</p>
<p>In the end it is again about making the correct tradeoffs and realising that each time we make a decision we are also paying a cost. <a href="http://dannorth.net/blog/?ref=oncodedesign.com">Dan North</a> puts this very nicely in a talk that I like very much, called <em><a href="http://www.infoq.com/presentations/Decisions-Decisions?ref=oncodedesign.com">Decisions, Decisions</a></em>.</p>
<p></p>
<h5 id="thisisdiscussedinmoredetailinmycodedesigntrainingwhentalkingaboutprogrammingprinciples">This is discussed in more detail in my <a href="https://oncodedesign.com/training-code-design">Code Design Training</a>, when talking about programming principles.</h5>
<h6 id="featuredimagecreditbond138via123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_bond138?ref=oncodedesign.com">bond138 via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Using ReSharper for Assembly References Diagrams ]]>
            </title>
            <description>
                <![CDATA[ A few posts back I talked about how we can use the assembly references to enforce consistency and separation of concerns (here and here are the old posts). I argue there that if we derive from the architecture the assemblies of our application, the kind of code (responsibility) each one ]]>
            </description>
            <link>https://oncodedesign.com/blog/using-resharper-for-assembly-references-diagrams/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b8c</guid>
            <category>
                <![CDATA[ abstraction ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 08 Sep 2015 10:47:09 +0300</pubDate>
            <media:content url="http://res.cloudinary.com/oncodedesign/image/upload/v1453830351/20574249_s1-e1441691157295_ido69p.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>A few posts back I talked about how we can use assembly references to enforce consistency and separation of concerns (<a href="https://oncodedesign.com/enforce-consistency-with-assembly-references/">here</a> and <a href="https://oncodedesign.com/dependency-inversion-and-assemblies-references/">here</a> are the old posts). I argued there that if we derive from the architecture the assemblies of our application, the kind of code (responsibility) each one holds, and how they may reference each other, then monitoring this becomes a useful tool to ensure that the critical architectural aspects are followed by the implementation.</p>
<p>In this post I will show how I verify the assembly references and the tool that I use. It is usually the first thing I do when I start a review on a new project. I find it an efficient way to spot code design smells, <s>bad</s> questionable design decisions or implementation “shortcuts”, which may hurt the project really badly in the long run.</p>
<p>Let’s dive into details. My favorite tool for this is <a href="https://www.jetbrains.com/resharper/?ref=oncodedesign.com">ReSharper</a>, more precisely the <em><a href="https://www.jetbrains.com/resharper/features/project_level.html?ref=oncodedesign.com">Project Dependency Diagram</a></em>. It can be generated from the <em>ReSharper | Architecture</em> menu. What I like most about it is that in ReSharper 9 it is augmented with the <em><a href="http://blog.jetbrains.com/dotnet/2015/01/28/type-dependency-diagrams-resharper-9/?ref=oncodedesign.com">Type Dependency Diagram</a></em>, so you can easily drill down into any reference to see the dependencies among the classes, and even further to the lines of code that make that reference needed. Now, when you see a reference that shouldn’t be there, you can easily find the code that created it and reason about it. (In ReSharper 8, I was using the <em>Optimize References</em> view to drill down to the lines of code that made a reference needed.)</p>
<p>I’m not going to explain in detail how the tool works; you can read about it on the <a href="http://blog.jetbrains.com/dotnet/2015/01/28/type-dependency-diagrams-resharper-9/?ref=oncodedesign.com">ReSharper blog</a>. Let’s look at an example instead.</p>
<p><a href="http://res.cloudinary.com/oncodedesign/image/upload/v1453830364/dependenciesgraph1_ivsox8.png?ref=oncodedesign.com"><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830364/dependenciesgraph1_ivsox8.png?w=640" alt="DependenciesGraph1" loading="lazy"></a>Here I have generated the reference diagram for a demo project from my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a> (code available on GitHub <a href="https://github.com/iQuarc/Code-Design-Training/tree/Blog-UsingResharperForAssemblyReferencesDiagrams/AppInfraDemo?ref=oncodedesign.com">here</a>). A few things to notice as architectural principles to follow for this project:</p>
<ul>
<li>The application has several UI clients (a WPF and a console one for now). They cannot have dependencies (references) on one another, because we want them independent.</li>
<li>The application consists of several modules. Each module has its own assemblies, and they cannot have dependencies (references) on each other. The modules interact only through the <em>Contracts</em> assembly, which has only pure interfaces (service contracts) and DTOs (data contracts). - This applies the <a href="http://docs.google.com/a/cleancoder.com/viewer?a=v&pid=explorer&chrome=true&srcid=0BwhCYaYDn8EgMjdlMWIzNGUtZTQ0NC00ZjQ5LTkwYzQtZjRhMDRlNTQ3ZGMz&hl=en&ref=oncodedesign.com">DIP</a> (see <a href="https://oncodedesign.com/dependency-inversion-and-assemblies-references/">here</a>) and ensures that the implementation of the modules is encapsulated and abstracted through the types in the <em>Contracts</em> assembly.</li>
<li>The UI gets the functionality implemented by the modules only through the contracts. The UI cannot have direct references to the implementation of the modules, nor to the <em>Data Access</em>. This enforces that the application logic does not depend on the UI, but the other way around (again applying DIP).</li>
</ul>
<p>Any new reference that does not obey the above architectural principles will easily be spotted when we regenerate the diagram from code.</p>
<p>If we go into more detail we can see other development patterns.</p>
<p><a href="http://res.cloudinary.com/oncodedesign/image/upload/v1453830360/dependenciesgraph2_jeeuwy.png?ref=oncodedesign.com"><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830360/dependenciesgraph2_jeeuwy.png?w=640" alt="DependenciesGraph2" loading="lazy"></a></p>
<p>Each module has a <code>.Services</code> assembly which implements or consumes <code>Contracts</code>. The module assemblies may reference and use the <code>DataAccess</code> or the <code>Common</code> assembly from the <code>Infrastructure</code>. These are not necessarily rules as strict as the above, but more like conventions which create a consistency on how a module is structured. The reference diagram can help a lot to see how these evolve.</p>
<p>Look again at the diagrams. Which reference do you think is strange and might be wrong? The <code>Sales</code> module references the <code>DataAccess</code>. This is fine. It needs to use <code>IRepository</code> / <code>IUnitOfWork</code> to access data. But one of the <code>Sales</code> module assemblies is referenced back by the <code>DataAccess</code>. This is wrong. We would want the <code>Infrastructure</code> assemblies not to be affected when the implementation of any module changes, because if they are, their change may trigger changes in the other modules as well. We would get a wave of changes that starts in one module and propagates to the others. If you look at the first diagram, this reference appears to create a circular dependency, which is an even stronger smell that something is wrong. If we right-click the reference we can open the <em>Type Dependency Diagram</em>.</p>
<p><a href="http://res.cloudinary.com/oncodedesign/image/upload/v1453830353/dependenciesgraph33_ypfrl0.png?ref=oncodedesign.com"><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830353/dependenciesgraph33_ypfrl0.png?w=640" alt="DependenciesGraph33" loading="lazy"></a></p>
<p>Here we see that <code>SalesEntities</code>, from the <code>DataAccess</code>, is the class that created this reference. If I hold the cursor on the dependency arrow I get all the classes it depends on. This class is the <em>Entity Framework</em> <code>DbContext</code> for the <code>Sales</code> module. It should not be here, but the <code>DataAccess</code> needed it to new it up. (In fact, this is a TODO that I have postponed for a while in my demo project.) To fix this ‘wrong’ reference we have to invert the dependency, so we should create an abstraction: we can define an <code>IDbContextFactory</code> interface in the <code>DataAccess</code>, move <code>SalesEntities</code> into one of the <code>Sales</code> module assemblies, and implement the factory interface there to new it up.</p>
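<p>The inverted dependency could be sketched roughly like this (the names follow the post; the exact signatures are my assumptions, not the demo project’s actual code):</p>
<pre><code class="language-csharp">// In the DataAccess assembly: an abstraction over creating the module's context.
public interface IDbContextFactory
{
	System.Data.Entity.DbContext Create();
}

// In a Sales module assembly: SalesEntities moves here, next to its domain,
// and DataAccess now depends only on the factory abstraction above.
public class SalesDbContextFactory : IDbContextFactory
{
	public System.Data.Entity.DbContext Create() =&gt; new SalesEntities();
}
</code></pre>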
<p>This is a good example of how this tool can help us find code at the wrong level of abstraction by spotting wrong dependencies. <code>SalesEntities</code> is a high-level concept class: it describes the domain of the <code>Sales</code> module. It was wrongly placed into a lower-level mechanism that implements data access.</p>
<p>If you can spread this code review practice among your team, more benefits will follow. Each time a new reference or a new assembly appears, the team will challenge whether it fits the rules and the reasoning behind those rules. This gets you towards contextual consistency: <em>this is how we build and structure things in this project. It may not apply to other contexts, but it makes sense in ours and we know why.</em> Consistency is the primary mechanism for simplicity at scale. Being able to review dependencies quickly, together with shared idioms and guiding principles, helps create and sustain consistency. Once you have consistency, <strong>difference is data</strong>! There has to be a good reason why things that break consistency in a particular spot are tolerated.</p>
<p>To conclude, the tool we use to generate the references diagram is not the most important thing here. I like ReSharper, but you can get almost the same with the architecture tools from Visual Studio Enterprise / Ultimate. What is important is to use a tool that can generate useful dependency diagrams from the code and to monitor them constantly. The entire team should be doing this. By reviewing these dependencies regularly you make sure that the critical architectural principles and requirements are still met.</p>
<p></p>
<h6 id="featuredimagecreditvskavia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_vska?ref=oncodedesign.com">vska via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ I&#x27;ve quit my job. Hire me! ]]>
            </title>
            <description>
                <![CDATA[ Last Friday marked an important milestone for me – it was my last day working for ISDC. After 10 years with ISDC I have decided to put an end to my job there. I think it is a good moment in my life and in my career to try something else, ]]>
            </description>
            <link>https://oncodedesign.com/blog/ive-quit-my-job-hire-me/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b8b</guid>
            <category>
                <![CDATA[ hire ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 04 Aug 2015 10:47:46 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/img_9842_pifa9g.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>Last Friday marked an important milestone for me – it was my last day working for <a href="http://www.isdc.eu/?ref=oncodedesign.com">ISDC</a>. After 10 years with ISDC I have decided to put an end to my job there. I think it is a good moment in my life and in my career to try something else, to try to work as an independent programmer / software architect. For those that are interested, I’d like to tell you what I’ve been doing in recent years and give a hint of what will come next.</p>
<p><strong>ISDC</strong></p>
<p>I started in the .NET Department as a junior developer and I left as a software architect. A long way. I had the opportunity to work with most of the Microsoft technologies and to switch projects and contexts quite often. This helped me a lot in learning and expanding my experience. I also found in ISDC great people to work with and to learn from. I found the right models and good mentors. I am grateful for all that. I have also given a lot back. I pushed for doing things the right way, close to the highest level of the industry standards. I pushed for quality and had an important contribution in raising the technical quality delivered by the .NET teams. I remember being characterised as <em>the quality guy that not only talks the talk, but actually walks the walk.</em> I was the one who introduced Unit Testing in the .NET teams and worked hard to make it part of the development process and a common practice in all of the teams. I was also one of the key members in some of the most difficult and important projects we had.</p>
<p>In the last years I have focused on starting projects. This starts with envisioning the technical solution that fits the requirements, and continues with working closely with the project management to build the right team and to define and implement the strategy that can lead to reaching the project goals in budget and time. In the beginning I was also leading the development of the application infrastructure that sets the project on the right path from the technical perspective. I think this defines quite well my software architect role for a project at ISDC. It is not the same as the architect in a product company or as an Enterprise Architect; it focuses on one project. It involves making difficult decisions, making tradeoffs and explaining to all the stakeholders the consequences of their choices. It’s an experience worth having.</p>
<p><strong>The Future</strong></p>
<p>So where to next? I like this graphic from <a href="http://www.amazon.com/gp/product/1118877241?ref=oncodedesign.com">The Future of Work by Jacob Morgan</a>. It shows nicely how I’ll try to work next. I find myself standing right where the half-grey half-green guy is (though ISDC did some of the things better than this illustrates).</p>
<p><a href="http://res.cloudinary.com/oncodedesign/image/upload/v1453830376/the_evolution_of_the_employee16_tdfvmz.jpg?ref=oncodedesign.com"><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830376/the_evolution_of_the_employee16_tdfvmz.jpg" alt="The_evolution_of_the_employee16" loading="lazy"></a></p>
<p>As an independent I would like to use my experience to help various teams or companies start projects on the right track, or to get out of a nasty situation related to software development. I think I can bring a lot of value by coming in, working with the team to set things on the right path, making sure the team can take it over and be independent of me, and then gradually stepping out. I don’t see myself staying for a very long time on one project. I may stay close and help whenever something new comes up, but if I do my work well the team should do fine without me after a while. Sometimes my involvement may be only to give training or coaching on certain topics or technologies.</p>
<p>Besides my previous experience, there are two more things that make me believe I can do this. The first is that I’m not entirely new to it. Since 2013 I have worked only part time. This year I had a three-days-a-week job, so I had two days to work as an independent consultant. I’ve already helped teams in different companies with training, coaching or starting new projects.</p>
<p>The other thing is that I’m not alone. I am part of <a href="http://www.iquarc.com/?ref=oncodedesign.com">iQuarc</a>. This gives me a great deal of assurance. I know I will always find the kind of support, advice or help that I need from my colleagues. At the core we all have the same level of expertise, and at the same time we have complementary skills. We share the same values, principles and passion. Together we make a great team and we can successfully respond to many kinds of requests.</p>
<p>So, what will change? Now, I have all the time for this. I’m all in. I’m available for hire. Here’s a brief summary of what I can do for you:</p>
<ul>
<li><em>Custom software development.</em> I love writing code, so I’d also like writing code for you. I can take small or big tasks. It doesn’t really matter. I can work in a team or by myself, remote or on-site, as necessary. My experience is with C# and related technologies / frameworks.</li>
<li><em>Software architecture.</em> I have experience in designing small and large applications. I can design the entire project, not only the technical part. This may include a complete strategy from requirements to deployment with needed staff for each phase of the project.</li>
<li><em>Reviews.</em> If you need an external party to review your code or your design, I’ll be happy to do so. I can do code reviews at different levels, from looking at the big picture, at the way the code is structured and the way dependencies go, down to lower-level details of how classes and functions are written or tested. When reviewing code or design I can focus on specific quality requirements like security, performance, maintainability, scalability or others that may be of interest to you, or I can do a more general check.</li>
<li><em>Training and coaching.</em> In the past years I have developed and given two standard trainings: <em><a href="https://florincoros.wordpress.com/training/code-design-training/?ref=oncodedesign.com">Code Design</a></em> and <em><a href="https://florincoros.wordpress.com/training/unit-testing-training/?ref=oncodedesign.com">Unit Testing</a></em>, in which I address a wide range of subjects about coding. I can visit your company to deliver lectures and workshops. Besides these topics I could easily spin out workshops on others, like estimations or time management, depending on your needs. I am also a strong believer in learning on the job, so I could join your team only to coach you on a specific technique or a specific issue, working on your project. We can also pair while doing so.</li>
<li><em>Development process.</em> Along the years I have experienced different ways of organising software development teams. If you need help with this I can do so. We can see together whether Scrum fits your context or not and how to tweak it. I can also help with Continuous Integration, Continuous Delivery, TFS, Git etc.</li>
<li><em>Round table.</em> Sometimes people simply want to have a meeting with someone to validate certain topics or ideas. I’m happy to visit you for a meeting with you and your team where we can discuss your questions, sketch together on a whiteboard, look at code, etc. in an ad hoc fashion.</li>
</ul>
<p>This list isn’t exhaustive, so if you have other ideas for how you think I may be able to help you, please <a href="https://oncodedesign.com/contact">contact me</a>.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Localization Concern ]]>
            </title>
            <description>
                <![CDATA[ Localization (also known as internationalization) is one of the concerns that is most of the times overlooked when we design an application. We almost never find it through the requirements, and if we do or if we ask about it, we usually postpone thinking about it and we underestimate the ]]>
            </description>
            <link>https://oncodedesign.com/blog/localization-concern/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b90</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 28 Jul 2015 10:42:07 +0300</pubDate>
            <media:content url="http://res.cloudinary.com/oncodedesign/image/upload/v1453830377/37715689_s_ban0vw.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>Localization (often mentioned together with internationalization) is one of the concerns that is most often overlooked when we design an application. We almost never find it in the requirements, and if we do, or if we ask about it, we usually postpone thinking about it and underestimate the effort of adding it later. In this post I will summarize a few key aspects which take little time to consider in our design and can save a lot of effort in the long term.</p>
<p><strong>Localization Service</strong></p>
<p>One of the first things that I do is to define, what I call the <em>Localization Service</em>. It is nothing more than a simple interface, with one or two simple methods:</p>
<pre><code class="language-csharp">public interface ILocalizationService  
{  
	string GetText(string localizationKey);  
	string Format&lt;T&gt;(T value);  
}  
</code></pre>
<p>Notice that the functions do not take the language or culture code as input parameters. The implementation takes care of reading them from the current user’s context, so the interface stays simple.</p>
<p>At the beginning of the project I don’t do more than this. I just put in a trivial implementation that doesn’t do much and postpone the rest of the decisions. Now, when screens are built, we can just call this interface, and later we’ll provide a real implementation of it. We already have a big gain: when we build the localization we won’t need to go through all the screens and modify them to translate the texts, because the localization service is called from the beginning.</p>
<p>To make it a simple call, we can have a static wrapper:</p>
<pre><code class="language-language-csharp"> public static class Localizer  
 {  
 	public static string GetText(string localizationKey)  
 	{  
 		var s = ServiceLocator.Current.GetInstance&lt;ILocalizationService&gt;();  
 		return s.GetText(localizationKey);  
 	}

 	// same for Format&lt;T&gt;(..)  
 }  
</code></pre>
<p>We can decide later whether the translations are stored in resource files, in the database or somewhere else. For now we can just pick the quickest implementation (hardcoded in a dictionary, maybe) and move on. We can change it later without modifying the existing screens.</p>
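<p>To make this concrete, here is a minimal sketch of such a throwaway implementation (the class name and sample keys are illustrative, not prescribed): an in-memory dictionary that falls back to returning the key itself, so missing translations are easy to spot on the screens:</p>
<pre><code class="language-language-csharp"> // Illustrative sketch of a quick first implementation  
 public class HardcodedLocalizationService : ILocalizationService  
 {  
 	private static readonly Dictionary&lt;string, string&gt; texts = new Dictionary&lt;string, string&gt;  
 	{  
 		{ &quot;Person.Button.ChangeAddress&quot;, &quot;Change address&quot; },  
 		{ &quot;ErrorMessage.UnknownError&quot;, &quot;An unexpected error occurred.&quot; }  
 	};

 	public string GetText(string localizationKey)  
 	{  
 		string text;  
 		// returning the key itself makes untranslated texts visible on screen  
 		return texts.TryGetValue(localizationKey, out text) ? text : localizationKey;  
 	}

 	public string Format&lt;T&gt;(T value)  
 	{  
 		return string.Format(CultureInfo.CurrentUICulture, &quot;{0}&quot;, value);  
 	}  
 }  
</code></pre>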
<p><strong>Localization Key</strong></p>
<p>The next thing to consider is a set of conventions for building the localization keys. We are going to have many texts, and it will make a big difference to have consistent and meaningful keys rather than randomly written strings.</p>
<p>To do this I usually try to define some categories for the translated strings. Then for each category we can define conventions or patterns for how we will create the keys. In most applications we’ll have something similar to the following:</p>
<ul>
<li>Labels on UI elements - these are specific texts that appear on different screens. Things like buttons, menus, options, labels, etc
<ul>
<li>Pattern:  <code>&lt;EntityName&gt;.&lt;ControlType&gt;.&lt;LabelKey&gt;</code></li>
<li>Example: <code>Person.Button.ChangeAddress</code></li>
</ul>
</li>
<li>Specific messages or texts - these are texts that are specific to a functionality or a screen
<ul>
<li>Pattern: <code>&lt;MessageType&gt;.&lt;Functionality&gt;.&lt;MessageKey&gt;</code></li>
<li>Example: <code>Message.ManagePersons.ConfirmEditAddress</code></li>
</ul>
</li>
<li>Standard (or generic) labels or messages - these are texts that appear on different screens of the application
<ul>
<li>Pattern: <code>&lt;MessageType&gt;.&lt;MessageKey&gt;</code></li>
<li>Example: <code>ErrorMessage.UnknownError</code></li>
</ul>
</li>
<li>Metadata - these are names of business entities or their properties that need to be displayed. Usually these are column names in list screens or labels in edit screens
<ul>
<li>Pattern: <code>&lt;EntityType&gt;.&lt;Property&gt;</code></li>
<li>Example: <code>Person.Name</code></li>
</ul>
</li>
</ul>
<p>With such categories and conventions in place, we get many benefits in debugging, managing translations and even in translating texts.</p>
<p>If the application screens are built from templates (for example, all the list or edit screens are similar and are built around one business entity), we could later go even further and write generic code which builds the localization key based on the type of the screen and the type of the entity, and automatically calls the localization service. For example, in a Razor view, we could write an HTML helper like:</p>
<pre><code class="language-language-csharp">// usage  
 @Html.LabelForEx(m =&gt; m.Subject);

// implementation  
 public static MvcHtmlString LabelForEx&lt;TModel, TValue&gt;(this HtmlHelper&lt;TModel&gt; html, Expression&lt;Func&lt;TModel, TValue&gt;&gt; expression)  
 {  
 	string entityName = ModelMetadata.GetEntityName(expression, html.ViewData);  
 	string propName = ModelMetadata.GetPropName(expression, html.ViewData);
 
 	string localizationKey = $&quot;{entityName}.{propName}&quot;;  
 	string text = Localizer.GetText(localizationKey);
 
 	return html.LabelFor(expression, text);  
 }  
</code></pre>
<p>I think that these two, the <em>Localization Service Interface</em> and the <em>Conventions for the Localization Keys</em>, are the aspects that should be addressed by the design at the beginning of the project. Next I will go through two other important aspects of localization: <em>Managing Translations</em> and <em>Data Localization</em>.</p>
<p><strong>Managing Translations</strong></p>
<p>One aspect that is usually ignored when designing for localization is the process of entering and managing the translated texts in different languages: translating the application.</p>
<p>This process can be difficult if the person doing the translation does not have the context of the text she is translating. A word-by-word translation in a table usually does not work well. It can be even more difficult if she does not get fast feedback on how the changes look in the application screens. Emailing the translations to the developers and waiting for a new release can be very annoying. This difficult process can be even more costly if it was postponed until the last moment and happens a few weeks before the release into production, when usually there are many other urgent things.</p>
<p>The conventions for the localization keys can play an important role here. They give some context, and if the person translating the application can upload the translations into the app and get fast feedback, that is usually good enough. This means that we need to design and implement some functionality to upload and show the translated texts, to avoid a painful process.</p>
<p>Another approach that works well is to implement translation functionality inside the application itself. For a web app, the translator accesses the application in “translate mode” and, when she hovers the mouse over a text, a floating div with an input is shown where she can enter the translated text. The text is saved into the database and the page is reloaded with the translation in it.</p>
<p>Even if this sounds difficult to implement, it is not, and for an application that has a large variety of texts to display and needs to be translated into many languages, it is worth the effort and makes translation changes easy.</p>
<p><strong>Data Localization</strong></p>
<p>Data Localization is about keeping some of the properties of the business entities in multiple languages. Imagine that your e-commerce app gets used in France and it would be better to have a translation for the name and the description of your products. For instance, for the <em>mouse</em> product you will need to store its name in French: <em>souris d’ordinateur</em>.</p>
<p>One solution is to create a translation table for each table that has columns which should be kept in multiple languages. This allows us to add more languages over time.</p>
<p><a href="http://res.cloudinary.com/oncodedesign/image/upload/v1453830379/datalocalization_n54rel.png?ref=oncodedesign.com"><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830379/datalocalization_n54rel.png" alt="DataLocalization" loading="lazy"></a></p>
<p>The columns of the <code>Products</code> table keep the data in the default language (or language agnostic) and the <code>Products_Trans</code> table keeps the translations in a specific language. Here we’ll have only the columns that need translations: <code>Name</code> and <code>Description</code>.</p>
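<p>To make the join semantics concrete, this is roughly the query shape we want to end up with for a given language (a hedged LINQ sketch; the property names such as <code>ProductId</code> are assumptions for illustration): a left join to the translation table with a fallback to the default language:</p>
<pre><code class="language-language-csharp"> // Illustrative sketch: the target query shape for French (&quot;fr&quot;)  
 var localizedProducts =  
 	from p in context.Products  
 	join t in context.Products_Trans.Where(x =&gt; x.LanguageCode == &quot;fr&quot;)  
 		on p.Id equals t.ProductId into translations  
 	from t in translations.DefaultIfEmpty()  
 	select new  
 	{  
 		p.Id,  
 		Name = t != null ? t.Name : p.Name,                      // fall back to the default language  
 		Description = t != null ? t.Description : p.Description  
 	};  
</code></pre>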
<p>If we add this functionality later in our project, we need to go back to all the existing screens and change them to read data not only from one table (<code>Products</code>), but to join it with the translations table (<code>Products_Trans</code>). This may be very costly, because changing tens of screens built months ago may put our project in jeopardy.</p>
<p>The alternative is to build a generic mechanism that automatically does the join under the hood, based on some conventions and metadata. If we use Entity Framework and LINQ, and we have the data access made through a single central point as I’ve described in the <em><a href="https://oncodedesign.com/separating-data-access-concern/">Separating Data Access Concern</a></em> post, then this can be achieved.</p>
<p>We need to rely on some conventions:</p>
<ul>
<li>the translation tables and the EF mapped entities have the same name as the main ones, with a <code>_Trans</code> suffix</li>
<li>the translated columns have the same name as the ones in the main table</li>
<li>some catalog that gives the entity names (tables) for which there is a translation table</li>
</ul>
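<p>As an illustration, the catalog from the last convention can be very small (the names below are assumptions for the sketch, not part of the original design):</p>
<pre><code class="language-language-csharp"> // Illustrative sketch of the conventions catalog  
 public static class DataLocalizationCatalog  
 {  
 	private const string Suffix = &quot;_Trans&quot;;

 	// entities (tables) for which a translation table exists  
 	private static readonly HashSet&lt;string&gt; localizedEntities =  
 		new HashSet&lt;string&gt; { &quot;Product&quot; };

 	public static bool HasTranslations(Type entityType)  
 	{  
 		return localizedEntities.Contains(entityType.Name);  
 	}

 	public static string GetTranslationEntityName(Type entityType)  
 	{  
 		return entityType.Name + Suffix;  
 	}  
 }  
</code></pre>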
<p>With this, and by knowing that all the LINQ queries go through our one <code>Repository</code> and <code>UnitOfWork</code> implementation as described in the <a href="https://oncodedesign.com/separating-data-access-concern/">above post</a>, we intercept the lambda expression of each query, parse it, and recreate it with the join for the translation table.</p>
<p>To implement this we make all the <em>IQueryable</em> instances our <em>Repository</em> returns wrappers over the ones returned by EF.</p>
<pre><code class="language-language-csharp"> private IQueryable&lt;T&gt; GetEntitiesInternal&lt;T&gt;(bool localized) where T : class  
 {  
 	DbSet&lt;T&gt; dbSet = context.Set&lt;T&gt;();
 
 	return localized ? new DataLocalizationQueryable&lt;T&gt;(dbSet, this.cultureProvider) : dbSet;  
 }  
</code></pre>
<p>The <code>DataLocalizationQueryable</code> wrapper uses a visitor to go through the lambda expression and, for each member assignment node from the <code>Select</code> statement which needs to be translated, gets the value from the related property of the translation entity. Here is a code snippet that gives an idea of how the wrapper is implemented:</p>
<pre><code class="language-language-csharp"> public class DataLocalizationQueryable&lt;T&gt; : IOrderedQueryable&lt;T&gt;  
 {  
 	private IQueryable&lt;T&gt; query;  
 	private ICultureProvider cultureProvider;  
 	private ExpressionVisitor transVisitor;
 
	public DataLocalizationQueryable(IQueryable&lt;T&gt; query, ICultureProvider cultureProvider)  
	 {  
	 	…  
	 	transVisitor = new DataLocalizationExpressionVisitor(this.cultureProvider.GetCurrentUICulture());  
	 	this.Provider = new DataLocalizationQueryProvider(query.Provider, this.transVisitor, cultureProvider);  
	 }
 
	public IEnumerator&lt;T&gt; GetEnumerator()  
	 {  
	 	return query.Provider.CreateQuery&lt;T&gt;(  
	 	this.transVisitor.Visit(query.Expression)).GetEnumerator();  
	 }
 
	class DataLocalizationQueryProvider : IQueryProvider  
	 {  
	 	private IQueryProvider efProvider;  
	 	private ExpressionVisitor visitor;  
	 	private readonly ICultureProvider cultureProvider;
	 
	 public IQueryable&lt;TElement&gt; CreateQuery&lt;TElement&gt;(Expression expression)  
	 {  
	 	return new DataLocalizationQueryable&lt;TElement&gt;(  
	 	efProvider.CreateQuery&lt;TElement&gt;(expression), cultureProvider);  
	 }  
 }

class DataLocalizationExpressionVisitor : ExpressionVisitor  
 {  
 	private const string suffix = &quot;_Trans&quot;;  
 	private const string langCodePropName = &quot;LanguageCode&quot;;  
 	private readonly CultureInfo currentCulture;
 
 	public DataLocalizationExpressionVisitor(CultureInfo currentCulture)  
 	{ … }
 
 	protected override MemberAssignment VisitMemberAssignment(MemberAssignment node)  
 	{ … }  
 …  
 }  
</code></pre>
<p>Even if modifying lambda expressions at runtime isn’t a trivial task, we do it only once, as an extension to the data access, and we avoid going back and modifying tens of screens.</p>
<p>With this, we have covered the most common aspects of localization, and we’ve seen that if we give it some thought when we design our application we can easily avoid high costs and painful processes in the long run.</p>
<h5 id="thistopicisdiscussedinmoredetailinmycodedesigntraining">This topic is discussed in more detail in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecreditnochevia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_noche?ref=oncodedesign.com">noche via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Dependency Inversion and Assemblies References ]]>
            </title>
            <description>
                <![CDATA[ In my last posts I have talked about using assembly references to preserve critical design aspects. In Enforce Consistency with Assemblies References I talk about how we can use references to outline the allowed dependencies in code and how to use a references diagram to discover code at wrong levels ]]>
            </description>
            <link>https://oncodedesign.com/blog/dependency-inversion-and-assemblies-references/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b92</guid>
            <category>
                <![CDATA[ dependency inversion principle ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 30 Jun 2015 10:45:24 +0300</pubDate>
            <media:content url="http://res.cloudinary.com/oncodedesign/image/upload/v1453830382/31749513_s_qdtpdt.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>In my last posts I have talked about using assembly references to preserve critical design aspects. In <a href="https://oncodedesign.com/enforce-consistency-with-assembly-references"><em>Enforce Consistency with Assemblies References</em></a> I talk about how we can use references to outline the allowed dependencies in code and how to use a references diagram to discover code at wrong levels of abstractions. In <a href="https://oncodedesign.com/separating-data-access-concern"><em>Separating Data Access Concern</em></a> I show how we can enforce separation of concerns by using references and I detail this with the data access example. In this post I will talk about the relation between assembly references, in the context of above posts, and the <a href="http://docs.google.com/a/cleancoder.com/viewer?a=v&pid=explorer&chrome=true&srcid=0BwhCYaYDn8EgMjdlMWIzNGUtZTQ0NC00ZjQ5LTkwYzQtZjRhMDRlNTQ3ZGMz&hl=en&ref=oncodedesign.com">Dependency Inversion Principle (DIP)</a>.</p>
<p>When we reference another assembly we take a dependency on it. If assembly <em>A</em> references assembly <em>B</em> it means that <em>A</em> depends on <em>B</em>. Taking this to the data access example, it means that a business logic assembly depends on the data access assembly.</p>
<p><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830391/bl-da1_pkvocg.png" alt="BL-DA" loading="lazy"></p>
<p>This seems to be in contradiction with DIP which says that</p>
<blockquote>
<p><em>High level modules should not depend on low level modules</em></p>
</blockquote>
<p>The business logic is the high level module, and the data access is just details on how we get and store data. The contradiction may be even clearer if we refer to <em><a href="https://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html?ref=oncodedesign.com">The Clean Architecture</a></em> of Uncle Bob, where he points out that the application should not depend on frameworks.</p>
<p>Let’s look more closely at DIP, and focus on the word <em>INVERSION</em>. DIP doesn’t say only that we invert the dependency; more importantly, we do it by inverting the ownership of the contract (the interface).</p>
<p><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830388/dip_okgu4z.png" alt="DIP" loading="lazy"></p>
<p>After DIP is applied as the above diagram shows, the contract is owned by the high level layer and no longer by the low level one. The essence of DIP is that the changes to the contract should be driven by the high level modules, not by the low level ones. When the contract ownership is inverted, the dependency is also inverted, because now the low level module depends on the high level one by complying with its contract.</p>
<p>In our example, the business logic assemblies depend on the <code>DataAccess</code> assembly because the <code>IRepository</code> and <code>IUnitOfWork</code> interfaces are placed into the <code>DataAccess</code>. If we moved them into the business logic assemblies, we would invert the reference. Even more, we could then have several <code>DataAccess</code> assemblies with different implementations, one with Entity Framework, one with NHibernate, and at application startup we could choose which one to use for that specific deployment or configuration.</p>
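<p>Choosing the implementation at startup could then be sketched like this (a hypothetical example; the container API and class names are assumptions, not from this post):</p>
<pre><code class="language-language-csharp"> // Illustrative sketch: select the data access implementation per deployment  
 if (configuration.DataAccessProvider == &quot;EntityFramework&quot;)  
 {  
 	container.Register&lt;IRepository, EfRepository&gt;();  
 	container.Register&lt;IUnitOfWork, EfUnitOfWork&gt;();  
 }  
 else  
 {  
 	container.Register&lt;IRepository, NHibernateRepository&gt;();  
 	container.Register&lt;IUnitOfWork, NHibernateUnitOfWork&gt;();  
 }  
</code></pre>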
<p>However, this is not practical. We may have more business logic assemblies that need to access data, so which one should contain these interfaces? We could solve this by making an assembly with only the data access interfaces. With this, we would also keep the possibility of having more data access implementations. But do we really need more data access implementations? In most cases we don’t, so it isn’t worth separating them.</p>
<p>Now, coming back to the initial question: if we keep the data access interfaces into the <code>DataAccess</code> assembly and the rest of the assemblies reference it, are we following DIP?</p>
<p>YES, as long as these interfaces change ONLY based on the needs of the business logic modules and not because of implementation details, we follow DIP. From a logical separation point of view they are owned by the business logic layer, and the data access implementation depends on them. For practical reasons they are placed in the same assembly with the implementation, because it isn’t worth creating one only with the interfaces for this case.</p>
<p><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830386/dip-da_jo2tca.png" alt="DIP-DA" loading="lazy"></p>
<p>As long as the implementation details and specifics do not leak into these interfaces, they represent correct abstractions and the implementation remains well encapsulated.</p>
<p>Following the same reasoning, I sometimes create a <em>Contracts</em> assembly, which contains the underlying abstractions of the application. These are abstract concepts that are specific to the application, not to one module. They are the truths that do not vary when the details are changed. I may have more functional modules, which have assemblies that implement or consume these contracts.</p>
<p><img src="http://res.cloudinary.com/oncodedesign/image/upload/v1453830384/mdules_ta8n14.png" alt="Mdules" loading="lazy"> This figure shows this, by outlining that the functional modules do not reference one another but all reference the <code>Contracts</code> assembly. If you go deep into the DIP description in Uncle Bob’s paper, you will find this approach very similar to the <em>Button-Lamp</em> example from the <em>Finding the Underlying Abstraction</em> section.</p>
<h5 id="thistopicisdiscussedinmoredetailinmycodedesigntrainingwhentalkingaboutsolidprinciples">This topic is discussed in more detail in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a>, when talking about SOLID Principles.</h5>
<h6 id="featuredimagecreditjacephotovia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_jacephoto?ref=oncodedesign.com">jacephoto via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Enforce Consistency with Assembly References ]]>
            </title>
            <description>
                <![CDATA[ In this post I’ll describe some key aspects that I consider when designing the assemblies that build a system.


When we structure our code into assemblies (generally named binaries, libraries or packages in other platforms than .NET) we are reasoning about three main things:


 * Deployment: different assemblies are deployed ]]>
            </description>
            <link>https://oncodedesign.com/blog/enforce-consistency-with-assembly-references/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b8a</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 09 Jun 2015 10:35:12 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/6055203_s-1.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>In this post I’ll describe some key aspects that I consider when designing the assemblies that build a system.</p>
<p>When we structure our code into assemblies (generally named binaries, libraries or packages in other platforms than .NET) we are reasoning about three main things:</p>
<ul>
<li><strong>Deployment</strong>: different assemblies are deployed on different containers. Some assemblies end up on the UI client, some on the application server and some on third party servers we may use;</li>
<li><strong>Separate Concerns</strong>: we put code that addresses similar concerns in one assembly and we separate it from the code that addresses different concerns. This may translate into encapsulating the implementation of one functional area into a module and offering it through an abstract interface to others. It may also translate into separating the data access concern from the business logic;</li>
<li><strong>Assure Consistency:</strong> we want certain things to always be done in the same way throughout the entire application, so we define an assembly that will be reused by the other assemblies around our application</li>
</ul>
<p>Another important aspect of referencing assemblies (or linking binaries) is that we cannot have circular references. This can play an important role in managing dependencies: it may help us avoid circular dependencies among modules or components if we carefully map them to assemblies.</p>
<p>Each time I design the main structure of a new application I do it with all the above in mind. These, used well, can bring huge advantages in managing the growth of the application. Even if it is a rich desktop UI client, which has the business logic and a local database in the same process, so the entire application deploys in one container, I will still have more assemblies because I want the other advantages. I want to use assembly references to enforce separation of concerns and to enforce consistency. These are the most important tools to manage the complexity of the application, which is critical in large applications developed by many people or even several teams.</p>
<p>When the initial assembly structure is defined we have the assemblies (or assembly types) and the rules of how they reference each other. This should satisfy the deployment requirements, and it should reflect the concerns that must be separated and the things that must be done consistently. I usually put it in a simple diagram with boxes for assemblies and arrows for allowed references. Where there are no arrows, there cannot be references. This diagram not only helps to explain and verify the design, but it can also be used when reviewing the implementation. If in code we see references that are not in the diagram, it may be a fault in the implementation (encapsulation or abstraction leaking, code at the wrong level of abstraction, etc.) or it may be a case which was not handled by the design, so the diagram needs to be adjusted.</p>
<p>Let’s dive into details, by looking at some examples.</p>
<p>Logging is a cross-cutting concern, which in most cases we implement by using a third party library. We look for a library (<em>Log4Net</em>, for example) which has a high degree of configurability and can log to files, to databases, or send the traces to a web service. In all the places where we write a log trace, we want to specify the type, the criticality, the priority and the message in the same way. We want to use <em>Log4Net</em> the same way everywhere in our app. Consistency is important. When something needs to be changed, we want to be able to do the change following the same recipe in all the places where we log.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/logging-1.png" alt="Logging" loading="lazy"></p>
<p>We can easily enforce this by wrapping the external library in one of our assemblies. Our assembly defines the <em>Log</em> interface which we’ll use in the application. This interface shapes the logging library to our application specific needs. All the configuration and tweaking is done now in one single place: our <em>Logging</em> assembly which implements the <em>Log</em> interface. It is the only one that may reference <em>Log4Net.</em> The rest of the code of the application doesn’t even know that <em>Log4Net</em> is used.</p>
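<p>As a sketch of this wrapping (the interface shape is illustrative, not the one from the post), the application-owned contract and the single Log4Net-backed implementation could look like:</p>
<pre><code class="language-language-csharp"> // The application-owned logging contract (illustrative sketch)  
 public interface ILog  
 {  
 	void Info(string message);  
 	void Error(string message, Exception exception);  
 }

 // The only class in the application that references Log4Net  
 internal class Log4NetLog : ILog  
 {  
 	private readonly log4net.ILog log;

 	public Log4NetLog(Type source)  
 	{  
 		this.log = log4net.LogManager.GetLogger(source);  
 	}

 	public void Info(string message)  
 	{  
 		log.Info(message);  
 	}

 	public void Error(string message, Exception exception)  
 	{  
 		log.Error(message, exception);  
 	}  
 }  
</code></pre>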
<p>In general, any external library gives a very generic API and is extendable to many kinds of applications. The more applications the library fits, the more successful it is. When we plug such a library into our application we need to tweak it to our specifics and we need to use it in the same way in all cases.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/external.png" alt="External" loading="lazy"></p>
<p>Even if wrapping it is a very simple solution, it is very powerful. It isolates the change. If something needs to be changed in how the external library is configured, we no longer need to go through the entire application where it was used; it is directly used in only one place: our wrapper assembly. Even more, when we need to replace the external library or upgrade it to a new version, the changes are again isolated in our wrapper. We can isolate in our wrapper all the concerns of communicating with the external library, which may include communication with external systems, security, error handling and so on.</p>
<p>An example for using assembly references to enforce separation of concerns is to separate data access implementation.</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/dataaccess.png" alt="DataAccess" loading="lazy"></p>
<p>In this example the only assembly that can make a connection to the database is the <code>DataAccess</code> assembly. It implements all the data access concerns and offers an abstract interface to above layers. Even more, it does not contain the data model classes, so the business logic (validations or business flows) are kept outside. For more details on how this could be implemented you can refer to my previous post: <em><a href="https://florincoros.wordpress.com/2015/03/31/separating-data-access-concern/?ref=oncodedesign.com">Separating Data Access Concern</a></em>.</p>
<p>In the end, here is a simplified references diagram.<br>
<img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/references.png" alt="" loading="lazy"><br>
Here we can see that we do not have references between the assemblies that implement the business logic, the <em>Functional Modules</em>. They communicate only through abstract interfaces placed in the <em>Contracts</em> assembly. The <em>Contracts</em> assembly contains only interfaces and DTOs. No logic. With this we make sure that we will not create dependencies between the implementation details of the functional modules. The functional modules can access data through the <em>DataAccess</em> assembly, but they cannot go directly to the database. They don’t have any UI logic, since they do not reference UI framework assemblies (like <em>System.Web</em> or <em>System.Windows</em>). The UI assembly gets the functionality and data only through the abstract interfaces from the <em>Contracts</em> assembly; it can’t do data access otherwise. All are linked through <em>Dependency Injection</em>, which is abstracted by the <em><a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">AppBoot</a></em> assembly.</p>
<p>To conclude, even if you didn’t start with all these in mind when you created the assemblies of your app, I think it is worth the effort to draw such a diagram at any moment, because it will show opportunities to bring more order, more clarity and better ways to manage the size of your project.</p>
<h5 id="thisdesignapproachisdiscussedindetailinmycodedesigntraining">This design approach is discussed in detail in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<p></p>
<h6 id="featuredimagecreditaspect3dvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_aspect3d?ref=oncodedesign.com"> aspect3d via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Reflecting on IT Camp 2015 ]]>
            </title>
            <description>
                <![CDATA[ At this time, last week, I was getting ready to get on the stage at the fifth edition of IT Camp. I was starting to feel butterflies in my stomach. Even if it was the third time in a row that I was speaking here I was getting nervous. I ]]>
            </description>
            <link>https://oncodedesign.com/blog/reflecting-on-it-camp-2015/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b85</guid>
            <category>
                <![CDATA[ presentation ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Fri, 29 May 2015 13:06:40 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/itcamp-logo-black-transparent.png" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/references.jpg" alt="IT Camp 2015" loading="lazy"></p>
<p>At this time, last week, I was getting ready to get on the stage at the fifth edition of IT Camp. I was starting to feel butterflies in my stomach. Even if it was the third time in a row that I was speaking here I was getting nervous. I remember a friend telling me, before my talk, that this is a good sign. That it means that I care, even if I am comfortable with the talk and the topic. I guess it is true. I do care a lot about these opportunities, I always prepare them carefully and I do my best to say something meaningful to the audience, to have something that sticks in their mind, to make a small impact.</p>
<p>This year I talked about refactoring. I presented a pattern of refactoring that showed how we can get to lower coupling and a better separation of concerns by trying to increase the cohesion of our classes. I hope that I have managed to convey that refactoring is part of software development and that we cannot code without refactoring. I would be glad to see everyone from the audience with their hand up when asked if they refactor, knowing that refactoring is not a way to compensate for mistakes, not something that developers like and managers hate, but part of what we do. It is normal for every good developer. At the end I was asked the question I always get when I talk about good practices: “How do I convince my manager that this is right? That this is something that is needed?” This time I answered that we should try to translate it into non-technical terms by using metaphors, and I pointed to a short video that I value a lot, where Ward Cunningham presents the <a href="https://www.youtube.com/watch?v=pqeJFYwnkjE&ref=oncodedesign.com">Debt Metaphor</a><em>.</em> An answer maybe inspired by <a href="http://dannorth.net/blog/?ref=oncodedesign.com">Dan North</a>, whom I’ve recently met at <a href="http://craft-conf.com/2015?ref=oncodedesign.com">Craft Conference</a>.</p>
<p>What I love the most about IT Camp is that it manages to create this great atmosphere of learning and experience sharing. Even at its fifth edition, the enthusiasm is everywhere. It’s like the holiday we were waiting for. People are eager to comment on the sessions, to share the good and the bad things from their work. I always get back to work with higher energy and with a revitalized belief that small things matter and that we all can make a difference, even when we feel too far from decision making. IT Camp has proven that if you stick to strong principles, and if you learn from past editions, you can constantly improve and you can keep high standards once you get there.</p>
<p>If I were to pick only two things that characterized this edition from a content perspective, I would say security and a track appealing to managers.</p>
<p>Security topics were very present. There were many security experts among the speakers, and it was the subject of many discussions during the breaks or over beers at the end of the day. I think that creating awareness of security is much needed in the IT industry of Cluj. We need this. There are too many cases where, under delivery pressure, we stop thinking about security once we are done with the login screen, and we take huge risks for us and for our customers without even knowing it. Putting it at the top of the agenda at one of the most relevant conferences in our area helps.</p>
<p>From the business track, which targeted CxOs and managers in general, I hope to see more managers attending developers’ conferences and meetups. I believe that, in general, we need to make a better team with management. We need to understand each other and to really work together on a common strategy that leads us to common goals. We need to close the gap between these worlds. I think that if managers come to developer events and if we get closer to the business, this gap may shrink. And I believe that community events can play an important role in this.</p>
<p>Mihai, Tudy and the whole <a href="http://www.itcamp.ro/about.html?ref=oncodedesign.com#team">organizing team</a>, THANK YOU for doing it again! I’m looking forward to the next edition.</p>
<p><em>UPDATE:</em> You can see my slides on <a href="http://www.slideshare.net/FlorinCoros/low-couplighighcohesion-itcamp15?ref=oncodedesign.com">slideshare</a> and the code demo on <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling?ref=oncodedesign.com">github</a>.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Unit Testing on Top of Entity Framework DbContext ]]>
            </title>
            <description>
                <![CDATA[ When writing unit tests one of the challenges is to isolate your tests from everything. To isolate them from the code that is not in their target and also from the other tests. As Roy Osherove puts it in his book “The Art of Unit Testing





[...] a unit test should ]]>
            </description>
            <link>https://oncodedesign.com/blog/unit-testing-on-top-of-entity-framework-dbcontext/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b91</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 12 May 2015 10:49:10 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/27640565_s.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>When writing unit tests, one of the challenges is to isolate your tests from everything: from the code that is not their target and also from the other tests. As <a href="http://osherove.com/?ref=oncodedesign.com">Roy Osherove</a> puts it in his book “<a href="http://www.amazon.com/gp/product/1617290890?ref=oncodedesign.com">The Art of Unit Testing</a>”:</p>
<blockquote>
<p>[...] a unit test should always run in its little world, isolated from even the knowledge that other tests there may do similar or different things</p>
</blockquote>
<p>Test isolation makes a key difference between test suites which are maintainable and bring high value and the ones which are a burden and bring little or no value.</p>
<p>In data-intensive applications, one of the most common difficulties when writing unit tests is to isolate them from the database. We would like the tests which verify the business logic not to hit the database, so we can easily configure test data for different test cases and easily assert the result.</p>
<p>One of the best approaches to achieve this is to implement the <a href="http://martinfowler.com/eaaCatalog/repository.html?ref=oncodedesign.com">Repository Pattern</a> with a good abstraction that gives all the necessary functions to do data access. In my <a href="https://oncodedesign.com/separating-data-access-concern/" title="Separating Data Access Concern">previous posts</a> I present some examples. Having this, in our tests we can use stubs or mocks as implementations of the <em>IRepository</em> interface, and we can test the caller code in isolation. The code snippet below shows such a test, which verifies that the <em>where</em> clause filters out data. Similarly, tests which verify that the result is ordered by a certain criterion, or that some business calculations are done correctly when the data is read or saved, can easily be written.</p>
<pre><code class="language-language-csharp"> [Test]  
 public void GetOrdersForShipment_AlsoPendingOrders_PendingOrdersFilteredOut()  
 {  
 	Order orderInPending = new Order {Status = OrderStatus.Pending};  
 	Order orderToShip = new Order {Status = OrderStatus.Processed};  
 	IRepository repStub = GetRepStubWith(orderInPending, orderToShip);
 
 	var target = new OrderingService(repStub);
 
 	IQueryable&lt;Order&gt; orders = target.GetOrdersForShipment();
 
 	var expected = new[] {orderToShip};  
 	AssertEx.AreEquivalent(orders, expected);  
}
 
private IRepository GetRepStubWith(params Order[] orders)  
{  
 	Mock&lt;IRepository&gt; repStub = new Mock&lt;IRepository&gt;();  
 	repStub.Setup(r =&gt; r.GetEntities&lt;Order&gt;())  
 			.Returns(orders.AsQueryable());
 
 	return repStub.Object;  
}
</code></pre>
<p><em><a href="https://msdn.microsoft.com/en-us/data/ef.aspx?ref=oncodedesign.com">Entity Framework</a></em> supports testable code designs and the <em>DbContext</em> in itself is a repository implementation. So what would it mean to stub or mock the <em>DbContext</em> directly and write isolated tests in a similar way as we did in the example above? We might need to do this when we don’t wrap the <em>DbContext</em> into another repository implementation or we want to test the code that does the wrapping (as we did in the <code>DataAccess</code> implementation <a href="https://github.com/iQuarc/DataAccess?ref=oncodedesign.com">here</a>).</p>
<p>To get this going, the first thing we need is to make sure that the code we want to test does not directly use our specific context class, but its base class, <em>DbContext</em>. A factory, used by the target code instead of newing up a <em>MyDatabaseContext</em> instance, gets this done. In the test code we will have the factory return a stub or a mock for the context.</p>
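<p>A minimal sketch of such a factory (assuming the EntityFramework NuGet package; the names <code>IDbContextFactory</code> and <code>MyDatabaseContextFactory</code> are my own, illustrative choices, not from a specific library):</p>

```csharp
using System.Data.Entity;

// The production code asks the factory for a context and
// only ever sees the DbContext base class.
public interface IDbContextFactory
{
    DbContext Create();
}

// Stand-in for the concrete EF context generated for our model.
public class MyDatabaseContext : DbContext
{
}

// The concrete factory is the single place that knows the specific context type.
public class MyDatabaseContextFactory : IDbContextFactory
{
    public DbContext Create()
    {
        return new MyDatabaseContext();
    }
}
```

<p>In the tests, a stub of this factory simply returns the stubbed <code>DbContext</code>.</p>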
<p>Let’s start with a simple test: verify that filtering data at read time works. It would look like this:</p>
<pre><code class="language-language-csharp"> // target (production) code  
 class UsersService  
 {  
 	private DbContext dbContext;
 
 	public User GetUserById(int id)  
 	{  
 		return dbContext.Set&lt;User&gt;().FirstOrDefault(x =&gt; x.Id == id);  
 	}  
 	…  
 }

// unit test code, test method  
 	… 	 
	var contextFactory = GetFactoryWithTestData();  
 	var target = new UsersService(contextFactory);

 	User actual = target.GetUserById(2);

 	User expected = new User {Id = 2};  
 	Assert.AreEqual(expected, actual);
</code></pre>
<p>The test data is not visible here. This is because setting it up requires quite some code, and I’ve put it into the <code>GetFactoryWithTestData()</code> function. First, this function needs to build a stub for <code>DbSet&lt;User&gt;</code>, which contains a few user DTOs among which one has the <code>Id == 2</code>. Second, it has to build and configure a <code>DbContext</code> stub which returns the <code>DbSet</code> stub. In a simplified version the code looks like below:</p>
<pre><code class="language-language-csharp"> …  
 private static DbContext CreateContextWithTestData()  
 {  
 	List&lt;User&gt; users = new List&lt;User&gt; {new User {Id = 1}, new User {Id = 2}};  
 	DbSet&lt;User&gt; userSet = GetDbSetStub(users);
 
 	Mock&lt;DbContext&gt; contextStub = new Mock&lt;DbContext&gt;();  
 	contextStub.Setup(x =&gt; x.Set&lt;User&gt;())  
 	.Returns(() =&gt; userSet);
 
 	return contextStub.Object;  
 }

…  
 private static DbSet&lt;T&gt; GetDbSetStub&lt;T&gt;(List&lt;T&gt; values) where T : class  
 {  
 	return new FakeSet&lt;T&gt;(values);  
 }  
 …  
 class FakeSet&lt;T&gt; : DbSet&lt;T&gt;, IQueryable where T : class  
 {  
 	List&lt;T&gt; values;  
 	public FakeSet(IEnumerable&lt;T&gt; values)  
 	{  
 		this.values = values.ToList();  
	}
	
 	IQueryProvider IQueryable.Provider  
 	{  
 		get { return values.AsQueryable().Provider; }  
 	}
 
 	Expression IQueryable.Expression  
 	{  
 		get { return values.AsQueryable().Expression; }  
 	}
 
 	Type IQueryable.ElementType  
 	{  
 		get { return values.AsQueryable().ElementType; }  
 	}
 
 	public IList&lt;T&gt; Values  
 	{  
 		get { return values; }  
 	}
 
 	public override T Add(T entity)  
 	{  
 		values.Add(entity);  
 		return entity;  
 	}
 
 	public override T Remove(T entity)  
 	{  
 		values.Remove(entity);  
 		return entity;  
 	}  
 }  
</code></pre>
<p>This works well for testing simple queries. For more complex scenarios, setting up data for one-to-many or many-to-many relations gets quite complex. You could set it once with a model of <em>Users</em> and <em>Roles</em> and use it for more tests, but it is hard to do the same for testing other areas of the application and all the business logic.</p>
<p>Another thing to notice in the above snippet is that we have written the <em>FakeSet</em> class instead of using <a href="https://github.com/Moq/moq4?ref=oncodedesign.com">Moq</a>. This is because we want to keep some state on it (the values) and use it in test cases that involve adding or removing entities from the context.</p>
<p>Up to this point, we were able to stub or mock the <em>DbContext</em> and <em>DbSet</em> because all the methods our code used were overridable. This allowed us, or Moq, to replace their behavior in the tests. However, not all public members (or their dependencies) of <em>DbContext</em> are like this, therefore it gets more difficult to isolate the tests for some scenarios.</p>
<p>For example, if we would like to test the code that executes when an entity is read from the database, we would need to be able to raise the <code>ObjectMaterialized</code> event on the stubbed context. This event is not on the <code>DbContext</code>, but on the <code>ObjectContext</code>. The <code>ObjectContext</code> property is neither public nor overridable, which makes it almost impossible to replace it with a stubbed <code>ObjectContext</code> on which we could trigger the event. To overcome this we can create a <code>DbContext</code> wrapper that just pushes up this event. Like this:</p>
<pre><code class="language-language-csharp"> public sealed class DbContextWrapper : IDbContextWrapper  
 {  
 	private readonly ObjectContext objectContext;
 
 	public DbContextWrapper(DbContext context)  
 	{  
 		Context = context;  
 		objectContext = ((IObjectContextAdapter) context).ObjectContext;  
 		objectContext.ObjectMaterialized += ObjectMaterializedHandler;  
 	}
 
 	private void ObjectMaterializedHandler(object sender, ObjectMaterializedEventArgs e)  
 	{  
 		EntityLoadedEventHandler handler = EntityLoaded;  
 		if (handler != null)  
 			handler(this, new EntityLoadedEventHandlerArgs(e.Entity));  
 	}
 
 	public DbContext Context { get; private set; }
 
 	public event EntityLoadedEventHandler EntityLoaded;
 
 	public void Dispose()  
 	{  
 		objectContext.ObjectMaterialized -= ObjectMaterializedHandler;  
 		Context.Dispose();  
 	}  
 }  
</code></pre>
<p>Now we need to modify all our test code and production code to use the <code>IDbContextWrapper</code> instead of the <code>DbContext</code>. The factory will return a stub for it and the stub can be configured to raise the event.</p>
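<p>As a sketch of what such a test could look like (assuming Moq; <code>AuditTrailService</code> and its <code>WasNotified</code> method are hypothetical placeholders for whatever code subscribes to the event, and the exact <code>Raise</code> arguments depend on the <code>EntityLoadedEventHandler</code> delegate signature):</p>

```csharp
// Arrange: stub the wrapper and hand it to the code under test
var wrapperStub = new Mock<IDbContextWrapper>();
var target = new AuditTrailService(wrapperStub.Object); // hypothetical subscriber

// Act: simulate EF materializing an entity by raising the event on the stub;
// for custom delegates Moq expects all delegate arguments (sender, args)
var user = new User { Id = 7 };
wrapperStub.Raise(w => w.EntityLoaded += null,
                  wrapperStub.Object, new EntityLoadedEventHandlerArgs(user));

// Assert: verify the subscriber reacted to the loaded entity (hypothetical API)
Assert.IsTrue(target.WasNotified(user));
```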
<p>This is quite inconvenient. Our tests have too much knowledge of implementation details of the production code. Even more, when trying to test more code that accesses data, things will get more complex and this wrapper will grow, creating a hard to manage mess. It is also an example of how tests may damage the production code design. Maybe with more refactoring this wrapper would lead to the <code>IRepository</code> interface as the abstraction of the repository pattern which hides EF from the production code, but… it seems unlikely, and a very long and painful route.</p>
<p>All these point to the conclusion that for testing the business logic code in isolation, abstracting the data access behind some clear interfaces and testing from there up is a better approach. Implementing the repository pattern on top of EF not only gives better separation of concerns and may enforce consistency, but it also helps in testing.</p>
<p>However, testing on the <code>DbContext</code> directly may be useful when we want to write some tests on the repository implementation that wraps EF, and we want these tests to be isolated from the database and from the caller code. Such an implementation is available on the <a href="http://github.com/iQuarc?ref=oncodedesign.com">iQuarc github account</a>, in the <a href="http://github.com/iQuarc/DataAccess?ref=oncodedesign.com"><code>DataAccess</code> repository</a>.</p>
<h5 id="morediscussionsaboutwritinggoodunittetsarepartofmyunittestingtraining">More discussions about writing good unit tests are part of my <a href="https://oncodedesign.com/training-unit-testing/">Unit Testing Training</a></h5>
<h6 id="featuredimagecreditgeargodzvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_geargodz?ref=oncodedesign.com"> geargodz via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Repository Implementations ]]>
            </title>
            <description>
                <![CDATA[ In my previous post I have presented a way to separate your data access from the business logic, when a relational database is used. I have shown another implementation of the well-known Repository pattern. Since Martin Fowler described it in his book Patterns of Enterprise Application Architecture it became one ]]>
            </description>
            <link>https://oncodedesign.com/blog/repository-implementations/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b87</guid>
            <category>
                <![CDATA[ abstraction ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 21 Apr 2015 10:53:44 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/03/13910339_s.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>In my previous post I presented a way to separate your data access from the business logic when a relational database is used. I showed another implementation of the well-known <a href="http://martinfowler.com/eaaCatalog/repository.html?ref=oncodedesign.com">Repository pattern</a>. Since <a href="http://martinfowler.com/?ref=oncodedesign.com">Martin Fowler</a> described it in his book <a href="http://martinfowler.com/books/eaa.html?ref=oncodedesign.com">Patterns of Enterprise Application Architecture</a>, it has become one of the most implemented patterns in enterprise applications. As it happens with all design patterns, design principles or any other widely known best practices, we find them in many different implementations, mixed in different ways, but they all share common elements, because they all implement the same solution to a common problem. They all have classes and interfaces named the same: things like <code>Repository</code>, <code>UnitOfWork</code>, <code>IEntity</code>, <code>BaseEntity</code>, <code>ConcurrencyException</code> and so on.</p>
<p>I like the quote of <a href="http://en.wikipedia.org/wiki/Christopher_Alexander?ref=oncodedesign.com">Christopher Alexander</a> in <a href="http://www.amazon.co.uk/Design-patterns-elements-reusable-object-oriented/dp/0201633612?ref=oncodedesign.com">GoF</a> (originally in his book <a href="http://www.amazon.com/Pattern-Language-Buildings-Construction-Environmental/dp/0195019199?ref=oncodedesign.com">A Pattern Language</a>):</p>
<blockquote>
<p>Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice</p>
</blockquote>
<p>In most of the projects I’ve been involved in, I’ve seen the Repository pattern implemented differently. Tweaked to the context of that project, to the specifics of that team, according to their own style. All different implementations of the same patterns and practices.</p>
<p>Out of curiosity about how many other blog posts are out there which show implementations similar to the one I’ve presented, I googled: “<em>generic repository implementation</em>”. As expected there are quite a few, and as expected they are very similar to mine and to each other. I have randomly picked a few (the first 5 returned by my google search at the time of writing this article) that I will comment on in this post.</p>
<p><strong>Blog Post: <a href="http://blog.falafel.com/implement-step-step-generic-repository-pattern-c/?ref=oncodedesign.com">Implement Step-by-Step Generic Repository Pattern in C#</a></strong></p>
<p>I like the idea of defining an <code>IEntity</code> interface which should be implemented by all the entities that your repository can persist. This opens the possibility of writing a more generic code into a default implementation of <code>IRepository&lt;T&gt;</code>, (a <code>Repository&lt;T&gt;</code> class), where you could push all the common code like a generic implementation of <code>.FindById(int Id)</code>. This will only work if you can enforce the convention that all the tables in your database have a surrogate primary key of type integer.</p>
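<p>To illustrate the idea, here is a minimal sketch (my own naming and simplification, not the code from the linked post) of how the <code>IEntity</code> convention enables a generic <code>FindById</code>:</p>

```csharp
using System.Linq;

// Every persistable entity exposes its surrogate integer key through IEntity.
public interface IEntity
{
    int Id { get; }
}

// A default repository can now implement FindById once, for all entity types.
public class Repository<T> where T : class, IEntity
{
    private readonly IQueryable<T> entities; // in a real implementation, backed by the ORM

    public Repository(IQueryable<T> entities)
    {
        this.entities = entities;
    }

    public T FindById(int id)
    {
        // works generically because IEntity guarantees the surrogate key
        return entities.FirstOrDefault(e => e.Id == id);
    }
}

// Illustrative entity used in the usage example below.
public class User : IEntity
{
    public int Id { get; set; }
}
```

<p>With this in place, <code>new Repository&lt;User&gt;(users.AsQueryable()).FindById(2)</code> works without any <code>User</code>-specific data access code.</p>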
<p>I appreciate a lot the discussions in the comments of the post. I am closer to the point of view of <em>ardalis</em> regarding the use of such an abstraction on top of EF. However, I don’t see <code>IQueryable&lt;T&gt;</code> as a leaky abstraction. In my view it abstracts a query, which may be executed on different query providers, EF being one of them. It also helps a lot when unit testing, because I can stub the <code>IRepository</code> to return <code>array.AsQueryable()</code>.</p>
<p><strong>Blog Post: <a href="http://techbrij.com/generic-repository-unit-of-work-entity-framework-unit-testing-asp-net-mvc?ref=oncodedesign.com">Generic Repository and Unit of Work Pattern, Entity Framework, Unit Testing, Autofac IoC Container and ASP.NET MVC [Part 1]</a></strong></p>
<p>The implementation in this blog post also has the <code>IEntity</code>, but with a slight variation: it is generic by the type of the primary key. With this, we can have different types for the primary key, but it increases the complexity of writing generic code in a common repository implementation.</p>
<p>I like the <code>IAuditableEntity</code>, which should be implemented by all entities on which we want to store information like <code>CreatedBy</code>, <code>CreatedDate</code> etc. With this, in a generic implementation of the repository, before we save an entity we can check if it implements <code>IAuditableEntity</code> and if yes, then we can set the correct values in the <code>CreatedBy</code> and <code>CreatedDate</code> properties and then continue with the save.</p>
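<p>A small sketch of that check (the interface members follow the post's description; the surrounding repository code is my simplified, plain-C# stand-in):</p>

```csharp
using System;

// Entities that opt into auditing implement this interface.
public interface IAuditableEntity
{
    string CreatedBy { get; set; }
    DateTime CreatedDate { get; set; }
}

public class AuditingRepository
{
    // Generic save hook: fill the audit fields only when the entity opts in.
    public void Save(object entity, string currentUser)
    {
        if (entity is IAuditableEntity auditable)
        {
            auditable.CreatedBy = currentUser;
            auditable.CreatedDate = DateTime.UtcNow;
        }
        // ...continue with the actual persist
    }
}

// Illustrative auditable entity.
public class Invoice : IAuditableEntity
{
    public string CreatedBy { get; set; }
    public DateTime CreatedDate { get; set; }
}
```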
<p>The implementation of the <code>UnitOfWork</code> looks a bit strange. It has a <code>Commit()</code> function, which should be called when the client wants to persist a set of changes. What I don’t like is that the client service will need to use one instance of a <code>Repository</code> and another instance of a <code>UnitOfWork</code>, and these instances should be built in such a way that they wrap the same <code>DbContext</code> instance. The way objects are created (Dependency Injection, Service Locator or custom factories) needs to take care of this, and all the client services need to be aware of it and use it consistently. Things may get too complex when a client service needs to use repositories of two different entities. It will have to deal with four dependencies only for this.</p>
<p>This implementation goes even further and defines a generic service for maintaining business entities, called <code>EntityService</code>. A suggestive name I’ve also used in different projects, for a similar generic service with the same scope. This set of services should use the repository (data access), encapsulate the logic around maintaining a business entity (business logic) and give the controller (UI) functions for the basic CRUD operations.</p>
<p><strong>Blog Post: <a href="http://www.codeproject.com/Articles/770156/Understanding-Repository-and-Unit-of-Work-Pattern?ref=oncodedesign.com">Understanding Repository and Unit of Work Pattern and Implementing Generic Repository in ASP.NET MVC using Entity Framework</a></strong></p>
<p>This blog post addresses the <code>UnitOfWork</code> implementation in a better way than the previous two. Here, the <code>UnitOfWork</code> has a set of <code>Repository</code> instances that it creates and makes available through get properties. This solves in a better way the thing that I didn’t like in the previous post. The <code>DbContext</code> is created and owned by the <code>UnitOfWork</code>, and because the <code>UnitOfWork</code> is the one that creates the repositories, it can pass the context to the <code>Repository</code> class.</p>
<p>The drawback of this is that it is quite difficult for a client class to know which repositories are available on a specific <code>UnitOfWork</code> instance. It does not know what get properties to expect from a <code>UnitOfWork</code>. This may lead to inconsistency. As a client I might end up reaching the entities I want through strange paths starting from different repositories. It may also lead developers to add repository properties to <code>UnitOfWork</code> classes as they need them in different contexts, making the <code>UnitOfWork</code> classes fat and difficult to maintain.</p>
<p>The generic <code>UnitOfWork</code> class presented in the article helps, but even that may become quite complex when there are many specific repository implementations that need to be created and cached in the dictionary. Essentially, the <code>UnitOfWork</code> turns into a Service Locator implementation for repositories of different entity types.</p>
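<p>The shape of such a generic <code>UnitOfWork</code> can be sketched like this (a simplified, self-contained stand-in of my own; the dictionary-based lazy caching is the part described in the article):</p>

```csharp
using System;
using System.Collections.Generic;

// Simplified repository stand-in, so the sketch stays self-contained.
public class InMemoryRepository<T> where T : class
{
    public List<T> Items { get; } = new List<T>();
}

public class GenericUnitOfWork
{
    // One repository per entity type, created lazily and cached:
    // effectively a Service Locator for repositories.
    private readonly Dictionary<Type, object> repositories = new Dictionary<Type, object>();

    public InMemoryRepository<T> Repository<T>() where T : class
    {
        object repo;
        if (!repositories.TryGetValue(typeof(T), out repo))
        {
            repo = new InMemoryRepository<T>();
            repositories[typeof(T)] = repo;
        }
        return (InMemoryRepository<T>)repo;
    }
}
```

<p>Calling <code>uow.Repository&lt;Order&gt;()</code> twice returns the same cached instance, which is exactly the factory-plus-cache behavior that can grow complex as entity types multiply.</p>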
<p><strong>Blog Post: <a href="http://blog.longle.net/2013/05/11/genericizing-the-unit-of-work-pattern-repository-pattern-with-entity-framework-in-mvc/?ref=oncodedesign.com">Generically Implementing the Unit of Work &amp; Repository Pattern with Entity Framework in MVC &amp; Simplifying Entity Graphs</a></strong></p>
<p>This implementation follows the same way of implementing a <code>UnitOfWork</code> as the previous one: the <code>UnitOfWork</code> exposes a generic repository through a get property.</p>
<p>The article shows the idea of abstracting the <code>DbContext</code> under an <code>IDbContext</code> interface. With this, the <code>UnitOfWork</code> and <code>Repository</code> no longer depend on a specific context, but on this abstraction. This can help in separating the data access implementation into another assembly than the one in which we have generated the code for a specific context. This interface may be useful when you want Dependency Injection or a Service Locator to get directly involved in creating context instances and you need an interface for configuring the container. Otherwise, the same separation may be achieved with an abstract factory.</p>
<p>Another interesting thing is the <code>RepositoryQuery&lt;T&gt;</code>. The <code>Repository.Get()</code> returns neither an <code>IEnumerable</code> nor an <code>IQueryable</code>. It returns this <code>RepositoryQuery&lt;T&gt;</code>, which is somehow in the middle. It limits the queries which may be defined by giving <code>Include</code>, <code>Filter</code> or <code>OrderBy</code> functions, and when you call <code>.Get()</code> it composes the expressions and executes the query on the underlying repository.</p>
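<p>A rough sketch of what such an intermediate query object looks like (the member names follow the article's description; the internals are my simplification and skip <code>Include</code>, which needs EF):</p>

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class RepositoryQuery<T>
{
    private readonly IQueryable<T> source;
    private Expression<Func<T, bool>> filter;
    private Func<IQueryable<T>, IOrderedQueryable<T>> orderBy;

    public RepositoryQuery(IQueryable<T> source)
    {
        this.source = source;
    }

    // The fluent members only collect the query parts...
    public RepositoryQuery<T> Filter(Expression<Func<T, bool>> predicate)
    {
        filter = predicate;
        return this;
    }

    public RepositoryQuery<T> OrderBy(Func<IQueryable<T>, IOrderedQueryable<T>> order)
    {
        orderBy = order;
        return this;
    }

    // ...and Get() composes and executes them on the underlying provider.
    public IEnumerable<T> Get()
    {
        IQueryable<T> query = source;
        if (filter != null) query = query.Where(filter);
        if (orderBy != null) query = orderBy(query);
        return query.ToList();
    }
}
```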
<p>It is hard to see the cases where the complexity added by this extra indirection pays off. It brings a common place where we can make sure that the queries written by upper layers work the same on any query provider, but unless we know that we need to easily replace EF with something else… it might not be worth doing. Another advantage I can think of is that it enforces consistency in the client queries, but this needs to be balanced with the limitations it brings.</p>
<p><strong>Blog Post: <a href="http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application?ref=oncodedesign.com">Implementing the Repository and Unit of Work Patterns in an ASP.NET MVC Application</a></strong></p>
<p>This article is part of a tutorial on <a href="http://www.asp.net/?ref=oncodedesign.com">www.asp.net</a> about getting started with EF and MVC. It presents step by step the process of evolving from a simple repository, which does nothing more than wrap a <code>DbContext</code>, towards a generic repository and then towards a unit of work that holds more repository instances. It follows the same idea: a <code>UnitOfWork</code> class which provides a generic repository through a get property. The <code>UnitOfWork</code> takes care of creating and caching multiple repository instances, one for each entity type. This may turn it into a complex factory.</p>
<p>Here, the <code>Get()</code> function of the repository returns an <code>IEnumerable</code>. It receives through input parameters different parts of a query: there are parameters to give a filter expression, an order-by expression or a string to specify the related entities. In my view this is an unfortunate choice. It does not give the client a fluent API to specify the query. The client code has to tweak the parameters in different ways, which increases the complexity on its side. There will also be cases when new <code>Get()</code> overloads are needed to write the LINQ queries directly against the <code>DbContext</code>. This may lead to inconsistency. I would rather return <code>IQueryable</code>, or if not, then have a <code>Get()</code> without parameters which returns <code>IEnumerable</code> and explicitly ask for specific repository implementations which define other <code>Get()</code> overloads for the specific queries.</p>
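<p>The shape of such a parameterized <code>Get()</code> is roughly the following (my simplification; the tutorial's version also takes an include string handled through EF's <code>Include</code>, which I leave out here, and <code>ParameterizedRepository</code> is an illustrative name):</p>

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class ParameterizedRepository<TEntity>
{
    private readonly IQueryable<TEntity> dbSet; // stands in for context.Set<TEntity>()

    public ParameterizedRepository(IQueryable<TEntity> dbSet)
    {
        this.dbSet = dbSet;
    }

    // The query parts arrive as optional parameters instead of a fluent API.
    public IEnumerable<TEntity> Get(
        Expression<Func<TEntity, bool>> filter = null,
        Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null)
    {
        IQueryable<TEntity> query = dbSet;
        if (filter != null)
            query = query.Where(filter);
        if (orderBy != null)
            return orderBy(query).ToList();
        return query.ToList();
    }
}
```

<p>Call sites have to pass lambdas through the parameters, which is what makes them harder to read than a fluent chain.</p>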
<p><strong>Conclusions</strong></p>
<p>I think the biggest takeaway from these blog posts is that there are many ways we can implement the repository and unit of work patterns. All implementations achieve the same goal, which is separating the data access concern from the other concerns. The implementations differ because they are used in different contexts. All have pluses and minuses. What may work in one project may not work in another, because the tradeoffs that are made differ from project to project. What is a critical plus for some teams in some contexts may not be useful for others. What is not a big minus in some cases may prove to ruin projects in other contexts.</p>
<p>The more implementations of a pattern you see, the more design ideas you will get when you need to implement it for your current case. The more projects you’ve done where such patterns were used, the easier and faster it will be to implement it again for a new context or to evaluate possible implementations.</p>
<h5 id="manyimplementationsofdataaccessarediscussedindetailinmycodedesigntraining">Many implementations of data access are discussed in detail in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="featuredimagecredit1tjfvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_1tjf?ref=oncodedesign.com">1tjf via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Separating Data Access Concern ]]>
            </title>
            <description>
                <![CDATA[ In our days most of the applications that have a relational database as storage, use an ORM to access the data. The ORM (Entity Framework, Hibernate, etc.) does most of the data access implementation. Many of them have a modern API for querying data and for creating sessions of editing ]]>
            </description>
            <link>https://oncodedesign.com/blog/separating-data-access-concern/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b89</guid>
            <category>
                <![CDATA[ abstraction ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 31 Mar 2015 10:47:43 +0300</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/22060527_s.jpg" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>Nowadays most applications that have a relational database as storage use an ORM to access the data. The ORM (Entity Framework, Hibernate, etc.) does most of the data access implementation. Many of them have a modern API for querying data and for creating sessions of editing and saving changes. They also provide mechanisms to hook into events when data is loaded or before data is saved.</p>
<p>However, using the ORM API directly in the classes which implement the business logic (and business flows) or in the Controllers (or ViewModels) which have the UI logic is not a good idea in most cases. It leads to low maintainability and high costs of change. The main reasons for this are poor consistency in how the data access is done throughout the entire app and the mixture of business logic with data access concerns.</p>
<p>Suppose we are going to use EF in the entire application to access the database. In a large code base there will be places where the EF context is created in the constructor of the class that uses it, and there will be places where the context is created in the method that needs to get or alter some data. In some cases the context will be disposed by the class that created it and in others it will not be disposed at all. There will be cases where it is passed in as a parameter by the caller code because of the entities which are bound to it. There will also be cases where, under certain circumstances, entities are attached to newly created contexts. Does this sound familiar?</p>
<p>All these signal poor consistency, which leads to increased complexity. When we add error handling, logging or multiple contexts for multiple databases, the complexity increases exponentially and becomes uncontrollable. Adding features like data auditing, data localization or instrumentation (for performance measurements), or enhancing the data access capabilities in any other way, becomes very costly. It implies going through the entire code base where the EF context was used and making these changes. When business logic is not well separated from data consistency validations, enhancing data access capabilities will most likely affect the use case functionalities. We’ll introduce bugs. Our code will smell rigid and fragile.</p>
<p>To avoid all of the above, we can abstract the data access and encapsulate its implementation, hiding EF from the caller code. The caller code can then do data access only in the ways defined by our abstraction. The abstraction has to be good enough not to limit the capabilities of the underlying ORM, while allowing the implementation to hide it without leaking any dependencies to the layers above.</p>
<p>In the rest of this post I will detail such an implementation. Its source code is available on the <a href="http://github.com/iQuarc?ref=oncodedesign.com">iQuarc github account</a>, in the <a href="http://github.com/iQuarc/DataAccess?ref=oncodedesign.com"><code>DataAccess</code> repository</a>. It is designed as a reusable component, which can be installed as a <a href="http://www.nuget.org/packages/iQuarc.DataAccess/?ref=oncodedesign.com">NuGet package</a>.</p>
<p>The main goals of this library are:</p>
<ul>
<li>to support a consistent development model for reading or modifying data in the entire application</li>
<li>to enforce separation of concerns by separating data access from business logic</li>
</ul>
<p>One of the main ideas is that the assemblies that need to do data access (the ones that implement the business logic) are not allowed to have any references to the Entity Framework assemblies. They can only depend on and use the public interface exposed by the <code>DataAccess</code> assembly. They will reference things like <code>System.Data</code> or <code>System.Core</code> and will take full advantage of <code>LINQ</code> and <code>IQueryable</code>, but they do not know that EF is behind them. As far as they are concerned, any ORM that provides a compatible <code>IQueryProvider</code> implementation may be used as the implementation of the abstract <code>DataAccess</code> interfaces they use. The figure below illustrates this:</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/dataaccess.png" alt="DataAccess diagram" loading="lazy"></p>
<p>This enforces consistency in how data access is done across the entire application. Any class that needs to access data has to use the <code>DataAccess</code> interfaces, because it can no longer create or get an EF context. Now, each time we need to change or enhance the data access implementation, there is one central place to do it.</p>
<p>Another important aspect revealed by the diagram above is that the database model classes are separated into their own assembly (<code>DataModel</code>). With code-first support, we can generate simple POCOs, with no dependencies on EF, as the classes mapped to the database tables. These POCOs remain as simple as they are, and they change only when the database model changes. They hold no logic.</p>
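<p>For illustration, a code-first POCO in the <code>DataModel</code> assembly might look like the hypothetical <code>Order</code> entity below (the property names are assumptions, not the actual sample model):</p>
<pre><code class="language-language-csharp">// Hypothetical entity in the DataModel assembly.
// It has no dependency on EF and holds no logic; it changes
// only when the database model changes.
public class Order
{
	public int Id { get; set; }
	public DateTime Date { get; set; }
	public Status Status { get; set; }

	public virtual Customer Customer { get; set; }
	public virtual ICollection&lt;OrderLine&gt; OrderLines { get; set; }
}
</code></pre>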
<p>Having this, plus the constraint that the <code>DataAccess</code> assembly cannot reference the <code>DataModel</code> assembly, ensures that the business logic does not get mixed with data access concerns. Inside <code>DataAccess</code> we cannot write any business logic (not even validations), because it does not know about the domain entities, and in the other assemblies we cannot have data access concerns, because they are well encapsulated in the <code>DataAccess</code> assembly.</p>
<p>Now, let’s explore in more detail the code.</p>
<p><code>IRepository</code> and <code>IUnitOfWork</code> are the main interfaces of the public API that the <code>DataAccess</code> library offers. Besides them there are a few more types, but these two define the development patterns for doing data access.</p>
<p><a href="http://github.com/iQuarc/DataAccess/blob/master/src/iQuarc.DataAccess/IRepository.cs?ref=oncodedesign.com"><code>IRepository</code></a> is meant to read data. It is a generic interface that supports queries starting from any entity.</p>
<pre><code class="language-language-csharp">/// &lt;summary&gt;  
/// Generic repository contract for database read operations.  
/// &lt;/summary&gt;  
public interface IRepository  
{  
	/// &lt;summary&gt;  
	/// Gets the entities from the database.  
	/// &lt;/summary&gt;  
	/// &lt;typeparam name=&quot;T&quot;&gt;The type of the entities to be retrieved from the database.&lt;/typeparam&gt;  
	/// &lt;returns&gt;A &lt;see cref=&quot;IQueryable&quot; /&gt; for the entities from the database.&lt;/returns&gt;  
	IQueryable&lt;T&gt; GetEntities&lt;T&gt;() where T : class;
	
	/// &lt;summary&gt;  
	/// Creates a new unit of work.  
	/// &lt;/summary&gt;  
	/// &lt;returns&gt;&lt;/returns&gt;  
	IUnitOfWork CreateUnitOfWork();  
}
</code></pre>
<p>In most cases it can be received through dependency injection into the constructor of a class that needs to deal with data. Because its implementation is light (entities are read-only), its scope may be larger, and its implementation can be disposed when the operation ends (see my <em><a href="https://oncodedesign.com/disposable-instances-series/">Disposable Instances Series</a></em> blog posts on how to handle <code>IDisposable</code> with <code>Dependency Injection</code>).</p>
<p>The examples below show the code patterns for reading data:</p>
<pre><code class="language-language-csharp">private readonly IRepository rep; // injected w/ Dependency Injection

public IEnumerable&lt;Order&gt; GetAllLargeOrders(int amount)
{
	var orders = rep.GetEntities&lt;Order&gt;()
					.Where(o =&gt; o.OrderLines.Any(ol =&gt; ol.Amount &gt; amount));
	return orders.ToList();
}
</code></pre>
<p>Queries may be reused, and returned to be enhanced or composed by the caller code within the same operation scope:</p>
<pre><code class="language-language-csharp">private readonly IRepository rep; // injected w/ Dependency Injection

private IQueryable&lt;Order&gt; GetAllLargeOrders(int amount)
{
	var orders = rep.GetEntities&lt;Order&gt;()
		.Where(o =&gt; o.OrderLines.Any(ol =&gt; ol.Amount &gt; amount));
	return orders;
}

public IEnumerable&lt;OrderSummary&gt; GetRecentLargeOrders(int amount)
{
	int thisYear = DateTime.UtcNow.Year;
	var orders = GetAllLargeOrders(amount)
		.Where(o =&gt; o.Year == thisYear)
		.Select(o =&gt; new OrderSummary {…});

	return orders;
}
</code></pre>
<p>The <a href="https://github.com/iQuarc/DataAccess/blob/master/src/iQuarc.DataAccess/IUnitOfWork.cs?ref=oncodedesign.com"><em>IUnitOfWork</em></a> interface is used to modify data.</p>
<pre><code class="language-language-csharp">/// &lt;summary&gt;  
/// A unit of work that allows to modify and save entities in the database  
/// &lt;/summary&gt;  
public interface IUnitOfWork : IRepository, IDisposable  
{  
	/// &lt;summary&gt;  
	/// Saves the changes that were done on the entities on the current unit of work  
	/// &lt;/summary&gt;  
	void SaveChanges();
	
	/// &lt;summary&gt;  
	/// Saves the changes that were done on the entities on the current unit of work  
	/// &lt;/summary&gt;  
	Task SaveChangesAsync();
	
	/// &lt;summary&gt;  
	/// Adds to the current unit of work a new entity of type T  
	/// &lt;/summary&gt;  
	/// &lt;typeparam name=&quot;T&quot;&gt;Entity type&lt;/typeparam&gt;  
	/// &lt;param name=&quot;entity&quot;&gt;The entity to be added&lt;/param&gt;  
	void Add&lt;T&gt;(T entity) where T : class;
	
	/// &lt;summary&gt;  
	/// Deletes from the current unit of work an entity of type T  
	/// &lt;/summary&gt;  
	/// &lt;typeparam name=&quot;T&quot;&gt;Entity type&lt;/typeparam&gt;  
	/// &lt;param name=&quot;entity&quot;&gt;The entity to be deleted&lt;/param&gt;  
	void Delete&lt;T&gt;(T entity) where T : class;
	
	/// &lt;summary&gt;  
	/// Begins a TransactionScope with specified isolation level  
	/// &lt;/summary&gt;  
	void BeginTransactionScope(SimplifiedIsolationLevel isolationLevel);  
}  
</code></pre>
<p>The pattern here is: read data, modify it and save the changes. These operations should be close to one another, in a short, well-defined scope, like below:</p>
<pre><code class="language-language-csharp">public void ReviewLargeAmountOrders(int amount, ReviewData data)
{
	using (IUnitOfWork uof = rep.CreateUnitOfWork())
	{
		IQueryable&lt;Order&gt; orders = uof.GetEntities&lt;Order&gt;()
			.Where(o =&gt; o.OrderLines.Any(ol =&gt; ol.Amount &gt; amount));

		foreach (var order in orders)
		{
			order.Status = Status.Reviewed;
			order.Customer.Name = data.CustomerNameUpdate;
			…
		}

		ReviewEvent re = new ReviewEvent {…};
		uof.Add(re);
		uof.SaveChanges();
	}
}
</code></pre>
<p>We want the <code>IUnitOfWork</code> to always be used inside a using statement. To <s>enforce</s> encourage this, we do not register its implementation in the Dependency Injection container; instead, we provide a factory. The factory may be the <code>IRepository</code> itself. We prefer this over forcing a class that needs data to take an extra dependency on an <code>IUnitOfWorkFactory</code>.</p>
<p>Another thing to notice here is that the <code>IUnitOfWork</code> inherits from <code>IRepository</code>. This gives two advantages:</p>
<ul>
<li>queries defined for the repository may be reused on an <code>IUnitOfWork</code> instance to get data for changing it</li>
<li>the <code>IUnitOfWork</code> instance may be passed as an <code>IRepository</code> parameter to code that should only read data in the same editing context</li>
</ul>
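<p>As a sketch of the second advantage (the <code>Order</code> entity and the <code>Status</code> values are illustrative assumptions): a query helper written against <code>IRepository</code> can be reused, unchanged, inside an editing scope, because the <code>IUnitOfWork</code> instance is also an <code>IRepository</code>:</p>
<pre><code class="language-language-csharp">// illustrative query helper that only needs read access
private static IQueryable&lt;Order&gt; PendingOrders(IRepository repository)
{
	return repository.GetEntities&lt;Order&gt;().Where(o =&gt; o.Status == Status.Pending);
}

public void ApproveAllPendingOrders()
{
	using (IUnitOfWork uow = rep.CreateUnitOfWork())
	{
		// the unit of work is passed where only an IRepository is expected,
		// so the entities are loaded in the editing context
		foreach (Order order in PendingOrders(uow))
			order.Status = Status.Approved;

		uow.SaveChanges();
	}
}
</code></pre>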
<p>The classes that implement these two interfaces are not part of the public API of the <code>DataAccess</code>. Code from the layers above should not use them. To enforce this they may be made internal, which I do when the <code>DataAccess</code> is a project in my VS solution. On the other hand, if it is a reusable library this may be too restrictive and may not work with some Dependency Injection frameworks. Leaving them public requires some development discipline to make sure they are not newed up in client code.</p>
<p>Another important part of the public API is the <a href="http://github.com/iQuarc/DataAccess/blob/master/src/iQuarc.DataAccess/IEntityInterceptor.cs?ref=oncodedesign.com"><code>IEntityInterceptor</code></a> interfaces. They provide the extensibility to run custom logic at specific moments when entities are loaded, saved or deleted.</p>
<pre><code class="language-language-csharp">/// &lt;summary&gt;  
/// Defines a global entity interceptor.  
/// Any implementation registered into the Service Locator container with this interface as contract will be applied to  
/// all entities of any type  
/// &lt;/summary&gt;  
public interface IEntityInterceptor  
{  
	/// &lt;summary&gt;  
	/// Logic to execute after the entity was read from the database  
	/// &lt;/summary&gt;  
	/// &lt;param name=&quot;entry&quot;&gt;The entry that was read&lt;/param&gt;  
	/// &lt;param name=&quot;repository&quot;&gt;A reference to the repository that read this entry. It may be used to read additional data.&lt;/param&gt;  
	void OnLoad(IEntityEntry entry, IRepository repository);
	
	/// &lt;summary&gt;  
	/// Logic to execute before the entity is written into the database. This runs in the same transaction with the Save  
	/// operation.  
	/// This applies to Add, Update or Insert operations  
	/// &lt;/summary&gt;  
	/// &lt;param name=&quot;entry&quot;&gt;The entity being saved&lt;/param&gt;  
	/// &lt;param name=&quot;repository&quot;&gt;A reference to the repository that read this entry. It may be used to read additional data.&lt;/param&gt;  
	void OnSave(IEntityEntry entry, IRepository repository);
	
	/// &lt;summary&gt;  
	/// Logic to execute before the entity is deleted from the database. This runs in the same transaction with the Save  
	/// operation.  
	/// &lt;/summary&gt;  
	/// &lt;param name=&quot;entry&quot;&gt;The entity being deleted&lt;/param&gt;  
	/// &lt;param name=&quot;repository&quot;&gt;A reference to the repository that read this entry. It may be used to read additional data.&lt;/param&gt;  
	void OnDelete(IEntityEntry entry, IRepository repository);  
}  
</code></pre>
<p>There are global interceptors, which are triggered for all entities of any type, and specific entity interceptors, which are triggered for all entities of one specific type. Interceptors are a good place to implement data consistency validations or data auditing. Their implementations belong to the layers above and are a key element in keeping business logic out of the data access assembly.</p>
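<p>As a minimal sketch (assuming <code>IEntityEntry</code> exposes the wrapped entity through an <code>Entity</code> property, and with a hypothetical <code>IAuditable</code> interface defined in the layers above), an auditing interceptor could look like this:</p>
<pre><code class="language-language-csharp">// Hypothetical audit interceptor. It is registered into the container
// with IEntityInterceptor as the contract, so it runs for all entities.
public class AuditInterceptor : IEntityInterceptor
{
	public void OnLoad(IEntityEntry entry, IRepository repository)
	{
	}

	public void OnSave(IEntityEntry entry, IRepository repository)
	{
		// stamp audit info just before the entity is written,
		// in the same transaction as the Save operation
		IAuditable auditable = entry.Entity as IAuditable;
		if (auditable != null)
			auditable.LastModifiedAtUtc = DateTime.UtcNow;
	}

	public void OnDelete(IEntityEntry entry, IRepository repository)
	{
	}
}
</code></pre>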
<p>Another element to mention is the custom exception classes. To prevent breaking encapsulation during error handling, the <code>DataAccess</code> implementation wraps all the EF exceptions that need to be passed to the client code into custom exceptions it defines. This abstracts the errors and lets the client code do error handling without depending on EF specifics.</p>
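<p>The wrapping idea can be sketched as below. The EF exception types are real (EF6), but the custom exception names here are illustrative, not necessarily the ones defined in the library:</p>
<pre><code class="language-language-csharp">// inside the DataAccess implementation; context is the EF DbContext,
// which is never exposed to the layers above
public void SaveChanges()
{
	try
	{
		context.SaveChanges();
	}
	catch (DbUpdateConcurrencyException e)
	{
		// illustrative custom exception defined by DataAccess
		throw new ConcurrentUpdateException(&quot;The entity was changed by another user.&quot;, e);
	}
	catch (DbUpdateException e)
	{
		throw new RepositoryViolationException(&quot;The changes could not be saved.&quot;, e);
	}
}
</code></pre>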
<p>You can explore the <code>DataAccess</code> code further on its <a href="http://github.com/iQuarc/DataAccess?ref=oncodedesign.com">GitHub repository</a>. For more examples of how it is used, you can look at the samples from my <a href="https://oncodedesign.com/training-code-desigN/">Code Design training</a>, which are also available in a <a href="http://github.com/iQuarc/Code-Design-Training?ref=oncodedesign.com">GitHub repository</a>.</p>
<h5 id="thisimplementationisdiscussedindetailinmycodedesigntraining">This implementation is discussed in detail in my <a href="https://oncodedesign.com/training-code-design">Code Design Training</a></h5>
<h6 id="featuredimagecreditkirillmvia123rfstockphoto">Featured image credit: <a href="http://www.123rf.com/profile_kirillm?ref=oncodedesign.com">kirillm via 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Disposable Instances Series ]]>
            </title>
            <description>
                <![CDATA[ In the past few weeks I have published a set of four posts that deal with disposable instances. These posts describe in detail a working implementation that automatically disposes all the instances that are no longer needed, in a deterministic way. This solution works when we use Dependency Injection or ]]>
            </description>
            <link>https://oncodedesign.com/blog/disposable-instances-series/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b86</guid>
            <category>
                <![CDATA[ Dependency Injection ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 23 Mar 2015 15:12:59 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/10778348_s.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>In the past few weeks I have published a set of four posts that deal with disposable instances. These posts describe in detail a working implementation that automatically disposes all the instances that are no longer needed, in a deterministic way. This solution works when we use <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com">Dependency Injection</a> or <a href="http://martinfowler.com/articles/injection.html?ref=oncodedesign.com#UsingAServiceLocator">Service Locator</a> to create instances and we want to prevent, by design, leaks of expensive resources.</p>
<p>The posts started from a discussion on how to deal with a repository implementation, which is disposable by nature, and evolved into a broader discussion, because the same challenges and approaches apply to any disposable type.</p>
<p>Even though I didn’t plan them as a series from the beginning, these posts build on each other and describe the design of a component that may be part of the <em>Application Software Infrastructure</em> of any complex app. They include the problem description, the challenges, the different approaches which may be considered and implementation examples.</p>
<p>The posts are:</p>
<ul>
<li><em><a href="https://oncodedesign.com/who-disposes-your-repository/">Who Disposes Your Repository</a></em> – where I describe the context and the challenges of dealing with a disposable <em>Repository</em> and Dependency Injection. In this post I also compare different alternatives of handling the dispose in a repository implementation</li>
<li><em><a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-1st-approach/">Extending Unity Container for IDisposable Instances (1st approach)</a></em> – here I detail the challenges of achieving automatic dispose with <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity Container</a> and I describe one implementation approach with its pros and cons</li>
<li><em><a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-2nd-approach/">Extending Unity Container for IDisposable Instances (2nd approach)</a></em> – here I continue the post above with another solution for extending Unity with this behavior. This implementation overcomes the shortcomings of the previous one, but raises some design concerns</li>
<li><em><a href="https://oncodedesign.com/disposing-instances-when-using-inversion-of-control/">Disposing Instances when Using Inversion of Control</a></em> – here I complete the solution by addressing the question of when the disposables are going to be disposed. I give a solution for defining a scope for any operation in any kind of app, similar to the <em>Request</em> in a web app</li>
</ul>
<h5 id="manysolutionsanddiscussionsliketheaboveareincludedinmycodedesigntraining">many solutions and discussions like the above are included in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<h6 id="imagebyeteimaging123rfstockphoto">Image by: <a href="http://www.123rf.com/profile_eteimaging?ref=oncodedesign.com"> eteimaging / 123RF Stock Photo</a></h6>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Disposing Instances when Using Inversion of Control ]]>
            </title>
            <description>
                <![CDATA[ In the last few posts I have written about how to deal with IDisposable instances when using Dependency Injection. In the Who Disposes Your Repository I talk about the possibilities and challenges of disposing a repository which is injected. Then in the Extending Unity Container for IDisposable Instances (part 1 ]]>
            </description>
            <link>https://oncodedesign.com/blog/disposing-instances-when-using-inversion-of-control/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b88</guid>
            <category>
                <![CDATA[ Dependency Injection ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 09 Mar 2015 16:54:40 +0200</pubDate>
            <media:content url="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/6482702_s-requests.jpg" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>In the last few posts I have written about how to deal with <code>IDisposable</code> instances when using <a href="http://martinfowler.com/articles/injection.html?ref=oncodedesign.com#FormsOfDependencyInjection">Dependency Injection</a>. In the <em><a href="https://oncodedesign.com/who-disposes-your-repository/">Who Disposes Your Repository</a></em> I talk about the possibilities and challenges of disposing a repository which is injected. Then in the <em>Extending Unity Container for IDisposable Instances</em> (<a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-1st-approach/">part 1</a> and <a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-2nd-approach/">part 2</a>) I show how automatic dispose of all <code>IDisposable</code> instances can be achieved with <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity Container</a>. This post completes the solution by detailing when the <a href="http://msdn.microsoft.com/en-us/library/ff660895(v=pandp.20).aspx?ref=oncodedesign.com#container_scope">Container Hierarchies</a> (aka <em>Scoped Containers</em>) are built and how they work with a <a href="http://martinfowler.com/articles/injection.html?ref=oncodedesign.com#UsingAServiceLocator">Service Locator</a>.</p>
<p>What we want to achieve is that all the <code>IDisposable</code> instances are disposed when they are no longer needed. More specifically, we want all the <code>IDisposable</code> instances created within a defined scope to be disposed when that scope ends. In C# we have language support for this. We can use the using statement like this:</p>
<pre><code class="language-language-csharp">using (IDisposable o = new DisposableClass())  
{ // scope begins

	// do stuff with o within this scope

} // scope ends  
</code></pre>
<p>Inside the braces of this using statement we can use the <code>IDisposable</code> instance as we want, and we are assured that it will be disposed when the scope ends.</p>
<p>We can use the using statement only if:</p>
<ol>
<li>the begin and the end of the scope is in the control of our code (we write the code that defines the scope)</li>
<li>the creation of the <em>IDisposable</em> instance is in the control of our code (we write the code that calls its constructor)</li>
</ol>
<p>By doing <a href="http://martinfowler.com/bliki/InversionOfControl.html?ref=oncodedesign.com">Inversion of Control</a> we give the control of the above to the frameworks we are using.</p>
<p>When we are using Dependency Injection, we no longer call the constructors. A framework (Unity Container, for example) calls them for us. However, we can still dispose instances if we put the container itself in the using statement:</p>
<pre><code class="language-language-csharp">using (var scopeContainer = mainContainer.CreateChild())  
{ // scope begins

	// do stuff with o within this scope  
	// all the instances created within this scope are created by the scopeContainer
	
} // scope ends  
</code></pre>
<p>Essentially, we create one container for each scope and dispose it when the scope ends. When the scoped container is disposed, it disposes all the <code>IDisposable</code> instances it created (the previous posts show how this can be done with Unity Container). This is how the idea of using <em>Container Hierarchies</em> for disposing instances came about. If we leave the control of building instances to the framework, we also expect the framework to dispose the instances it created. We still need to make sure that when the scope begins a new container is associated with it, and that within that scope all the instances are built with the associated container.</p>
<p>When we are in a web application, we are not in control of defining the scope either (<em>Inversion of Control</em> again). We might want this scope to coincide with handling a web request or with a web session. For example, we would want to nest all the code that handles a request within a using statement, as we did above. Something quite similar would be: when the request begins, create the scoped container; then keep it on the request so all the code running during that request can use it to get new instances; and when the request ends, dispose it. Again, if the framework is in control of defining the scope we are interested in, the framework should give us hooks to run code when the scope begins or ends, and give us some object that represents a context for this scope, so we can keep a reference to the scoped container on it.</p>
<p>Most web frameworks give such hooks. We are signaled when a request or session begins or ends. There is also an easily accessible object which represents the current request or session and on which we can store context information. If we are using ASP.NET MVC, which is designed with Dependency Injection in mind, we can get this done quite easily. Below is a <a href="https://github.com/devtrends/Unity.Mvc5/blob/master/Unity.Mvc5/UnityDependencyResolver.cs?ref=oncodedesign.com">code snippet</a> from the <a href="http://www.devtrends.co.uk/?ref=oncodedesign.com">DevTrends</a> github <a href="https://github.com/devtrends?ref=oncodedesign.com">repository</a>, which contains small projects that integrate Unity Container with <a href="https://github.com/devtrends/Unity.Mvc5?ref=oncodedesign.com">ASP.NET MVC</a> and <a href="https://github.com/devtrends/Unity.WebAPI?ref=oncodedesign.com">ASP.NET Web API</a>.</p>
<pre><code class="language-language-csharp">public class UnityDependencyResolver : IDependencyResolver  
{  
	private readonly IUnityContainer _container;
	public UnityDependencyResolver(IUnityContainer container)  
	{  
		_container = container;  
	}
	
	public object GetService(Type serviceType)  
	{  
		if (typeof(IController).IsAssignableFrom(serviceType))  
		{  
			return ChildContainer.Resolve(serviceType);  
		}  
		
		return IsRegistered(serviceType) ? ChildContainer.Resolve(serviceType) : null;  
	}
	
	protected IUnityContainer ChildContainer  
	{  
		get  
		{  
			var scopeContainer = HttpContext.Current.Items[HttpContextKey] as IUnityContainer;
			if (scopeContainer == null)  
				HttpContext.Current.Items[HttpContextKey] = scopeContainer = _container.CreateChildContainer();
				
			return scopeContainer;  
		}  
	}  
	...  
}    
</code></pre>
<p>As you can see, the child container is created the first time the framework needs it and is then stored on the request object.</p>
<p>When the <em>Service Locator</em> is used, it is important that the scoped container is called whenever a new instance is needed. If the container is used directly, all the code that needs to request a new instance has to be able to go to the current request object, obtain the reference to the scoped container and ask it for the instance. In an ASP.NET app this is easier because we can use <code>DependencyResolver.Current</code>, which implements the <em>Service Locator</em> pattern and which, with the above integration code, will go to the Unity Container stored on the current request. If we are using another implementation which wraps a dependency container, as the <a href="http://commonservicelocator.codeplex.com/?ref=oncodedesign.com">Common Service Locator</a> does, you will need to set it up so that it uses the current container. The snippet below shows an example for the Common Service Locator.</p>
<pre><code class="language-language-csharp">...  
private void ConfigureDependencyContainer()  
{  
 	Microsoft.Practices.ServiceLocation.ServiceLocator.SetLocatorProvider(() =&gt;  
	{  
 		var scopeContainer = HttpContext.Current.Items[HttpContextKey] as IUnityContainer;

 		if (scopeContainer == null)  
 			HttpContext.Current.Items[HttpContextKey] = scopeContainer = _container.CreateChildContainer();
 	
 		return scopeContainer;  
 	});  
 }  
 ...  
</code></pre>
<p>All of the above works well when we are in a web application, but how can we do the same in a context where we do not have a request object (given by a framework) on which we can keep the scoped container? What if we want to scope our context to a custom operation? How can we make sure that whenever <code>ServiceLocator.Current</code> is called, from any class or function on any thread, it wraps the current scoped container if the calling code is within an operation, or goes to the main container if it is outside of any operation? Examples of such applications are:</p>
<ul>
<li>a Windows service which listens on a TCP/IP socket and concurrently handles all the commands that come in on the socket. The custom operation would be all the code that handles such a command.</li>
<li>a console application which executes commands received through command-line parameters. Here each command implementation would be the custom operation to which we would like to scope a child container.</li>
<li>a desktop application, where we would like all the code behind a screen to be part of an operation, so that we can dispose all the instances used while that screen was open.</li>
</ul>
<p>In all these cases we can create the scoped container and put it in a using statement that nests all the code within that operation. The difficulty comes in associating the scoped container with the operation: where do we keep it so that all the code that runs within the operation uses it to build new instances? We need an object (like the request in a web app) which can keep this reference. That object should be available (through a static call) to any class or function on any thread within the operation. In short, <code>ServiceLocator.Current</code> needs to wrap the scoped container of the current operation.</p>
<p>We can implement this by creating an <code>OperationContext</code> class, which represents the context information of a custom operation. When it is built, it creates and stores a scoped container and makes it available through a static getter. Here is a code snippet of this class:</p>
<pre><code class="language-language-csharp">public class OperationContext : IDisposable  
{  
 private readonly IDependencyContainer container;  
 private IDictionary items;

 private OperationContext()  
 {  
 	container = ContextManager.GlobalContainer.CreateChildContainer();  
 }

 public IServiceLocator ServiceLocator  
 {  
 	get { return container.AsServiceLocator; }  
 }

 public IDictionary Items  
 {  
 	get  
 	{  
 		if (items == null)  
 			items = new Hashtable();  
 		return items;  
 	}  
 }

 public void Dispose()  
 {  
 	DisposeItems();

 	IDisposable c = container as IDisposable;  
 	if (c != null)  
 		c.Dispose();  
 }

 public static OperationContext Current  
 {  
 	get { return ContextManager.Current; }  
 }

 public static OperationContext CreateNew()  
 {  
 	OperationContext operationContext = new OperationContext();  
 	ContextManager.SwitchContext(operationContext);  
 	return operationContext;  
 }  
 ...  
 }  
</code></pre>
<p>Our code that defines the scope creates a new <code>OperationContext</code> when the operation starts and disposes it when the scope ends. We can do this with a using statement. <code>OperationContext.Current</code> gives access to it. It can be called from any class or function on any thread, and it gives the current operation. <code>ServiceLocator.Current</code> can wrap <code>OperationContext.Current.ServiceLocator</code>, and the existing code that we nest within this using statement doesn’t need to be modified. This class makes sure that the current operation context information is thread static, but it is passed to new threads created within the current operation. It also ensures that when the operation ends, all the disposables it holds (including the dependency injection container) are disposed.</p>
<p>The <code>OperationContext</code> class implementation is inspired by the <a href="https://msdn.microsoft.com/en-us/library/system.web.httpcontext?ref=oncodedesign.com"><code>HttpContext</code></a> class. It uses a <code>ContextManager</code> static class to manage access to the storage of the context. The context store is abstracted by the <code>IContextStore</code> interface. Its implementation has to provide thread-static storage for multiple operations that may exist simultaneously. When we are in a console application or in a Windows service, its implementation is based on the <a href="https://msdn.microsoft.com/en-us/library/system.runtime.remoting.messaging.callcontext?ref=oncodedesign.com"><code>CallContext</code></a> class. This assures that the context is passed along on any function call, and also to new threads which may be created from the current one.</p>
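<p>A minimal sketch of such an implementation over <code>CallContext</code>, assuming the <code>IContextStore</code> interface with the <code>GetContext</code>/<code>SetContext</code> methods shown later in this post, could be:</p>
<pre><code class="language-language-csharp">// Sketch of an IContextStore based on CallContext (full .NET Framework).
// LogicalSetData flows the stored value along function calls and into
// threads created from the current one.
public class CallContextStore : IContextStore
{
	public object GetContext(string key)
	{
		return CallContext.LogicalGetData(key);
	}

	public void SetContext(object context, string key)
	{
		CallContext.LogicalSetData(key, context);
	}
}
</code></pre>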
<p>Having this, we can now define custom operations in any application in an easy way:</p>
<pre><code class="language-language-csharp"> using(OperationContext.CreateNew())  
 { //scope begins

 	// code that implements the operation  
 	// ServiceLocator.Current wraps the scoped container created for this operation.

} // scope ends. OperationContext and all its content are disposed  
</code></pre>
<p>The <code>OperationContext</code> is an abstract solution for giving context information to any operation, regardless of the type of application. When used in a web application, the <code>IContextStore</code> may be implemented over <code>HttpContext.Current</code>, and ASP.NET remains in control of managing the context of our operation (the web request).</p>
<pre><code class="language-language-csharp">// IContextStore implementation for an ASP.NET app  
public class HttpRequestContextStore : IContextStore  
{  
	public object GetContext(string key)  
	{  
		return HttpContext.Current.Items[key];  
	}
	
	public void SetContext(object context, string key)  
	{  
		HttpContext.Current.Items[key] = context;  
	}  
}

// setup at app startup (Global.asax.cs)  
public class Global : HttpApplication  
{  
	protected void Application_Start()  
	{  
		...  
		ContextManager.SetContextStore(new HttpRequestContextStore());  
	}  
}

// bind OperationContext with a web request  
class RequestLifetimeHttpModule : IHttpModule  
{  
	public void Init(HttpApplication context)  
	{  
		context.BeginRequest += (sender, args) =&gt; OnBeginRequest();  
		context.EndRequest += (sender, e) =&gt; OnEndRequest();  
	}

	private void OnBeginRequest()  
	{  
		OperationContext.CreateNew();  
	}

	private void OnEndRequest()  
	{  
		OperationContext.Current.Dispose();  
	}  
}  
</code></pre>
<p>An integrated implementation of the <code>OperationContext</code> class can be found in the <a href="https://github.com/iQuarc?ref=oncodedesign.com">iQuarc GitHub</a> repository, in the <a href="https://github.com/iQuarc/AppBoot?ref=oncodedesign.com">AppBoot library</a>. The <a href="https://github.com/iQuarc/AppBoot/blob/feature/webapi/AppBoot/iQuarc.AppBoot/OperationContext.cs?ref=oncodedesign.com"><code>OperationContext</code></a> there can be used in the same way in any .NET app. An isolated sample of the <code>OperationContext</code> implementation code can be downloaded here:</p>
<p><a href="https://onedrive.live.com/embed?cid=90D40A51822669DB&resid=90D40A51822669DB%21427&authkey=AAeZGR4NhKTuSOA&ref=oncodedesign.com"><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2016/04/zip.jpeg" alt="" loading="lazy"></a></p>
<p>To summarise, the <code>OperationContext</code> class gives us the means to easily achieve what we wanted: all the <code>IDisposable</code> instances created within one defined scope are disposed when the operation ends. It does this by using scoped dependency containers which are bound to the defined scope and are disposed when the scope ends. It also gives us an abstract way to easily define such a scope when our code is in control, or to bind it to one created and managed by a framework.</p>
<h5 id="manyexamplesanddiscussionsliketheaboveareincludedinmycodedesigntraining">Many examples and discussions like the above are included in my <a href="https://oncodedesign.com/training-code-design/">Code Design Training</a></h5>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ E-mail Course: Creating a Blog ]]>
            </title>
            <description>
<![CDATA[ I have enrolled in John Sonmez’s free e-mail course titled “How to Create a Blog That Boosts Your Career”.


I’ve joined this because I wanted to focus more on improving my blog, and because I care about John’s advice on how to do this. I was already ]]>
            </description>
            <link>https://oncodedesign.com/blog/email-course-creating-a-blog/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b84</guid>
            <category>
                <![CDATA[ blogging ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 19 Feb 2015 10:51:46 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>I have enrolled in <a href="http://simpleprogrammer.com/?ref=oncodedesign.com">John Sonmez</a>’s free e-mail course titled “<a href="http://devcareerboost.com/blog-course/?ref=oncodedesign.com">How to Create a Blog That Boosts Your Career</a>”.</p>
<p>I’ve joined this because I wanted to focus more on improving my blog, and because I care about John’s advice on how to do this. I was already acting on some of the advice he gave when talking about how developers can promote themselves or improve their careers. I’ve picked these up from his blog (<a href="http://simpleprogrammer.com/?ref=oncodedesign.com">simpleprogrammer.com</a>), from his appearances on <a href="http://dotnetrocks.com/?ref=oncodedesign.com">.NET Rocks!</a>, and from other materials I get by following him on <a href="https://twitter.com/jsonmez?ref=oncodedesign.com">twitter</a>.</p>
<p>I was also curious how an e-mail course would go. Each lesson is an e-mail that we (the trainees) receive twice a week: one on Monday and one on Thursday. The lessons are quite short, a few minutes to read. Each speaks about a key aspect to think about when building a blog. I see it as developing a set of habits that can lead you towards a fulfilling blog. Each lesson also has a short homework, which in most cases we should e-mail back. The nice reward I’ve gotten by doing my homework is the feedback I’ve received from John. So far, he has replied to each of my homework e-mails with some extra advice specific to my context. It makes me feel that this is not just a set of articles sent by e-mail, but an actual course with interaction with the trainer over e-mail. I think this is one of the key factors of success for a course run by e-mail.</p>
<p>One of the things it made me think hard about was how to focus my posts on a topic. What is the theme of my blog? Conclusion: Quality Code Design. The key reason I started a blog in the first place was to share from my experiences. I was lucky enough to be part of many software development projects, in different roles. I’ve been dealing with difficult contexts, both from a technical and a corporate politics perspective. There are many lessons I’ve learned along the way. One of the beliefs I’ve formed is that the way we organize our code, the way we design the structures in which we place the code, the way we do code design, matters a lot in achieving the flexibility needed to accommodate changes in the success factors of a project (requirements, time and budget). I consider a code design to be of good quality if it stands the test of time; if it can be changed at low cost. Therefore, the conclusion I’ve reached is that I will try to focus my writing in this area. Not as stories from the projects I did in the past, but rather experiences and advice that I pick up from my current work: lessons I find worth sharing from the training and coaching I do in different companies, from the consulting and development I do as part of <a href="http://www.iquarc.com/?ref=oncodedesign.com">iQuarc</a> and from the work I do as a Software Architect at <a href="http://www.isdc.eu/?ref=oncodedesign.com">ISDC</a>. And all focused on achieving a quality code design.</p>
<p>At this point the course has reached its second half. I can’t say I have found many new things so far. However, it outlined some important aspects and made me think harder about them. The blog theme is one example. Another great benefit for me is that I get strong confirmation of the practices I was already following. For me this is good encouragement that I am on a good path, especially when it comes from someone who is successful in this.</p>
<p>In the lessons to come I hope to get some guidance on aspects like:</p>
<ul>
<li>How to place links in a post. Should all the words that may point to more details have a link?</li>
<li>How to find and choose good images for the posts. Should all posts be accompanied by a featured image?</li>
<li>How long should an article be? When is it better to finish a subject in one post, and when to split it over more posts?</li>
<li>When is it better to put source code into the post, and when is it better to give it as a downloadable zip?</li>
<li>What are the pros and cons of hosting your blog under a company domain (<em>blog.iquarc.com/florin</em>) versus coming up with a name for the blog and hosting it on its own domain (<em>blogname.com</em>)?</li>
</ul>
<p>For me it was a good choice to enroll. It cost me just some time, and I got good advice and strong confirmation in return. A great deal. If you want to do the course, I think you can still enroll (for a future session, maybe), or you could probably get the lessons and go through them at your own pace.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Extending Unity Container for IDisposable Instances (2nd approach) ]]>
            </title>
            <description>
<![CDATA[ In my previous blog post I detailed an approach of making the Unity Dependency Injection Container automatically call Dispose() on all the IDisposable instances it builds and injects. The implementation described there makes use of custom lifetime managers and it works fine for most of the cases except ]]>
            </description>
            <link>https://oncodedesign.com/blog/extending-unity-container-for-idisposable-instances-2nd-approach/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b83</guid>
            <category>
                <![CDATA[ Dependency Injection ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 17 Feb 2015 10:55:13 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>In my previous <a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-1st-approach">blog post</a> I detailed an approach of making the <em>Unity Dependency Injection Container</em> automatically call <code>Dispose()</code> on all the <code>IDisposable</code> instances it builds and injects. The implementation described there makes use of custom lifetime managers, and it works fine for most cases, except for the <code>PerResolveLifetimeManager</code>. Unity was extended with the <em>PerResolve</em> functionality in a tricky way, which makes my previous IDisposable implementation not work for it. Therefore, I’ve come up with another solution which works in all cases, and I will go into detail about it here. Both of these blog posts are a continuation of the <a href="https://oncodedesign.com/who-disposes-your-repository">Who Disposes Your Repository</a> post I wrote a while ago, where I describe a context in which such an automatic disposing mechanism is desired.</p>
<p>For both implementations, we consider that a child container is created and associated with each new scoped operation. A scoped operation may be a request or a session in a web application, or a window or a view in a desktop one. During that operation, the child container is used to inject the dependencies. This is also known as using <em>Container Hierarchies</em> or <em>Scoped Containers</em>. When the operation ends, it disposes the child container, and what we want is for this to also call <code>Dispose()</code> on all the <code>IDisposable</code> instances that were created within that operation.</p>
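<p>A minimal sketch of this pattern (the <code>IService</code> name and its registration are illustrative placeholders, not code from the samples below):</p>
<pre><code class="language-language-csharp">// a child container marks the scope of one operation
using (IUnityContainer childContainer = container.CreateChildContainer())
{
	// all dependencies injected during the operation come from the child
	IService service = childContainer.Resolve&lt;IService&gt;();
	service.Execute();
} // operation ends; childContainer.Dispose() should cascade to the IDisposables
</code></pre>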
<h3 id="coresolution">Core Solution</h3>
<p>In the previous implementation, we were recording all the <code>IDisposable</code> instances that were created on the lifetime manager instance associated with a type registration. Because this does not work for the PerResolve case (where new <code>PerResolveLifetimeManager</code> instances are constructed during build), we need to come up with another place to record these instances. This new place also needs to be signaled when the owning container gets disposed, so we can dispose them.</p>
<p>The next thing to look at is a custom Unity builder strategy. It is one of the most powerful mechanisms for extending the build mechanism in Unity. To create one, we inherit from the <code>BuilderStrategy</code> base class and override one of its methods to add the code we want executed when any new instance gets built or torn down. Here is the code of the base class:</p>
<pre><code class="language-language-csharp">// Represents a strategy in the chain of responsibility. 
// Strategies are required to support both BuildUp and TearDown. 
public abstract class BuilderStrategy : IBuilderStrategy 
{ 
	// Called during the chain of responsibility for a build operation. The 
	// PreBuildUp method is called when the chain is being executed in the 
	// forward direction. 
	public virtual void PreBuildUp(IBuilderContext context) 
	{ 
	}
	
	// Called during the chain of responsibility for a build operation. The 
	// PostBuildUp method is called when the chain has finished the PreBuildUp 
	// phase and executes in reverse order from the PreBuildUp calls. 
	public virtual void PostBuildUp(IBuilderContext context) 
	{ 
	}
	
	// Called during the chain of responsibility for a teardown operation. The 
	// PreTearDown method is called when the chain is being executed in the 
	// forward direction.
	public virtual void PreTearDown(IBuilderContext context) 
	{ 
	}
	
	// Called during the chain of responsibility for a teardown operation. The 
	// PostTearDown method is called when the chain has finished the PreTearDown 
	// phase and executes in reverse order from the PreTearDown calls.  
	public virtual void PostTearDown(IBuilderContext context) 
	{ 
	} 
} 
</code></pre>
<p>For our case, we can override <code>PostBuildUp()</code> and record all the <code>IDisposable</code> instances that get constructed. In comparison with the previous implementation, here we keep on the strategy object the <code>IDisposable</code> instances of all types (any type that implements <code>IDisposable</code>), whereas there, on the lifetime manager, we were keeping references only to the instances of the type registered with that lifetime manager object.</p>
<p>The next step is to trigger the dispose down from the container to the recorded <code>IDisposable</code> instances. In its <code>Dispose()</code>, the container disposes two things: the lifetime managers and the extensions. See below:</p>
<pre><code class="language-language-csharp">protected virtual void Dispose(bool disposing) 
{ 
	if (disposing) 
	{ 
		if (lifetimeContainer != null) 
		{ 
			lifetimeContainer.Dispose(); 
			lifetimeContainer = null;
	
			if (parent != null &amp;&amp; parent.lifetimeContainer != null) 
			{ 
				parent.lifetimeContainer.Remove(this); 
			} 
		}

		// this will trigger the Dispose() into our strategy object 
		extensions.OfType&lt;IDisposable&gt;().ForEach(ex =&gt; ex.Dispose()); 
		extensions.Clear(); 
	} 
} 
</code></pre>
<p>The extensions’ dispose is our trigger point. We hold the <code>IDisposable</code> instances on the builder strategy object, not on the extension. To have them disposed, we can make the strategy object implement <code>IDisposable</code> and dispose all the instances there. Then, going upwards, we can also make the custom extension that adds the strategy to the build strategy chain <code>IDisposable</code>. It keeps a reference to the strategy, and when it gets disposed by the container, it disposes the strategy, which in its turn disposes all the <code>IDisposable</code> instances that it recorded in <code>PostBuildUp()</code>. When we put all the things together, the code is like below:</p>
<pre><code class="language-language-csharp">public class DisposeExtension : UnityContainerExtension, IDisposable 
{ 
	private DisposeStrategy strategy = new DisposeStrategy();

	protected override void Initialize() 
	{ 
		Context.Strategies.Add(strategy, UnityBuildStage.TypeMapping); 
	}
	
	public void Dispose() 
	{ 
		strategy.Dispose(); 
		strategy = null; 
	}
	
	class DisposeStrategy : BuilderStrategy, IDisposable 
	{ 
		private DisposablesObjectList disposables = new DisposablesObjectList();
	
		public override void PostBuildUp(IBuilderContext context) 
		{ 
			if (context != null) 
			{ 
				IDisposable instance = context.Existing as IDisposable; 
				if (instance != null) 
				{ 
					disposables.Add(instance); 
				} 
			}
		
			base.PostBuildUp(context); 
		} 
		… 
		public void Dispose() 
		{ 
			disposables.Dispose(); 
			disposables = null; 
		} 
	} 
} 
</code></pre>
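<p>The <code>DisposablesObjectList</code> used above is not shown in this post. A minimal sketch of what it could look like, assuming it tracks instances through <code>WeakReference</code> (as discussed below) so that recording them does not keep them alive:</p>
<pre><code class="language-language-csharp">// Hypothetical sketch of DisposablesObjectList: records IDisposable
// instances as weak references and disposes the ones still alive.
class DisposablesObjectList : IDisposable
{
	private readonly List&lt;WeakReference&gt; items = new List&lt;WeakReference&gt;();

	public void Add(IDisposable instance)
	{
		items.Add(new WeakReference(instance));
	}

	public void Dispose()
	{
		foreach (WeakReference reference in items)
		{
			IDisposable disposable = reference.Target as IDisposable;
			if (disposable != null)
				disposable.Dispose();
		}
		items.Clear();
	}
}
</code></pre>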
<p>Having this done, if we add the extension to the child container after it is created, things work: all the <code>IDisposable</code> instances get disposed when the child container is disposed. The code snippet below shows the usage:</p>
<pre><code class="language-language-csharp">
// code in the application start-up area 
UnityContainer container = new UnityContainer(); 
DisposeExtension disposeExtension = new DisposeExtension(); 
container.AddExtension(disposeExtension);

// (generic type arguments below were lost in formatting; service names are illustrative) 
container.RegisterType&lt;IService1, Service1&gt;(new PerResolveLifetimeManager()); 
container.RegisterType&lt;IService2, Service2&gt;(new PerResolveLifetimeManager());

// some instance created out of the scoped operation 
outerScopeSrv = container.Resolve&lt;IService1&gt;();

// code in the area that marks and new operation. It creates a child container 
using (IUnityContainer childContainer = container.CreateChildContainer()) 
{ 
	var childDisposeExt = new DisposeExtension(); 
	childContainer.AddExtension(childDisposeExt);
	
	// instances built within the operation 
	scopedSrv1 = childContainer.Resolve&lt;IService1&gt;(); 
	scopedSrv2 = childContainer.Resolve&lt;IService2&gt;(); 
	…
} // end of the operation -&gt; childContainer.Dispose()

AssertIsDisposed(scopedSrv1); // -&gt; Pass 
AssertIsDisposed(scopedSrv2); // -&gt; Pass 
AssertNotDisposed(outerScopeSrv); // -&gt; Pass

// at application end 
container.Dispose();

AssertIsDisposed(outerScopeSrv); // -&gt; Pass
</code></pre>
<p>However, there are two more aspects that need to be taken care of.</p>
<h3 id="issues">Issues</h3>
<p>The first is about singletons. What happens when a singleton which is <code>IDisposable</code> gets injected through the child container? As things stand, it will be disposed when the first child container that used it gets disposed. From then on, all the other parts of the application will use the already disposed singleton instance. This is certainly behavior we wouldn’t want. If a singleton is <code>IDisposable</code> (it’s another discussion why we would do that in the first place), it should not be disposed by the first operation that uses it. The code snippet below shows this case.</p>
<pre><code class="language-language-csharp">UnityContainer container = NewContainer();

// register as singleton (generic type arguments were lost in formatting) 
container.RegisterType&lt;IService1, Service1&gt;(new ContainerControlledLifetimeManager());

IService1 singletonSrv = container.Resolve&lt;IService1&gt;(); 
IService1 scopedSrv;

using (IUnityContainer childContainer = CreateChildContainer(container)) 
{ 
	scopedSrv = childContainer.Resolve&lt;IService1&gt;(); 
	Assert.AreSame(singletonSrv, scopedSrv); 
}

AssertNotDisposed(singletonSrv); // -&gt; Fail

</code></pre>
<p>To fix this, inside the strategy we need to verify whether the instance being built is a singleton. If it is, it should not be recorded for dispose in the strategy that belongs to the child container. To do this check, we look at whether the lifetime manager is a <code>ContainerControlledLifetimeManager</code> and whether it comes from the parent container. Below is the code that does it:</p>
<pre><code class="language-language-csharp">… 
private bool IsControlledByParent(IBuilderContext context) 
{ 
	IPolicyList lifetimePolicySource; 
	ILifetimePolicy activeLifetime = context.PersistentPolicies 
										.Get&lt;ILifetimePolicy&gt;(context.BuildKey, out lifetimePolicySource);
	
	return activeLifetime is ContainerControlledLifetimeManager 
						&amp;&amp; !ReferenceEquals(lifetimePolicySource, context.PersistentPolicies); 
} 
</code></pre>
<p>This code already starts to make our extension a bit ugly. It hardcodes that singletons are registered with this particular lifetime manager type: if someone creates a new custom lifetime manager to implement singletons with some additional behavior, our extension may not work. Another ugly aspect is that, each time an instance is built, we need to search for the lifetime manager recursively upwards. However, I don’t see a better way to make this work for this case.</p>
<p>The second thing that needs to be fixed is more difficult to observe. The issue appears when we add this extension both to the parent and to the child container. This may happen either because we want the automatic dispose for the parent container too, or when we have deeper container hierarchies, meaning that we create a child container from another child container because of nested operations (probably not the case in a web app, but it could be the case in a desktop one).</p>
<p>By design, when a child container is created, it inherits (by referencing) all the build strategies of the parent container. In general this is a good idea, because the strategies the parent container was configured with are also used by its children, which brings consistency. For our case, however, it is problematic. When the child container builds an <code>IDisposable</code> instance, that instance will be recorded twice: once in the child container’s <code>DisposeStrategy</code> object, and one more time in the parent’s <code>DisposeStrategy</code> object (we need different <code>DisposeStrategy</code> objects in the parent and in the child, because these objects are where we record the <code>IDisposable</code> instances). Recording the instances built by the child container into the parent container’s strategy object may lead to memory leaks. The child containers are disposed once their operation ends, but the parent container typically lives as long as the application does. Its strategy object will live with it, and it will grow with <code>WeakReference</code> objects for each instance built by all the child containers in all operations. This is bad!</p>
<p>It is also a sign that this approach of implementing the automatic dispose, by keeping the references on a builder strategy, may not be in line with the design of Unity. The strategies were not meant to keep state from build to build; they were rather meant to extend the behavior of building instances, by adding logic that operates on the data from the <code>IBuilderContext</code>. However, because <code>PerResolveLifetimeManager</code> is implemented in a questionable way, now we are working around it with another questionable extension :(.</p>
<p>The fix for this case is not nice. In the <code>PostBuildUp()</code> method we need to determine whether this strategy object belongs to the container that initiated the current build (the child) or is an inherited strategy. If it doesn’t belong to the current container, we should not record the currently built instance. The <code>IBuilderContext</code> we get in <code>PostBuildUp()</code> does not contain any information about the container that builds the instance, so we cannot use a container match to decide whether the currently built instance should be registered in the current builder strategy object. The only way to distinguish between these cases is to rely on the order in which the strategies are put in the strategy chain that is constructed before a build starts. The strategy chain is always created by placing the inherited strategies first, and the specific strategies of the current container at the end. Therefore, if the current strategy object is not the last one of its kind in the strategy chain, it is inherited and we should not record the current instance. We are relying on the ordering done in the <code>StagedStrategyChain</code> class, which implements the <code>IStagedStrategyChain</code> interface. It is not clear whether this ordering is by design, meaning it should be preserved in future versions, or is just an implementation detail. Therefore, this should be an attention point when new versions of Unity are released. The code for this fix is shown in the snippet below:</p>
<pre><code class="language-language-csharp">private bool IsInheritedStrategy(IBuilderContext context) 
{ 
	// unity container puts the parent container strategies before child strategies when it builds the chain 
	IBuilderStrategy lastStrategy = context.Strategies 
									.LastOrDefault(s =&gt; s is DisposeStrategy);

	return !ReferenceEquals(this, lastStrategy); 
}
</code></pre>
<p>In the end, if we put everything together, we get the desired dispose behavior with this approach both for <code>PerResolveLifetimeManager</code> and for <code>TransientLifetimeManager</code>. The entire source code that implements this approach can be downloaded from <a href="https://onedrive.live.com/embed?cid=90D40A51822669DB&resid=90D40A51822669DB%21420&authkey=AMGyr3fC1bbcH38&ref=oncodedesign.com">here</a>.</p>
<h3 id="conclusion">Conclusion</h3>
<p>Both of the approaches I’ve presented, in the previous blog post and in this one, have pluses and minuses. There is a trade-off to be made when picking one over the other. The first one, which uses the lifetime managers to record the <code>IDisposable</code> instances, has a cleaner design. It is in line with the way Unity was meant to be extended for lifetime and dispose management. Its disadvantage is that it does not work with the <code>PerResolveLifetimeManager</code>.</p>
<p>The second approach, which uses a builder strategy to record the <code>IDisposable</code> instances, works with all the lifetime managers, including the <code>PerResolveLifetimeManager</code>. Its disadvantage is that it is an ugly extension: it holds state on a builder strategy and it may rely on current implementation details. This makes it vulnerable to future versions of Unity and to combinations with other extensions.</p>
<p>If I were to choose, I would use the first one if I did not need the <code>PerResolveLifetimeManager</code>. If I needed it, I would fall back to the second one and carefully test that it works with the version of Unity I am using and with the other extensions that I need. The good part is that switching from one implementation to the other is done by changing the way the container is configured. Usually this code is separated from the rest of the application, so doing this switch should require little if any change in the code that implements the use cases. Therefore, it is a switch with small costs.</p>
<h5 id="manyexamplesliketheaboveareincludedinmycodedesigntraining">Many examples like the above are included in my <a href="https://oncodedesign.com/training-code-design">Code Design training</a></h5>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Extending Unity Container for IDisposable Instances (1st approach) ]]>
            </title>
            <description>
<![CDATA[ A few weeks ago, in my blog post ‘Who Disposes Your Repository’ I wrote about the challenges of implementing an IDisposable repository which takes full advantage of the deferred execution of the IQueryable and which is injected through Dependency Injection (DI) into the classes that need to read or ]]>
            </description>
            <link>https://oncodedesign.com/blog/extending-unity-container-for-idisposable-instances-1st-approach/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b7e</guid>
            <category>
                <![CDATA[ Dependency Injection ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 26 Jan 2015 11:04:45 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>A few weeks ago, in my blog post ‘<a href="https://oncodedesign.com/who-disposes-your-repository/">Who Disposes Your Repository</a>’, I wrote about the challenges of implementing an <a href="http://msdn.microsoft.com/en-us/library/system.idisposable.aspx?ref=oncodedesign.com">IDisposable</a> repository which takes full advantage of the deferred execution of <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562.aspx?ref=oncodedesign.com">IQueryable&lt;T&gt;</a> and which is injected through <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com">Dependency Injection (DI)</a> into the classes that need to read or store data. I focused the discussion on the case where the repository is abstracted by an interface and its implementation is injected through <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com">DI</a>. I detailed several alternatives for how it could be disposed, and the challenges that arise from the stateful nature of a repository, the deferred execution of <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562.aspx?ref=oncodedesign.com">IQueryable&lt;T&gt;</a> and <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com">DI</a>.</p>
<p>In the end I argued that, especially for large applications, I would prefer that the Dependency Injection Container (DIC) dispose all the <code>IDisposable</code> instances it created and injected, including the repository. In short, the code that creates an <code>IDisposable</code> instance should also be responsible for calling <code>Dispose()</code> on it.</p>
<p>In this post I will discuss some of the challenges of achieving this with the <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity Container</a>. While working to extend Unity with this functionality, I ended up implementing two approaches. In this post I will detail the first implementation, and in a <a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-2nd-approach">next post</a> I’ll explain the other.</p>
<p>There are many other writings on this. For example, I have found useful the posts of <a href="http://www.neovolve.com/post/2010/06/18/Unity-Extension-For-Disposing-Build-Trees-On-TearDown.aspx?ref=oncodedesign.com">Rory Primrose</a> and of <a href="http://thorarin.net/blog/post/2013/02/16/Unity-IoC-lifetime-management-IDisposable-part2.aspx?ref=oncodedesign.com">Marcel Veldhuizen</a>, who describe different approaches for disposing instances created by <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a>. They both describe very well how Unity works and how it can be extended for this purpose. However, I was looking for an approach in which Unity remains unknown to the client code. I didn’t want the client code to call <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.iunitycontainer.teardown?ref=oncodedesign.com"><code>Teardown()</code></a> or other Unity-specific methods.</p>
<p>I wanted all the <code>IDisposable</code> instances created and injected by <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> within an operation to be automatically disposed when that operation ends. Such an operation may be a request or a session in a web application, or a window or a view in a desktop one; in general, any well-defined scope in any application.</p>
<p>My approach is based on using <a href="http://msdn.microsoft.com/en-us/library/ff660895.aspx?ref=oncodedesign.com#container_scope">Container Hierarchies</a> (also known as <em>Scoped Containers</em>). <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a>, like many other dependency containers, supports this. When you call the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.iunitycontainer.createchildcontainer?ref=oncodedesign.com"><code>CreateChildContainer()</code></a> method, it creates a container that inherits all the configuration from the current container. The idea is that when an operation begins, a child container is created and associated with it. During that operation, all dependency injection is done using the child container, which means that all the new instances injected during the operation are created by it. When the operation ends, the child container is disposed, and its <code>Dispose()</code> should trigger the <code>Dispose()</code> of all the <code>IDisposable</code> instances that were created.</p>
<p><a href="https://github.com/devtrends/Unity.Mvc5?ref=oncodedesign.com">Here</a> is a good example of how to associate a child container with a web request in an ASP.NET MVC app, or <a href="https://github.com/devtrends/Unity.WebAPI?ref=oncodedesign.com">here</a> for a WebApi app. I will come back to this in a <a href="https://oncodedesign.com/disposing-instances-when-using-inversion-of-control/">future post</a>, to show how this association can be done in a more abstract way which also works when we are not in a web application. Now, I will dive into how to extend Unity so that the child container's <code>Dispose()</code> is propagated to all the <code>IDisposable</code> instances it created.</p>
<p>The problem to solve boils down to two things:</p>
<ul>
<li>Keep weak references to all the <code>IDisposable</code> instances that were created by the container (the child container)</li>
<li>Make sure that the <code>Dispose()</code> function of <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> will also call <code>Dispose()</code> on the instances we are referencing</li>
</ul>
<p>The first thing I tried was to make use of the <a href="http://msdn.microsoft.com/en-us/library/ff660872%28v=pandp.20%29.aspx?ref=oncodedesign.com">lifetime managers</a>. <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> defines them as an extension point that allows external code to control how references to object instances are stored and how the container disposes these instances. For example, they are used to implement <a href="http://en.wikipedia.org/wiki/Singleton_pattern?ref=oncodedesign.com">Singleton</a>-like instances. When the container is configured using the <a href="http://msdn.microsoft.com/en-us/library/ee650781.aspx?ref=oncodedesign.com"><code>RegisterType()</code></a> method, an instance of a <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.lifetimemanager?ref=oncodedesign.com"><code>LifetimeManager</code></a> must also be given. The container keeps, as part of its configuration, all the <a href="http://msdn.microsoft.com/en-us/library/ff660872%28v=pandp.20%29.aspx?ref=oncodedesign.com">lifetime manager</a> instances associated with the configured types, and it calls them when it injects instances of those types. A lifetime manager class is in general simple: it inherits the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.lifetimemanager?ref=oncodedesign.com"><code>LifetimeManager</code></a> base class and, by overriding the <code>GetValue()</code> and <code>SetValue()</code> methods, it can control the lifetime of the instances of the type it was configured with.</p>
<p>For example, if you want a certain type to behave like a Singleton, you would use the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.containercontrolledlifetimemanager?ref=oncodedesign.com"><code>ContainerControlledLifetimeManager</code></a> like this:</p>
<pre><code class="language-language-csharp">container.RegisterType&lt;IMySingletonService, MySingletonService&gt;(
    new ContainerControlledLifetimeManager());
</code></pre>
<p>If we look into the <a href="https://unity.codeplex.com/SourceControl/latest?ref=oncodedesign.com#source/Unity/Src/Lifetime/ContainerControlledLifetimeManager.cs">code</a> of the <code>ContainerControlledLifetimeManager</code>, we see that the lifetime manager keeps a reference to the instance and returns it each time <a href="http://msdn.microsoft.com/en-us/library/microsoft.practices.unity.lifetimemanager.getvalue.aspx?ref=oncodedesign.com"><code>GetValue()</code></a> is called (for simplicity, I have slightly modified the code of this class):</p>
<pre><code class="language-language-csharp">public class ContainerControlledLifetimeManager : LifetimeManager, IDisposable  
{  
	private object value;
	
	/// &lt;summary&gt;  
	/// Retrieve a value from the backing store associated with this Lifetime policy.  
	/// &lt;/summary&gt;  
	/// &lt;returns&gt;the object desired, or null if no such object is currently stored.&lt;/returns&gt;  
	public override object GetValue()  
	{  
		return this.value;  
	}
	
	/// &lt;summary&gt;  
	/// Stores the given value into backing store for retrieval later.  
	/// &lt;/summary&gt;  
	/// &lt;param name=&quot;newValue&quot;&gt;The object being stored.&lt;/param&gt;  
	public override void SetValue(object newValue)  
	{  
		this.value = newValue;  
	}
	
	/// &lt;summary&gt;  
	/// Remove the given object from backing store.  
	/// &lt;/summary&gt;  
	public override void RemoveValue()  
	{  
		this.Dispose();  
	}
	
	public void Dispose()  
	{  
		this.Dispose(true);  
		GC.SuppressFinalize(this); // shut FxCop up  
	}
	
	protected virtual void Dispose(bool disposing)  
	{  
		if (this.value != null)  
		{  
			if (this.value is IDisposable)  
			{  
				((IDisposable)this.value).Dispose();  
			}  
			this.value = null;  
		}  
	}  
}
</code></pre>
<p>The first time an instance of <code>MySingletonService</code> needs to be injected, the <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> container first calls the <code>GetValue()</code> of the lifetime manager. If that returns <code>null</code>, the container builds a new instance, calls the <code>SetValue()</code> of the lifetime manager to pass it back, and then injects it further. The next time an instance of the same type is needed, <code>GetValue()</code> returns the previously created one, so the container does not build a new one. Hence the Singleton-like behavior.</p>
<p>The <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.transientlifetimemanager?ref=oncodedesign.com"><code>TransientLifetimeManager</code></a> is at the other end. It is used when you want a new instance to be created each time one needs to be injected, and it is also the default. Its <a href="https://unity.codeplex.com/SourceControl/latest?ref=oncodedesign.com#source/Unity/Src/Lifetime/TransientLifetimeManager.cs">code</a> is even simpler, because <code>GetValue()</code> always returns <code>null</code>, which makes the container build a new instance each time.</p>
<pre><code class="language-language-csharp"> /// &lt;summary&gt;  
 /// An &lt;see cref=&quot;LifetimeManager&quot;/&gt; implementation that does nothing,  
 /// thus ensuring that instances are created new every time.  
 /// &lt;/summary&gt;  
 public class TransientLifetimeManager : LifetimeManager  
 {  
 	/// &lt;summary&gt;  
 	/// Retrieve a value from the backing store associated with this Lifetime policy.  
 	/// &lt;/summary&gt;  
 	/// &lt;returns&gt;the object desired, or null if no such object is currently stored.&lt;/returns&gt;  
 	public override object GetValue()  
 	{  
 		return null;  
 	}
 
 	/// &lt;summary&gt;  
 	/// Stores the given value into backing store for retrieval later.  
 	/// &lt;/summary&gt;  
 	/// &lt;param name=&quot;newValue&quot;&gt;The object being stored.&lt;/param&gt;  
 	public override void SetValue(object newValue)  
 	{  
 	}
 
 	/// &lt;summary&gt;  
 	/// Remove the given object from backing store.  
 	/// &lt;/summary&gt;  
 	public override void RemoveValue()  
 	{  
 	}  
 }  
</code></pre>
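<p>To make the interaction concrete, here is a toy re-creation of the resolve flow described above. This is an illustrative sketch, not Unity's actual code; all the <code>Toy*</code> names are made up for this example.</p>

```csharp
using System;

// Toy re-creation of the resolve flow (illustrative only, not Unity's code)
public abstract class ToyLifetimeManager
{
    public abstract object GetValue();
    public abstract void SetValue(object newValue);
}

// Mirrors ContainerControlledLifetimeManager: stores and reuses one instance
public class ToySingletonLifetime : ToyLifetimeManager
{
    private object value;
    public override object GetValue() { return value; }
    public override void SetValue(object newValue) { value = newValue; }
}

// Mirrors TransientLifetimeManager: never stores, so a new instance is built each time
public class ToyTransientLifetime : ToyLifetimeManager
{
    public override object GetValue() { return null; }
    public override void SetValue(object newValue) { }
}

public static class ToyContainer
{
    // The container asks the lifetime manager first; it builds a new
    // instance only when GetValue() returns null
    public static object Resolve(ToyLifetimeManager lifetime, Func<object> build)
    {
        object existing = lifetime.GetValue();
        if (existing != null)
            return existing;

        object created = build();
        lifetime.SetValue(created);
        return created;
    }
}
```

Resolving twice with <code>ToySingletonLifetime</code> returns the same instance, while <code>ToyTransientLifetime</code> yields a new one each time.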
<p><a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a>'s design and documentation encourage using the <a href="http://msdn.microsoft.com/en-us/library/ff660872%28v=pandp.20%29.aspx?ref=oncodedesign.com">lifetime managers</a> to control the disposing of created instances. On its <code>Dispose()</code>, the container calls <code>Dispose()</code> on all the lifetime manager instances that implement <code>IDisposable</code>. However, the <code>TransientLifetimeManager</code> is not <code>IDisposable</code>. This makes sense, because it does not keep a reference to anything, so there is nothing to dispose. To achieve our goal, I have created a <code>DisposableTransientLifetimeManager</code> like this:</p>
<pre><code class="language-language-csharp"> public class DisposableTransientLifetimeManager : TransientLifetimeManager, IDisposable  
 {  
 	private DisposableObjectList list = new DisposableObjectList();

 	public override void SetValue(object newValue)  
 	{  
 		base.SetValue(newValue);
 	
 		IDisposable disposable = newValue as IDisposable;  
 		if (disposable != null)  
 		list.Add(disposable);  
 	}
 
 	public void Dispose()  
 	{  
 		list.Dispose(); // this will call Dispose() on all the objects from the list  
 	}  
 }  
</code></pre>
<p>The <code>SetValue()</code> populates a list which keeps weak references to all instances that are disposable. On <code>Dispose()</code> it just disposes all the elements in the list. Simple. The <code>ContainerControlledLifetimeManager</code> is already <code>IDisposable</code> and it does dispose the instance it references. So now, if we use the disposable lifetime managers when we configure the container, all the instances that are <code>IDisposable</code> will be disposed when the container gets disposed.</p>
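<p>The <code>DisposableObjectList</code> helper is not shown in this post. A minimal sketch of what it might look like, assuming it holds weak references as described (the actual implementation may differ):</p>

```csharp
using System;
using System.Collections.Generic;

// A possible minimal implementation of DisposableObjectList, which the post
// uses but does not show; weak references are an assumption based on the text.
public class DisposableObjectList : IDisposable
{
    private readonly List<WeakReference> items = new List<WeakReference>();

    public void Add(IDisposable disposable)
    {
        // A weak reference does not keep the instance alive on its own,
        // so tracking does not interfere with garbage collection
        items.Add(new WeakReference(disposable));
    }

    public void Dispose()
    {
        foreach (WeakReference reference in items)
        {
            // Dispose only the instances that are still alive
            IDisposable disposable = reference.Target as IDisposable;
            if (disposable != null)
                disposable.Dispose();
        }
        items.Clear();
    }
}

// Small demo type, used only to illustrate the behavior
public class TrackedResource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}
```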
<p>This works fine when we dispose the container that was directly configured with the disposable lifetime managers. However, what we wanted was to configure the main container once, then use a child container for each operation (request) and dispose only the child containers. The child containers would use the configuration of the parent. The code snippet below shows this usage:</p>
<pre><code class="language-language-csharp"> // configuring the main container  
 UnityContainer container = new UnityContainer();  
 container.RegisterType&lt;IService1, Service1&gt;(new DisposableTransientLifetimeManager());  
 container.RegisterType&lt;IService2, Service2&gt;(new DisposableTransientLifetimeManager());

 using (var childContainer = container.CreateChildContainer()) // child container should be associated with an operation (request)  
 {  
 	// some instances created within the operation (request)  
 	s11 = childContainer.Resolve&lt;IService1&gt;();  
 	s12 = childContainer.Resolve&lt;IService1&gt;();
 
 	s21 = childContainer.Resolve&lt;IService2&gt;();  
 	s22 = childContainer.Resolve&lt;IService2&gt;();  
 } //childContainer.Dispose()

 AssertIsDisposed(() =&gt; s11.Disposed); //–&gt; fail  
 AssertIsDisposed(() =&gt; s12.Disposed); //–&gt; fail

 AssertIsDisposed(() =&gt; s21.Disposed); //–&gt; fail  
 AssertIsDisposed(() =&gt; s22.Disposed); //–&gt; fail  
</code></pre>
<p>This does not work as expected, because when the child container is disposed it does not have any lifetime manager instances in it. It uses the configuration from the parent, including the lifetime manager instances given there. This is by design in <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a>. If the lifetime manager instances from the parent were not used, we would run into short-lived singleton issues. Say you configure a certain type to be Singleton: if you resolved it through the main container you would get <code>instance1</code>, but through a child container you would get <code>instance2</code>. To prevent this, the same lifetime manager instance (the one given at <a href="http://msdn.microsoft.com/en-us/library/ee650781.aspx?ref=oncodedesign.com"><code>RegisterType()</code></a>) is used by child containers too.</p>
<p>For our case, what we would like is for the child container to create new instances of our disposable lifetime managers and to use those to manage the lifetime of the objects it builds and injects. We can achieve this by creating a custom <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> extension. Extensions are a more powerful way to extend the behavior of <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> and are often used in combination with the <a href="http://msdn.microsoft.com/en-us/library/ff660872%28v=pandp.20%29.aspx?ref=oncodedesign.com">lifetime managers</a>. As with any powerful extension mechanism, you can tweak the default behavior a lot, but when not used carefully you can work against the original design of the framework and create complexity. In our case we want to achieve exactly what the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.hierarchicallifetimestrategy?ref=oncodedesign.com"><code>HierarchicalLifetimeStrategy</code></a> does for the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.hierarchicallifetimemanager?ref=oncodedesign.com"><code>HierarchicalLifetimeManager</code></a>. So, I pretty much copied its code into a new generic extension for any hierarchical lifetime manager, like this:</p>
<pre><code class="language-language-csharp"> public class HierarchicalLifetimeExtension&lt;T&gt; : UnityContainerExtension where T : LifetimeManager, new()  
 {  
 	protected override void Initialize()  
 	{  
 		Context.Strategies.AddNew&lt;HierarchicalLifetimeStrategy&lt;T&gt;&gt;(UnityBuildStage.Lifetime);  
 	}

 	/// &lt;summary&gt;  
 	/// A strategy that handles hierarchical lifetimes across a set of parent/child  
 	/// containers.  
 	/// &lt;/summary&gt;  
 	private class HierarchicalLifetimeStrategy&lt;T&gt; : BuilderStrategy  where T : LifetimeManager, new()  
 	{  
 		/// &lt;summary&gt;  
 		/// Called during the chain of responsibility for a build operation. The  
 		/// PreBuildUp method is called when the chain is being executed in the  
 		/// forward direction.  
 		/// &lt;/summary&gt;  
 		public override void PreBuildUp(IBuilderContext context)  
 		{  
 			IPolicyList lifetimePolicySource;
 	
 			var activeLifetime = context.PersistentPolicies.Get&lt;ILifetimePolicy&gt;(context.BuildKey, out lifetimePolicySource);  
 			if (activeLifetime is T &amp;&amp; !object.ReferenceEquals(lifetimePolicySource, context.PersistentPolicies)	)  
 			{  
 				// came from parent, add a new lifetime manager locally  
 				var newLifetime = new T();
 		
 				context.PersistentPolicies.Set&lt;ILifetimePolicy&gt;(newLifetime, context.BuildKey);  
 				context.Lifetime.Add(newLifetime);  
 			}  
 		}  
 	}  
 }  
</code></pre>
<p>Now, if we put all the pieces together, we get the behavior we wanted. The snippet below shows the result.</p>
<pre><code class="language-language-csharp"> // configuring the main container  
 UnityContainer container = new UnityContainer();  
 container.RegisterType&lt;IService1, Service1&gt;(new DisposableTransientLifetimeManager());  
 container.RegisterType&lt;IService2, Service2&gt;(new DisposableTransientLifetimeManager());

 var outerScopeSrv = container.Resolve&lt;IService1&gt;();

 using (var childContainer = container.CreateChildContainer()) // child container should be associated with an operation (request)  
 {  
 	// adding this extension to the child, makes the difference from previous code snippet  
 	childContainer.AddExtension(  
 	new HierarchicalLifetimeExtension&lt;DisposableTransientLifetimeManager&gt;());
 
 	// some instances created within the operation (request)  
 	s11 = childContainer.Resolve&lt;IService1&gt;();  
 	s12 = childContainer.Resolve&lt;IService1&gt;();
 
 	s21 = childContainer.Resolve&lt;IService2&gt;();  
 	s22 = childContainer.Resolve&lt;IService2&gt;();  
 } //childContainer.Dispose()

 AssertIsDisposed(() =&gt; s11.Disposed); //–&gt; success  
 AssertIsDisposed(() =&gt; s12.Disposed); //–&gt; success

 AssertIsDisposed(() =&gt; s21.Disposed); //–&gt; success  
 AssertIsDisposed(() =&gt; s22.Disposed); //–&gt; success

 AssertIsNotDisposed(() =&gt; outerScopeSrv.Disposed); //–&gt; success
</code></pre>
<p>So, when the child container is disposed, it calls <code>Dispose()</code> on all the <code>IDisposable</code> lifetime managers within the child container. The <code>DisposableTransientLifetimeManager</code> we created is <code>IDisposable</code>, and on its <code>Dispose()</code> it calls <code>Dispose()</code> on all the <code>IDisposable</code> instances it references. The <code>HierarchicalLifetimeExtension</code> we created and added to the child container makes sure that when an instance is to be built for a type configured in the parent container, a new instance of the same lifetime manager type is created and added to the child container, to be used from then on when building objects of that particular type.</p>
<p>This approach works well for the transient lifetime manager and for most of the other lifetime managers, if they are extended with the <code>IDisposable</code> implementation in the same way. It is in line with the <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> design and not difficult to understand and use.</p>
<p>You can download all the source code in a zip file <a href="https://onedrive.live.com/embed?cid=90D40A51822669DB&resid=90D40A51822669DB%21409&authkey=ABuWdeTnjb8QpxE&ref=oncodedesign.com">here</a>.</p>
<p>However, this approach does not work for the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.perresolvelifetimemanager?ref=oncodedesign.com"><code>PerResolveLifetimeManager</code></a>. Yes, it was a bad surprise for me too :(. <code>PerResolveLifetimeManager</code> is not <code>IDisposable</code>, but that’s not the issue. We could make a <em>DisposablePerResolveLifetimeManager</em> as we did for the transient one, and collect in it weak references to all the <code>IDisposable</code> instances. However, the <code>PerResolveLifetimeManager</code> behavior is implemented with the <a href="https://msdn.microsoft.com/en-us/library/microsoft.practices.objectbuilder2.dynamicmethodconstructorstrategy?ref=oncodedesign.com"><code>DynamicMethodConstructorStrategy</code></a>, and this is problematic. Under certain conditions this strategy creates new instances of the <code>PerResolveLifetimeManager</code>. This seems to work against the original design of <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a>, because by creating additional instances of a lifetime manager within the same container instance, the lifetime manager is stripped of its main purpose, which is to control the lifetime of the instances of a particular type. Here, the lifetime manager instance being used is no longer the one given through the <code>RegisterType()</code> method; it is created during the object build. So, our current approach does not work for this lifetime manager for two main reasons: not all of the new instances of the <code>PerResolveLifetimeManager</code> get stored in the container to be disposed, and the <code>DynamicMethodConstructorStrategy</code> builds new instances of <code>PerResolveLifetimeManager</code>, not of <code>DisposablePerResolveLifetimeManager</code> as we’d want.</p>
<p>For the cases when we would like to use both the <code>PerResolveLifetimeManager</code> and the <code>TransientLifetimeManager</code>, I ended up making another implementation that extends <a href="https://unity.codeplex.com/?ref=oncodedesign.com">Unity</a> with the same behavior of automatically disposing all the <code>IDisposable</code> instances when the child container gets disposed. I will detail it in a future <a href="https://oncodedesign.com/extending-unity-container-for-idisposable-instances-2nd-approach">post</a>.</p>
<h5 id="manyexamplesliketheaboveareincludedinmycodedesigntraining">Many examples like the above are included in my <a href="https://oncodedesign.com/training-code-design">Code Design Training</a></h5>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Low Coupling by Refactoring Towards Higher Cohesion ]]>
            </title>
            <description>
                <![CDATA[ Low coupling and high cohesion go hand in hand. In a low coupled design, the modules, classes and functions have a high cohesion. The other way around is also true: making high cohesive modules, classes or functions leads to a loosely coupled design.


Why is this useful or interesting? The ]]>
            </description>
            <link>https://oncodedesign.com/blog/low-coupling-by-refactoring-towards-higher-cohesion/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b7f</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 27 Nov 2014 12:40:04 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>Low coupling and high cohesion go hand in hand. In a low coupled design, the modules, classes and functions have a high cohesion. The other way around is also true: making high cohesive modules, classes or functions leads to a loosely coupled design.</p>
<p>Why is this useful or interesting? The thing is that in designing software we never get directly to a good design. We can’t make a good design from the start. We improve the design of our code by gradually refactoring it, putting into the code the knowledge we learn by experimenting with it. This is code refactoring. Therefore, I think it is important to identify and know patterns of refactoring, in order to increase our efficiency in getting to a good design. Refactoring towards highly cohesive classes and functions is a pattern of refactoring our code to produce a lower coupled design. I will present it below by walking through an example.</p>
<p>This article turned out to be quite long. For easier reading I have split it into three parts. The <a href="#concepts">first part</a> reviews a few concepts and the <a href="#example">second part</a> dives into the code, walks us through the small refactoring steps and explains the reasoning behind them. At <a href="#summary">the end</a> it summarizes the refactoring pattern steps and a few smells to pay attention to.</p>
<hr>
<h3 id="concepts">Part 1: reviewing a few concepts</h3>
<p><em>Coupling</em> refers to how tight the connection is between two elements of our system. The lower the coupling, the better isolated the elements of our system are. This means that changes in one element do not trigger changes in the others. Generally, the looser the coupling, the better the design, because it accommodates change more easily. However, some coupling is needed. The system elements do need to interact, and in a loosely coupled design this interaction is well defined through abstractions. The inner details of an element (module or class) are well encapsulated (hidden) and are not relevant when thinking about how to interact with it. When details change, they do not trigger waves of changes throughout the system, so the system does not become resistant to change. So coupling is tolerated where it is needed, meaning where it brings benefits, and it should be done through good abstractions. In other words: <strong>Don’t tolerate coupling without benefits!</strong></p>
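<p>As a small illustration of coupling through an abstraction (the names below are made up for this example, not taken from the article's code), the processor depends only on an interface, so the notification details can change without touching it:</p>

```csharp
using System.Collections.Generic;

// The abstraction through which the interaction is defined
public interface INotifier
{
    void Notify(string message);
}

// One concrete detail, well encapsulated behind the abstraction
public class InMemoryNotifier : INotifier
{
    public List<string> Messages { get; } = new List<string>();
    public void Notify(string message) { Messages.Add(message); }
}

public class OrderProcessor
{
    private readonly INotifier notifier;

    public OrderProcessor(INotifier notifier)
    {
        this.notifier = notifier;
    }

    public void Complete(int orderId)
    {
        // The interaction happens through the abstraction only;
        // OrderProcessor never sees the notifier's inner details
        notifier.Notify("Order " + orderId + " completed");
    }
}
```

Swapping <code>InMemoryNotifier</code> for any other <code>INotifier</code> implementation requires no change in <code>OrderProcessor</code>: the coupling that remains is the needed, beneficial kind.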
<p>Identifying the good abstractions and implementing them through well-encapsulated elements is not trivial, because we have a limited view of the system in the beginning. We couple things by nature: we put things together when we do not see the whole picture. Then, when it is revealed, we need to stop and refactor to break them up. Only then can we identify the good abstractions which lead towards a better design.</p>
<p>A class has <em>high cohesion</em> when most of (or all of) its fields are used by most of (or all of) its methods. The more fields a method manipulates, the more cohesive that method is to its class. A class in which each field is used by each method is <em>maximally cohesive</em>. When cohesion is high, the methods and fields of the class are co-dependent and hang together as a logical whole. Generally, it is neither advisable nor possible to create such maximally cohesive classes, but on the other hand we would like the cohesion to be high. For a function, <em>cohesion</em> refers to how closely the operations in a routine are related. The function code should consistently use all the variables it declares and all the input parameters to compute the result.</p>
<p>When we observe a class with poor cohesion, for example a class which has two parts, one made of three public methods which operate on two fields and another made of two public methods which operate on three other fields, the general pattern to increase its cohesion is to split it into two classes and make one use the other. This way we can identify good abstractions and more code reuse opportunities, which will improve our design.</p>
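<p>As a sketch of this pattern, imagine a <code>ReportService</code> class where two formatting methods use only <code>title</code> and <code>footer</code> fields, while its buffering methods use only a <code>lines</code> field. Splitting it yields two cohesive classes (all names here are illustrative, not from the article's example):</p>

```csharp
using System.Collections.Generic;

// After the split, each class uses all of its fields in all of its methods

// Cohesive class 1: only formatting state and formatting behavior
public class ReportFormatter
{
    private readonly string title;
    private readonly string footer;

    public ReportFormatter(string title, string footer)
    {
        this.title = title;
        this.footer = footer;
    }

    public string Header() { return "== " + title + " =="; }
    public string Footer() { return "-- " + footer + " --"; }
}

// Cohesive class 2: only buffering state and buffering behavior
public class ReportBuffer
{
    private readonly List<string> lines = new List<string>();

    public void AddLine(string line) { lines.Add(line); }

    public int Count { get { return lines.Count; } }
}
```

Each resulting class is now close to maximally cohesive, and one can use the other where the original class mixed both responsibilities.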
<p>The example we will dive into shows a pattern for refactoring towards lower coupling by improving cohesion. The first step is to improve the public interface of a class by reducing the number of parameters of its functions. This reveals the low cohesion of the class more clearly, and then, in successive refactoring steps, we increase it by splitting the class into more classes.</p>
<p>The pattern to reduce the number of parameters of public methods is to transform them into class fields, which are set through the constructor. This makes the class interface more cohesive and reduces the complexity of the caller code. As a consequence, it may also reveal the poor cohesion of the class, by now having groups of fields used by groups of methods. The refactoring goes in small steps, improving the design little by little in small, controllable chunks.</p>
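<p>A minimal sketch of this first step (with illustrative names, not the article's code): parameters such as <code>fileNameFormat</code> and <code>overwrite</code>, which do not vary per call, become constructor-set fields, so callers pass only what does vary:</p>

```csharp
using System;

// Parameters that are fixed for the lifetime of the exporter become fields
public class ReportExporter
{
    private readonly string fileNameFormat;
    private readonly bool overwrite;

    public ReportExporter(string fileNameFormat, bool overwrite)
    {
        this.fileNameFormat = fileNameFormat;
        this.overwrite = overwrite;
    }

    // Callers now pass only what varies per call
    public string GetFileName(string customerName)
    {
        return string.Format(fileNameFormat, customerName, DateTime.Now);
    }

    public bool ShouldWrite(bool fileExists)
    {
        // Overwrite policy is decided once, at construction time
        return overwrite || !fileExists;
    }
}
```

The public methods shrink from five parameters to one, and every caller is freed from carrying the formatting and overwrite settings around.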
<hr>
<h3 id="example">Part 2: refactoring example</h3>
<p>The class that we are looking at is used to export XML files that represent pages of data about customers and their orders. The class provides functions that can export XML pages with detailed customer information or only with orders. The core functionality this class implements lies in the logic that fills in an XML with data (to simplify the example I have taken out this code).</p>
<p>To follow this refactoring example more easily, you can download the source code file from each refactoring step from GitHub <a href="https://github.com/iQuarc/Code-Design-Training/tree/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling?ref=oncodedesign.com">here</a>. We start with <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/00_PageXmlExport.cs?ref=oncodedesign.com">00_PageXmlExport.cs</a>, which is the initial state of the code; then, as we advance, you can look at <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/01_PageXmlExport.cs?ref=oncodedesign.com">01_PageXmlExport.cs</a> after the first refactoring step, then <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/02_PageXmlExport.cs?ref=oncodedesign.com">02_PageXmlExport.cs</a> after the second, and so on until the fifth refactoring step.</p>
<pre><code class="language-language-csharp"> // file: http://tinyurl.com/00-PageXmlExport-cs  
public class PageXmlExport  
{  
  private const string exportFolder = @&quot;c:\temp&quot;;
 
  public bool ExportCustomerPage(
			string fileNameFormat,  
			bool overwrite,  
			string customerName,  
			int maxSalesOrders,  
			bool addCustomerDetails)  
  {  
  	string fileName = string.Format(fileNameFormat, &quot;CustomerPage&quot;, customerName, DateTime.Now);  
  	string filePath = Path.Combine(exportFolder, fileName);
 
  	if (!overwrite &amp;&amp; File.Exists(filePath))  
           return false;
  
  	PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
 
  	using (var repository = new EfRepository())  
  	{  
  	  if (maxSalesOrders &gt; 0)  
  	  {  
  		var orders = repository.GetEntities&lt;Order&gt;()  
  				.Where(o =&gt; o.Customer.CompanyName == customerName)  
  				.OrderBy(o =&gt; o.OrderDate)  
  				.Take(maxSalesOrders);
 
  		//enrich content with orders  
  		// …  
  	  }
 
  	  if (addCustomerDetails)  
  	  {  
  	      var customer = repository.GetEntities&lt;Customer&gt;()  
  		       .Where(c =&gt; c.CompanyName == customerName);
		  
  		  // enrich content with customer data  
  		  // …  
	     }  
  	}
	 
  	XmlSerializer serializer = new XmlSerializer(typeof (PageXml));  
  	using (StreamWriter sw = File.CreateText(filePath))  
  	{  
  		serializer.Serialize(sw, content);  
  	}

    return true;  
  }  
  …  
}  
</code></pre>
<p>The <code>ExportCustomerPage(…)</code> function writes to disk an XML page containing customer details and customer orders, which are read from the database using a Repository implementation. It receives as input parameters a <code>fileNameFormat</code> with the pattern for generating the name of the file, and a flag that says whether it should overwrite the file in case it already exists. It also receives the name of the customer whose data to export, and the maximum number of orders that should be exported. It gets an extra setting for whether it should include the customer details or not.</p>
<p>This function evolved into this shape over time. The desire to reuse the code that enriches the XML with orders, in the function that also enriches the XML with customer data, led the developers to add the <code>addCustomerDetails</code> flag and the corresponding code here. The logic of composing the file name and/or overwriting it was also added later, in the easiest way to implement a new request.</p>
<p>Looking further, the same desire to reuse the code that enriches the XML meant that later a new function, which includes external data in the exported XML, was added to the same class. This is how <code>ExportCustomerPageWithExternalData(…)</code> got written. It does the same as the above, but if external data is present, it is included in the XML.</p>
<pre><code class="language-language-csharp"> // file: http://tinyurl.com/00-PageXmlExport-cs  
 public class PageXmlExport  
 {  
  …  
  public bool ExportCustomerPageWithExternalData(  
			string fileNameFormat,  
			bool overwrite,  
			string customerName,  
			int maxSalesOrders,  
			bool addCustomerDetails,  
			PageData externalData,  
			ICrmService crmService,  
			ILocationService locationService)  
  {  
    string fileName = string.Format(fileNameFormat, &quot;CustomerPage&quot;, customerName, DateTime.Now);  
    string filePath = Path.Combine(exportFolder, fileName);
   
    if (!overwrite &amp;&amp; File.Exists(filePath))  
      return false;
 
    PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};

    if (externalData.CustomerData != null)  
    {  
  		// enrich content with externalData.CustomerData  
  		// …  
    }  
    else  
    {  
       CustomerInfo customerData = crmService.GetCustomerInfo(content.Customer.Name);
      
       // enrich content with customer data  
       // …  
    }
 
    using (EfRepository repository = new EfRepository())  
    {  
      if (maxSalesOrders &gt; 0)  
      {  
        var orders = repository.GetEntities&lt;Order&gt;()  
             .Where(o =&gt; o.Customer.CompanyName == content.Customer.Name)  
             .OrderBy(o =&gt; o.OrderDate)  
             .Take(maxSalesOrders);
 
        //enrich content with orders  
      }
 
      if (addCustomerDetails)  
      {  
         var customer = repository.GetEntities&lt;Customer&gt;()  
              .Where(c =&gt; c.CompanyName == customerName);
        
  		// enrich content by merging the external customer data with what was read from the DB  
         // …  
       }  
    }
   
    if (locationService != null)  
    {  
      foreach (var address in content.Customer.Addresses)  
      {  
        Coordinates coordinates = locationService.GetCoordinates(address.City, address.Street, address.Number);  
        if (coordinates != null)  
          address.Coordinates = string.Format(&quot;{0},{1}&quot;, coordinates.Latitude, coordinates.Longitude);  
      }  
    }
   
    XmlSerializer serializer = new XmlSerializer(typeof (PageXml));  
    using (StreamWriter sw = File.CreateText(filePath))  
    {  
       serializer.Serialize(sw, content);  
    }
  
    return true;  
 }  
  …  
}  
</code></pre>
<p>Following the same approach of reusing the core code that enriches the XML, other functions were added over time. <code>ExportOrders(..)</code> exports an XML with the same schema, but with all the orders the customer has and without additional customer data.</p>
<pre><code class="language-csharp">// file: http://tinyurl.com/00-PageXmlExport-cs  
public class PageXmlExport  
{  
  …  
  public bool ExportOrders(string fileNameFormat, bool overwrite, string customerName)  
  {  
    string fileName = string.Format(fileNameFormat, &quot;CustomerOrdersPage&quot;, customerName, DateTime.Now);  
    string filePath = Path.Combine(exportFolder, fileName);
 
    if (!overwrite &amp;&amp; File.Exists(filePath))  
      return false;
   
    PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
   
    using (EfRepository repository = new EfRepository())  
    {  
      var orders = repository.GetEntities&lt;Order&gt;()  
      		.Where(o =&gt; o.Customer.CompanyName == content.Customer.Name)  
      		.OrderBy(o =&gt; o.ApprovedAmmount)  
      		.ThenBy(o =&gt; o.OrderDate);
 
      //enrich content with orders  
    }
 
    XmlSerializer serializer = new XmlSerializer(typeof (PageXml));  
    using (StreamWriter sw = File.CreateText(filePath))  
    {  
        serializer.Serialize(sw, content);  
    }
 
    return true;  
  }  
  …  
}  
</code></pre>
<p>Later on, because this was the class that knew how to produce XMLs with customer orders, it got two new methods: <code>GetPagesFromOrders(…)</code> and <code>ExportPagesFromOrders(…)</code>, which are useful when the data should not be taken from the database, but is given as an input parameter.</p>
 <pre><code class="language-csharp"> // file: http://tinyurl.com/00-PageXmlExport-cs  
public class PageXmlExport  
{  
  …  
  public IEnumerable&lt;PageXml&gt; GetPagesFromOrders(  
		IEnumerable&lt;Order&gt; orders,  
		int maxSalesOrders,  
		ICrmService crmService,  
		ILocationService locationService)  
  {  
    Dictionary&lt;string, IEnumerable&lt;Order&gt;&gt; customerOrders = GroupOrdersByCustomer(orders);  
    foreach (var customerName in customerOrders.Keys) 
    {  
       PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
 
       if (crmService != null)
       {  
           CustomerInfo customerData = crmService.GetCustomerInfo(content.Customer.Name);  
           //enrich with data from crm  
       }
 
       var recentOrders = customerOrders[customerName]  
                    .OrderBy(o =&gt; o.OrderDate)  
                    .Take(maxSalesOrders);  
       foreach (var order in recentOrders)  
       {  
           // enrich content with orders  
           // …  
       }
 
       if (locationService != null)  
       {  
           foreach (var address in content.Customer.Addresses)  
           {  
               Coordinates coordinates = locationService.GetCoordinates(address.City, address.Street, address.Number);   
               
			   if (coordinates != null)  
               		address.Coordinates = string.Format(&quot;{0},{1}&quot;, coordinates.Latitude, coordinates.Longitude);  
           }  
       }
 
       yield return content;  
    }  
  }
 
  public bool ExportPagesFromOrders(  
			string fileNameFormat,  
			bool overwrite,  
			IEnumerable&lt;Order&gt; orders,  
			int maxSalesOrders,  
			ICrmService crmService,  
			ILocationService locationService)  
  {  
     IEnumerable&lt;PageXml&gt; pages = GetPagesFromOrders(orders, maxSalesOrders, crmService, locationService);  
     foreach (var pageXml in pages)  
     {  
        string customerName = pageXml.Customer.Name;  
        string fileName = string.Format(fileNameFormat, &quot;CustomerOrdersPage&quot;, customerName, DateTime.Now);  
        string filePath = Path.Combine(exportFolder, fileName);
 
        if (!overwrite &amp;&amp; File.Exists(filePath))  
            return false;
 
        XmlSerializer serializer = new XmlSerializer(typeof (PageXml));  
        using (StreamWriter sw = File.CreateText(filePath))  
        {  
           serializer.Serialize(sw, pageXml);  
        }  
     }
 
     return true;  
  }  
  …  
}  
</code></pre>
<p>Now, look at the signatures of the public methods alone. They have many, and redundant, parameters. This makes them hard to use from the caller code.</p>
 <pre><code class="language-csharp"> // file: http://tinyurl.com/00-PageXmlExport-cs  
public class PageXmlExport  
{  
  …  
  public bool ExportCustomerPage(  
			string fileNameFormat,  
			bool overwrite,  
			string customerName,  
			int maxSalesOrders,  
			bool addCustomerDetails)  
  {…}
 
  public bool ExportCustomerPageWithExternalData(  
			string fileNameFormat,  
			bool overwrite,  
			string customerName,  
			int maxSalesOrders,  
			bool addCustomerDetails,  
			PageData externalData,  
			ICrmService crmService,  
			ILocationService locationService)  
  {…}
 
  public bool ExportOrders(  
			string fileNameFormat,  
			bool overwrite,  
			string customerName)  
  {…}
 
  public IEnumerable&lt;PageXml&gt; GetPagesFromOrders(  
			IEnumerable&lt;Order&gt; orders,  
			int maxSalesOrders,  
			ICrmService crmService,  
			ILocationService locationService)  
  {…}
 
  public bool ExportPagesFromOrders(  
			string fileNameFormat,  
			bool overwrite,  
			IEnumerable&lt;Order&gt; orders,  
			int maxSalesOrders,  
			ICrmService crmService,  
			ILocationService locationService)  
  {…}  
  …  
}  
</code></pre>
<p>The first refactoring step is to reduce the number of parameters of the first two methods. We move some common parameters into the constructor, which results in the following code (file <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/01_PageXmlExport.cs?ref=oncodedesign.com">01_PageXmlExport.cs</a>).</p>
 <pre><code class="language-csharp"> // file: http://tinyurl.com/01-PageXmlExport-cs  
public class PageXmlExport  
{  
  private const string exportFolder = @&quot;c:\temp&quot;;
 
  private readonly string fileNameFormat; //used in 3/5 methods  
  private readonly bool overwrite; //used in 3/5 methods
 
  private readonly int maxSalesOrders; // used in 3/5 methods  
  private readonly bool addCustomerDetails; // used in 2/5 methods
 
  public PageXmlExport(string fileNameFormat,  
			bool overwrite,  
			int maxSalesOrders,  
			bool addCustomerDetails)  
  {  
  	this.fileNameFormat = fileNameFormat;  
  	this.overwrite = overwrite;  
  	this.maxSalesOrders = maxSalesOrders;  
  	this.addCustomerDetails = addCustomerDetails;  
  }
 
  public bool ExportCustomerPage(string customerName)  
  {…}
 
  public bool ExportCustomerPageWithExternalData(  
		string customerName,  
		PageData externalData,  
		ICrmService crmService,  
		ILocationService locationService)  
  {…}  
  …  
}  
</code></pre>
<p>This already makes the functions look better. They are easier to call. The caller can now instantiate this class with the settings it needs, and then call <code>ExportCustomerPage(…)</code> for all the customer names it wants. Previously, it had to repeat most of the parameters on each call. If we look at the resulting fields, we can see that most of them are used in 3 of the 5 functions. The rest of the code remains pretty much the same.</p>
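<p>To make the benefit concrete, here is a minimal caller-side sketch. The <code>CustomerExporter</code> class and its members are hypothetical stand-ins for the refactored class, reduced here to the file-name logic only:</p>
<pre><code class="language-csharp">using System;

// Hypothetical, simplified stand-in for the refactored PageXmlExport:
// the settings that do not change between calls live in the constructor
public class CustomerExporter
{
    private readonly string fileNameFormat;
    private readonly int maxSalesOrders;

    public CustomerExporter(string fileNameFormat, int maxSalesOrders)
    {
        this.fileNameFormat = fileNameFormat;
        this.maxSalesOrders = maxSalesOrders;
    }

    // only the data that varies per call remains a parameter
    public string BuildFileName(string customerName)
    {
        return string.Format(fileNameFormat, &quot;CustomerPage&quot;, customerName);
    }
}

public static class Program
{
    public static void Main()
    {
        // configure once...
        var exporter = new CustomerExporter(&quot;{0}-{1}.xml&quot;, maxSalesOrders: 10);

        // ...then call per customer, without repeating the settings
        foreach (var name in new[] { &quot;Acme&quot;, &quot;Contoso&quot; })
            Console.WriteLine(exporter.BuildFileName(name));
    }
}
</code></pre>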
<p>Continuing on this path, the next small refactoring step is to reduce the number of parameters of the remaining functions. The result is the following code (file <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/02_PageXmlExport.cs?ref=oncodedesign.com">02_PageXmlExport.cs</a>).</p>
 <pre><code class="language-csharp"> // file: http://tinyurl.com/02-PageXmlExport-cs  
public class PageXmlExport  
{  
  private const string exportFolder = @&quot;c:\temp&quot;;  
  private readonly string fileNameFormat; //used in 3/5 methods  
  private readonly bool overwrite; //used in 3/5 methods
 
  private readonly int maxSalesOrders; // used in 3/5 methods  
  private readonly bool addCustomerDetails; // used in 2/5 methods
 
  private readonly ICrmService crmService; // used in 3/5 methods  
  private readonly ILocationService locationService; // used in 3/5 methods
 
  public PageXmlExport( string fileNameFormat,  
			bool overwrite,  
			int maxSalesOrders,  
			bool addCustomerDetails,  
			ICrmService crmService,  
			ILocationService locationService)  
  { … }
 
  public bool ExportCustomerPageWithExternalData(  
			string customerName,  
			PageData externalData)  
  { … }
 
  public bool ExportOrders(string customerName)  
  { … }
 
  public IEnumerable&lt;PageXml&gt; GetPagesFromOrders(IEnumerable&lt;Order&gt; orders)  
  { … }
 
  public bool ExportPagesFromOrders(IEnumerable&lt;Order&gt; orders)  
  { … }  
  …  
}  
</code></pre>
<p>Look again at the signatures of the public functions of this class: they are much better. The public interface is more consistent, and this simplifies the caller code.</p>
<p>Analyzing the refactoring result further, we see that the class takes 6 dependencies through its constructor. This is quite a lot. Looking at the fields, it has 7, of which 2 are dependencies on complex services (<code>ICrmService</code> and <code>ILocationService</code>). Having so many fields and dependencies is a bad smell. On a closer look we may observe that some fields are used only by some functions, and not all functions use all fields. This is a clear symptom of a class with poor cohesion.</p>
<p>To refactor towards a more cohesive class, we should identify groups of fields used by the same methods. For example, <code>exportFolder</code>, <code>fileNameFormat</code> and <code>overwrite</code> are used by all functions that write to disk. Even more, they are used only in the logic that relates to writing the file. When we identify such a group of dependencies, it is an opportunity to reduce the dependencies of the class by moving them into a new class, which the current one will use. While doing this, it is good to also think about a good abstraction for the class we are going to create, one that expresses well the functionality the new class provides. In our example it will be an interface with a function that writes a <code>PageXml</code>. If it is hard to come up with a good abstraction, we can postpone it and work in two steps: first split the classes, then find a good abstraction. This refactoring step results in the following code (file <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/03_PageXmlExport.cs?ref=oncodedesign.com">03_PageXmlExport.cs</a>).</p>
 <pre><code class="language-csharp"> // file: http://tinyurl.com/03-PageXmlExport-cs  
public interface IPageFileWriter  
{  
   bool WriteFile(PageXml page, string filePrefix);  
}

public class PageXmlExport  
{  
  private readonly IPageFileWriter fileWriter;
  
  private readonly int maxSalesOrders; // used in 3/5 methods  
  private readonly bool addCustomerDetails; // used in 2/5 methods
  
  private readonly ICrmService crmService; // used in 3/5 methods  
  private readonly ILocationService locationService; // used in 3/5 methods
  
  public PageXmlExport( IPageFileWriter fileWriter,  
  			int maxSalesOrders,  
  			bool addCustomerDetails,  
  			ICrmService crmService,  
  			ILocationService locationService)  
  { … }
  
  public bool ExportCustomerPage(string customerName)  
  {  
     PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
     
     using (EfRepository repository = new EfRepository())  
     {  
        if (maxSalesOrders &gt; 0)  
        {  
           var orders = repository.GetEntities&lt;Order&gt;()  
                .Where(o =&gt; o.Customer.CompanyName == content.Customer.Name)  
                .OrderBy(o =&gt; o.OrderDate)  
                .Take(maxSalesOrders);
                
           //enrich content with orders  
           // …  
        }
     
        if (addCustomerDetails)  
        {  
           var customer = repository.GetEntities&lt;Customer&gt;()  
                .Where(c =&gt; c.CompanyName == customerName);
           
           // enrich content with customer data  
           // …  
        }  
     }
   
     return fileWriter.WriteFile(content, &quot;CustomerPage&quot;);  
  }  
  …  
}  
</code></pre>
<p>This refactoring step reduced the number of fields by grouping three of them into one. As a consequence, it also reduced the dependencies of the class, and it made all its methods better by taking out the code that deals with the details of writing a file.</p>
<p>Now is the moment to think about a better abstraction for the interface we have created: <code>IPageFileWriter</code>. We could make it more abstract if we rename it and remove, or at least rename, the <code>filePrefix</code> parameter of its method. The interface in its current form limits its implementations to writing files. A better one would be:</p>
<pre><code class="language-csharp">public interface IPageWriter  
{  
    bool Write(PageXml page);  
}  
</code></pre>
<p>This interface may have implementations that write the XML anywhere. By raising the level of abstraction we increase the reusability of the <code>PageXmlExport</code> class, because the same code and logic can be reused to export to multiple mediums. We are not going to pursue this refactoring path here, but will continue with increasing the cohesion of <code>PageXmlExport</code>.</p>
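<p>As an illustration (not part of the original sample), a hypothetical in-memory implementation of this <code>IPageWriter</code> shows how the same export logic could target another medium; <code>PageXml</code> is reduced to a stub here:</p>
<pre><code class="language-csharp">using System.Collections.Generic;

// stub standing in for the real PageXml type
public class PageXml
{
    public string CustomerName { get; set; }
}

public interface IPageWriter
{
    bool Write(PageXml page);
}

// keeps the pages in memory instead of on disk; useful in tests,
// or as a step towards exporting to a different medium
public class InMemoryPageWriter : IPageWriter
{
    public List&lt;PageXml&gt; Pages { get; } = new List&lt;PageXml&gt;();

    public bool Write(PageXml page)
    {
        Pages.Add(page); // no file names, no overwrite flag needed here
        return true;
    }
}
</code></pre>
<p>The XML enrichment code would not change at all when switching between a file-based writer and this one, which is the point of raising the abstraction.</p>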
<p>Let’s see which other groups of fields could be extracted. Looking at <code>maxSalesOrders</code> or <code>addCustomerDetails</code>, it is hard to see what to group them with. On the other hand, we observe that <code>crmService</code> and <code>locationService</code> are used by the functions that enrich the XML with external data. Therefore, one idea is to group these two together into a new dependency. Because it provides data to the export class, we name it the <code>IExportDataProvider</code> interface. This results in the following code (file <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/04_PageXmlExport.cs?ref=oncodedesign.com">04_PageXmlExport.cs</a>)</p>
 <pre><code class="language-csharp"> // file: http://tinyurl.com/04-PageXmlExport-cs  
public interface IExportDataProvider  
{  
   CustomerInfo GetCustomerInfo(string name);  
   Coordinates GetCoordinates(string city, string street, string number);  
}

public class PageXmlExport  
{  
   private readonly IPageFileWriter fileWriter;  
   private readonly IExportDataProvider dataProvider;
  
   private readonly int maxSalesOrders; // used in 4/5 methods  
   private readonly bool addCustomerDetails; // used in 2/5 methods
  
   public PageXmlExport( IPageFileWriter fileWriter,  
			int maxSalesOrders,  
			bool addCustomerDetails,  
			IExportDataProvider dataProvider)  
   { … }  
   …  
}  
</code></pre>
<p>The number of fields and dependencies got smaller after this step. However, the code of the class did not improve much. It is the same, but instead of calling functions on <code>ICrmService</code> and <code>ILocationService</code>, all the calls now go to the same <code>IExportDataProvider</code>. Its implementations will wrap the external services for the export class, but something doesn’t smell right about it. The first bad signal comes from the name: it is called a data provider, but it also gets geo-location coordinates. Looking at it from another perspective, we can also see that the interface mixes different levels of abstraction. <code>CustomerInfo</code> is a high-level concept compared with <code>Coordinates</code>.</p>
<p>Looking again at the resulting code in the <code>PageXmlExport</code> class, we see that it now has this <code>IExportDataProvider</code>, but it also uses a repository to get data from the database. A better idea may have been to wrap the repository in the <code>IExportDataProvider</code> as well. The repository is another dependency our class has, but a less visible one. The refactoring we have done made us think more about dependencies, and now, with <code>IExportDataProvider</code> as a visible dependency in the constructor and the repository used in most of the methods, it is much more obvious that they should be grouped together into one dependency.</p>
<p>Let’s roll back the last refactoring step and wrap <code>ICrmService</code> together with the repository in a real <code>IExportDataProvider</code>. While thinking this through, we also observe that the other dependencies our class has (the <code>maxSalesOrders</code> and <code>addCustomerDetails</code> settings) are only used together with the hidden dependency on the repository. They can all be wrapped into the <code>IExportDataProvider</code>. Its implementations should depend on these settings and instruct the repository accordingly. In fact, by doing this we take out of the <code>PageXmlExport</code> class the responsibility of knowing how data is read, and push it down to the <code>IExportDataProvider</code> implementations. The resulting code after this refactoring step is the following (file <a href="https://github.com/iQuarc/Code-Design-Training/blob/master/LessonsSamples/LessonsSamples/Lesson7/CohesionCoupling/05_PageXmlExport.cs?ref=oncodedesign.com">05_PageXmlExport.cs</a>).</p>
 <pre><code class="language-csharp"> // file http://tinyurl.com/05-PageXmlExport-cs  
public class PageXmlExport  
{  
  private readonly IPageFileWriter fileWriter;  
  private readonly IExportDataProvider dataProvider;  
  private readonly ILocationService locationService;
  
  public PageXmlExport( IPageFileWriter fileWriter,  
			IExportDataProvider dataProvider,  
			ILocationService locationService)  
  {  
     this.fileWriter = fileWriter;  
     this.dataProvider = dataProvider;  
     this.locationService = locationService;  
  }
  
  public bool ExportCustomerPage(string customerName)  
  {  
     PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
  
     IEnumerable&lt;CustomerData&gt; orders = dataProvider.GetCustomerOrders(customerName);
  
     // enrich content with orders  
     // ..
     
     // enrich content with customer data  
     // ..
  
    return fileWriter.WriteFile(content, &quot;CustomerPage&quot;);  
  }
  
  public bool ExportCustomerPageWithExternalData(  
			string customerName,  
			PageData externalData)  
  {  
     PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
  
     if (externalData.CustomerData != null)  
     {  
         // enrich with externalData.CustomerData  
         // …  
     }  
     else  
     {  
         CustomerInfo customerData =  
              dataProvider.GetCustomerInfo(content.Customer.Name);  
         // enrich content with customer data  
         // …  
     }
  
     IEnumerable&lt;CustomerData&gt; orders = dataProvider.GetCustomerOrders(customerName);
     
     // enrich content with orders  
     // …
     
     // enrich content by merging the external customer data with what read from DB  
     // …
  
     foreach (var address in content.Customer.Addresses)  
     {  
        Coordinates coordinates = locationService.GetCoordinates(address.City, address.Street, address.Number);  
        if (coordinates != null)  
           address.Coordinates = string.Format(&quot;{0},{1}&quot;, coordinates.Latitude, coordinates.Longitude); 
     }
     
     return fileWriter.WriteFile(content, &quot;CustomerPage&quot;);  
  }
  
  public bool ExportOrders(string customerName)  
  {  
     PageXml content = new PageXml {Customer = new CustomerXml {Name = customerName}};
     
     IEnumerable&lt;CustomerData&gt; orders =  dataProvider.GetCustomerOrders(customerName);
  
     //enrich content with orders
  
     return fileWriter.WriteFile(content, &quot;CustomerOrdersPage&quot;);  
  }  
  …  
}  
</code></pre>
<p>Looking at the result, we are definitely in a much better place than where we started. Our code design may be neither perfect nor the best, but it is definitely better. The <code>PageXmlExport</code> class now has three dependencies, which are used by most of its functions. Most of the code in the class got smaller and simpler. The methods follow a simple pattern: they get data using the <code>IExportDataProvider</code>, they enrich the content XML with it (the core functionality and knowledge of our class), and they output the result. The details of reading the data and outputting the result are no longer its concern. Our class can focus on XML enrichment only.</p>
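<p>The code above does not show an <code>IExportDataProvider</code> implementation, so here is a hypothetical sketch of one, with simplified stub types and an in-memory list standing in for the repository. It illustrates how the provider now owns both the data access and the <code>maxSalesOrders</code> setting:</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

// simplified stub for the Order entity
public class Order
{
    public string CustomerName { get; set; }
    public DateTime OrderDate { get; set; }
}

public interface IExportDataProvider
{
    IEnumerable&lt;Order&gt; GetCustomerOrders(string customerName);
}

// hypothetical implementation: it encapsulates the data source and the
// read settings that used to be constructor parameters of PageXmlExport
public class InMemoryExportDataProvider : IExportDataProvider
{
    private readonly IEnumerable&lt;Order&gt; source; // stands in for the repository
    private readonly int maxSalesOrders;

    public InMemoryExportDataProvider(IEnumerable&lt;Order&gt; source, int maxSalesOrders)
    {
        this.source = source;
        this.maxSalesOrders = maxSalesOrders;
    }

    public IEnumerable&lt;Order&gt; GetCustomerOrders(string customerName)
    {
        return source
            .Where(o =&gt; o.CustomerName == customerName)
            .OrderBy(o =&gt; o.OrderDate)
            .Take(maxSalesOrders);
    }
}
</code></pre>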
<p>We may not yet have the best abstractions in place, but now that we have better separated the concerns, it is a good moment to think more about abstraction and encapsulation. The code that gets data from the database or other sources is now decoupled from the core code that enriches the XML. We achieved the same decoupling for the code that writes the XML to disk. The details of how these are done are no longer mixed with the XML enrichment logic, and by abstracting them we can reuse the XML enrichment code to read data from any medium and output the results to any medium. In conclusion, we achieved a better Separation of Concerns and a more loosely coupled design by following a simple refactoring pattern. We did it almost mechanically, looking to reduce the number of parameters or the number of dependencies.</p>
<hr>
<h3 id="summary">Part 3: summarizing the refactoring pattern</h3>
<p>If we recap the refactoring pattern we followed to achieve loose coupling by improving cohesion, we have the following steps and recommendations:</p>
<ol>
<li>Be critical of functions that have more than about 4 parameters. They may belong to a poorly cohesive class, or they may have poor cohesion themselves.
<ul>
<li>reduce the number of parameters of functions by transforming the common (redundant) ones into class fields received through the constructor.</li>
<li>do this in small refactoring steps</li>
<li>this may reveal a class with poor cohesion</li>
</ul>
</li>
<li>Be critical of classes that contain more than about 7±2 fields (toward the high end of 7±2 if the fields are primitives, toward the low end if they are references to complex objects or services)
<ul>
<li>the number 7±2 has been found to be the number of discrete items a person can keep in mind while performing other tasks</li>
<li>reduce the number of fields by grouping the ones used by the same functions into one new field. This extracts code out of the class and increases its cohesion</li>
</ul>
</li>
<li>Be critical of classes that have more than about 4±1 dependencies - most probably they have poor cohesion
<ul>
<li>the dependencies can most probably be reduced by grouping and extracting them into new classes. This increases cohesion and leads to more loosely coupled code</li>
<li>think about putting good abstractions in place for the newly extracted classes. This will increase the reusability and extensibility of the code.</li>
<li>think about different levels of abstraction and different reasons of change when you try to improve the abstractions that define the interaction between your classes</li>
</ul>
</ul>
</li>
</ol>
<p>By following these steps with the <a href="#example">above example</a>, we refactored a class that mixed the details of creating and writing files with high-level business logic (XML enrichment) and with data access concerns, into a more decoupled design where all these concerns are separated and can be better abstracted.</p>
<h5 id="manyexamplesliketheaboveareincludedinmycodedesigntraining">many examples like the above are included in my <a href="https://oncodedesign.com/training-code-design">Code Design Training</a></h5>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Who Disposes Your Repository ]]>
            </title>
            <description>
                <![CDATA[ Recently, I’ve gone again through the discussion of how the Repository Pattern works with Dependency Injection (DI) in one of the projects I’m involved in. Even if these patterns have been around for a while and there are many examples of how to use them together, discussing some particular ]]>
            </description>
            <link>https://oncodedesign.com/blog/who-disposes-your-repository/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b7d</guid>
            <category>
                <![CDATA[ code design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 14 Oct 2014 12:01:43 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>Recently, I’ve gone again through the discussion of how the <a href="http://martinfowler.com/eaaCatalog/repository.html?ref=oncodedesign.com">Repository Pattern</a> works with <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com">Dependency Injection (DI)</a> in one of the projects I’m involved in. Even if these patterns have been around for a while and there are many examples of how to use them together, discussing some particular implementation aspects is still interesting. So, I think it might help to go through it once again, maybe from a different angle.</p>
<p>The case I want to focus on, is when the repository implementation is abstracted through a generic interface, and its implementation is injected through DI to the classes that need to read or store data.</p>
<p>Using DI in an application is often a good idea. It is an easy way to follow the <em>Separate Construction and Configuration from Use</em> principle (<a href="http://blog.8thlight.com/uncle-bob/archive.html?ref=oncodedesign.com">Uncle Bob</a> explains it in his <a href="http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882?ref=oncodedesign.com">Clean Code</a> book, and <a href="http://martinfowler.com/?ref=oncodedesign.com">Martin Fowler</a> does it <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com#SeparatingConfigurationFromUse">here</a>). Abstracting the repository through an interface is also good. It doesn’t only benefit you in writing isolated Unit Tests; it brings a good separation of the data access concerns from the business logic, and it prevents the data access specifics from leaking into the upper layers, if you encapsulate them well in the implementation of the interface. Using them together is also good, but you need to consider the stateful nature of a repository and the <a href="http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx?ref=oncodedesign.com">deferred execution</a> of an <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562(v=vs.100).aspx?ref=oncodedesign.com">IQueryable<T></a>.</p>
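<p>Deferred execution is worth a quick illustration. A LINQ query is only a description of work; nothing runs until the query is enumerated. With an <code>IQueryable</code> coming from a repository, that enumeration is when the database is hit, so the repository and its connection must still be alive at that moment. A minimal sketch (using LINQ to Objects just to show the semantics):</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

public static class Program
{
    public static void Main()
    {
        var source = new List&lt;int&gt; { 1, 2, 3 };

        // building the query executes nothing yet
        IEnumerable&lt;int&gt; query = source.Where(n =&gt; n &gt; 1);

        // the query sees changes made before enumeration...
        source.Add(4);

        // ...because it only executes here, when enumerated
        Console.WriteLine(query.Count()); // prints 3
    }
}
</code></pre>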
<p>The question I want to address is who disposes my repository implementation, and when. The repository will use some connection to the storage, which should be released when it is no longer needed. And that should happen in a deterministic way, not when the garbage collector kicks in. If my repository is implemented over the <a href="http://msdn.microsoft.com/en-us/data/ef.aspx?ref=oncodedesign.com">Entity Framework (EF)</a>, it means I should dispose the <a href="http://msdn.microsoft.com/en-us/library/system.data.entity.dbcontext(v=vs.113).aspx?ref=oncodedesign.com">DbContext</a> when I no longer need it. In the EF case this may not be too problematic, since it does clever connection management. <a href="http://blog.jongallant.com/2012/10/do-i-have-to-call-dispose-on-dbcontext.html?ref=oncodedesign.com#.VDfR6PmSx8G">Here</a> is a response from the EF team on this. However, it may not be the same for other ORMs. Even more, I think the <a href="http://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.110).aspx?ref=oncodedesign.com">IDisposable pattern</a> should be correctly implemented and followed in all cases. I read <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a> as a clear statement that instances of its implementations must be cleaned up in a deterministic way. I don’t think it’s a good idea to just leave instances undisposed, because I risk nasty leaks.</p>
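<p>The deterministic part of the <code>IDisposable</code> contract is easy to demonstrate with a toy resource (a sketch, not the repository itself): <code>Dispose()</code> runs when the <code>using</code> scope ends, not whenever the garbage collector decides to run.</p>
<pre><code class="language-csharp">using System;

// toy resource that records whether it was disposed
public class Resource : IDisposable
{
    public bool Disposed { get; private set; }

    public void Dispose()
    {
        Disposed = true; // e.g. release a storage connection here
    }
}

public static class Program
{
    public static void Main()
    {
        var resource = new Resource();
        using (resource)
        {
            Console.WriteLine(resource.Disposed); // False: still in scope
        }
        Console.WriteLine(resource.Disposed); // True: disposed at the closing brace
    }
}
</code></pre>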
<p>I prefer to have an open interface for my repository, taking the benefits <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562(v=vs.100).aspx?ref=oncodedesign.com">IQueryable<T></a> brings. This means that the user of my repository can write or alter the query itself, and I will not need to add new methods to my repository each time a new use case needs a different kind of filtering or a different data projection. This is easily achieved with an interface like this:</p>
<pre><code class="language-csharp">interface IRepository  
{  
	IQueryable&lt;T&gt; GetEntities&lt;T&gt;();    
	…  
}  
</code></pre>
<p>The clients of my repository may add additional filtering or data selection before the query is actually executed on the database server. Combining it with DI, we may have service classes like this using it:</p>
<pre><code class="language-csharp">class SalesOrdersService : ISalesOrdersService  
{  
 // The concern of calling repository.Dispose()  
 // is not in this service  
 private readonly IRepository repository;  
 …

public SalesOrdersService(IRepository repository, …)  
{  
   this.repository = repository;  
   …  
}

public decimal GetHighRiskOrdersAmount(int year)  
{  
    IQueryable&lt;Order&gt; orders = GetHighValueOrders()  
    		                    .Where(o =&gt; o.Year == year);
   
    decimal amount = 0;  
    foreach (var order in orders)  
    {  
         if (IsHighRisk(order))  
         amount += order.OrderLines.Sum(ol =&gt; ol.Amount);  
          // only the order lines of high risk orders are read from the DB.  
          // important if we expect that only 10% of orders are high risk  
    }

    return amount;  
 }

public int CountHighRiskOrders(int startWithYear, int endWithYear)  
{  
    IQueryable&lt;Order&gt; orders = GetHighValueOrders()  
 								.Where(o =&gt;  o.Year &gt;= startWithYear &amp;&amp;  
 											 o.Year &lt;= endWithYear);

    // let’s say I’m keen on performance and I only want to iterate  
    //    once through the resultset.  
    // If I used return orders.ToArray().Count(IsHighRisk)  
    //    instead of the foreach, there would be one iteration for ToArray()  
    //    and one for Count()  
    int count = 0;  
    foreach (var o in orders)  
    {  
        if (IsHighRisk(o))  
        count++;  
    }
   
    return count;  
 }

 public IEnumerable&lt;Order&gt; GetOrders(int startingWith, int endingWith)  
 {  
    return GetHighValueOrders()  
              .Where(order =&gt; order.Year &gt;= startingWith &amp;&amp;  
              order.Year &lt;= endingWith);

  // I have the flexibility to:  
  // -return IEnumerable, so the query executes when iterated first  
  // -return IQueryable, so the query can be enriched by the client before execution  
  // -return IEnumerable, but with a List underneath so the query executes here (call .ToList())  
 }

 // This function makes the high value evaluation reusable.  
 // The evaluation of the condition can be translated through  
 // LINQ to SQL into a WHERE filtering which runs in the database  
 private IQueryable&lt;Order&gt; GetHighValueOrders()  
 {
       var orders = from o in repository.GetEntities&lt;Order&gt;()  
                          join ol in repository.GetEntities&lt;OrderLine&gt;() on o.Id equals ol.OrderId  
                    where ol.Amount &gt; 100  
                    select o;

       return orders;  
  }  
 …  
}  
</code></pre>
<p>With this approach, I also get the benefit that for a new use case I may only need to wire up some of my existing services differently, reusing them, and all of them will get through DI the same instance of the repository. Therefore, the underlying implementation has the opportunity to send all the queries from one session / request over the same storage connection. Moreover, if my services return the <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562(v=vs.100).aspx?ref=oncodedesign.com">IQueryable<T></a> as <a href="http://msdn.microsoft.com/en-us/library/system.collections.ienumerable(v=vs.110).aspx?ref=oncodedesign.com">IEnumerable</a>, the query will be executed only when the client needs it.</p>
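<p>A minimal, self-contained sketch of this deferred execution (the <code>ReadOrderAmounts</code> function below is hypothetical, standing in for a repository read):</p>
<pre><code class="language-language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecutionDemo
{
    static int reads = 0;

    // Simulates a repository read: each element "touches the storage"
    // only when the sequence is actually enumerated.
    static IEnumerable&lt;int&gt; ReadOrderAmounts()
    {
        foreach (var amount in new[] { 50, 150, 250 })
        {
            reads++;
            yield return amount;
        }
    }

    static void Main()
    {
        // Building the query does not execute it.
        IEnumerable&lt;int&gt; highValue = ReadOrderAmounts().Where(a =&gt; a &gt; 100);
        Console.WriteLine(reads);      // 0 -- nothing read yet

        int count = highValue.Count(); // enumeration triggers the reads
        Console.WriteLine(reads);      // 3 -- all elements were read now
        Console.WriteLine(count);      // 2
    }
}
</code></pre>
<p>The client decides when the reads actually happen, which is exactly what returning a deferred <code>IEnumerable</code> gives us.</p>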
<p>Coming back to disposing the underlying <a href="http://msdn.microsoft.com/en-us/library/system.data.entity.dbcontext(v=vs.113).aspx?ref=oncodedesign.com">DbContext</a>, one option is to have the repository implementation also implement IDisposable, and to call <a href="http://msdn.microsoft.com/en-us/library/system.data.entity.dbcontext.dispose(v=vs.113).aspx?ref=oncodedesign.com">DbContext.Dispose()</a> when it is disposed:</p>
<pre><code class="language-language-csharp">class Repository : IRepository, IDisposable  
{  
   private MyDbContext context = new MyDbContext();

   public IQueryable&lt;T&gt; GetEntities&lt;T&gt;()  
   {  
      return context.Set&lt;T&gt;().AsNoTracking();  
   }

   public void Dispose()  
   {  
      context.Dispose();  
   }  
 …  
 }  
</code></pre>
<p>Now, I have all the benefits I described above, but who is going to call the Dispose() of my repository implementation? The services should not do it. They do not know that the implementation they got injected is an <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a>, and they shouldn’t know it. Making them implement <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a> as well, and check on Dispose() whether some of their members are <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a>, is not a solution either. My rule of thumb is that the code that creates an <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a> instance should also call Dispose() on it. This is also in line with the <a href="http://msdn.microsoft.com/en-us/library/ms182328.aspx?ref=oncodedesign.com">Code Analysis Rules</a>. In our case, that code is the DI container. Our code neither creates nor explicitly asks for the instance; it is just injected into it. By using DI I am inverting the control of creating instances from my own code to the framework. The framework should also clean them up when they are no longer needed.</p>
<p>For this, we need to make sure that two things happen:</p>
<ol>
<li>the DIC will call <a href="http://msdn.microsoft.com/en-us/library/system.idisposable.dispose(v=vs.110).aspx?ref=oncodedesign.com">Dispose()</a> on all the disposable instances it created, and</li>
<li>it will do it in a deterministic way (when no longer needed, meaning when the request / session ends)</li>
</ol>
<p>If you’re using <a href="http://msdn.microsoft.com/en-us/library/ff647202.aspx?ref=oncodedesign.com">Unity Container</a> you have to take care of both. When an instance of the <a href="http://msdn.microsoft.com/en-us/library/microsoft.practices.unity.unitycontainer.aspx?ref=oncodedesign.com">UnityContainer</a> is disposed, it will call <a href="http://msdn.microsoft.com/en-us/library/system.idisposable.dispose(v=vs.110).aspx?ref=oncodedesign.com">Dispose()</a> on all the <a href="http://msdn.microsoft.com/en-us/library/ff660872(v=pandp.20).aspx?ref=oncodedesign.com">lifetime managers</a> which are <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a>. However, the built-in lifetime managers Unity provides for short-lived instances are not disposable, so you need to write your own. <a href="http://www.neovolve.com/post/2010/06/18/Unity-Extension-For-Disposing-Build-Trees-On-TearDown.aspx?ref=oncodedesign.com">Here</a> are some examples of how to do it. Other DI containers, like <a href="http://msdn.microsoft.com/en-us/library/dd460648(v=vs.110).aspx?ref=oncodedesign.com">MEF</a>, have this built in. The other thing to take care of is when the Dispose() call chain will be kicked off. For this you need to use <a href="http://msdn.microsoft.com/en-us/library/ff660895(v=pandp.20).aspx?ref=oncodedesign.com">Scoped Containers</a> (aka Container Hierarchies). In short, this means that you will need to associate a request / session with a child container instance, and dispose that child container when the request ends. Disposing the child container will trigger the dispose of all <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a> instances it created. 
<a href="https://github.com/devtrends/Unity.Mvc5?ref=oncodedesign.com">Here</a> is a simple example of how to do this for an <a href="http://www.asp.net/mvc?ref=oncodedesign.com">ASP.NET MVC</a> application, where a child container is associated with each HTTP request.</p>
<p>Even if this approach gives a lot of flexibility and advantages, it is not easy to set up. It requires some non-trivial code to be written. In some applications the added complexity may not be justified. Let’s explore other ideas, too.</p>
<p>We could make the IRepository interface itself <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a> and not use DI to get the repository implementation, but use a <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com#UsingAServiceLocator">Service Locator</a> instead. The main difference from the above is that we are no longer inverting the control. Our code is now in charge of explicitly asking for a repository, so it should also take care of cleaning it up when it is no longer needed. Now, we don’t need to go through all the trouble of making the DI container call Dispose(), because our code will do it. The services can now use a using statement for each repository instance, like:</p>
<pre><code class="language-language-csharp">class SalesOrdersService : ISalesOrdersService  
{  
  private readonly IServiceLocator sl;

  public SalesOrdersService(IServiceLocator serviceLocator)  
  {  
      this.sl = serviceLocator;  
  }

  public decimal GetHighRiskOrdersAmount(int year)  
  {  
     using (IRepository repository = sl.GetInstance&lt;IRepository&gt;())  
     {  
        IQueryable&lt;Order&gt; orders = GetHighValueOrders(repository)  
                 .Where(o =&gt; o.Year == year);

        decimal amount = 0;  
        foreach (var order in orders)  
 	    {  
 			if (IsHighRisk(order))  
 				amount += order.OrderLines.Sum(ol =&gt; ol.Amount);  
 	    }

 	    return amount;  
 	 }  
  }

  public IEnumerable&lt;Order&gt; GetOrders(int startingWith, int endingWith)  
  {  
      using (IRepository rep = sl.GetInstance&lt;IRepository&gt;())  
      {  
      		return GetHighValueOrders(rep)  
      					.Where(order =&gt; order.Year &gt;= startingWith &amp;&amp;  
      							order.Year &lt;= endingWith);  
      }  
  // Is the above code correct? Have another look
 
  // the returned IEnumerable will be enumerated by the caller  
  // code after the rep.Dispose() –&gt; error.
 
  // To avoid this more (uncontrollable) approaches may be taken  
  // –&gt; error prone –&gt; low maintainability
 
  // -receive the repository instance created by the caller code.  
  // The caller code creates it, so it needs to dispose it, not me
 
  // -call .ToList() on this, but still return it as IEnumerable  
  // What happens, when the caller iterates the  
  // order.OrderLines? –&gt; error
 
  // -this function is refactored to only send a query,  
  // which should be wrapped and executed by the caller  
  }  
}  
</code></pre>
<p>We lose some of the flexibility and advantages of the above. The main one is that an <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562(v=vs.100).aspx?ref=oncodedesign.com">IQueryable<T></a> issued by the repository is now scoped to the using statement, not to the request / session. If it is passed as a return value to another class and it gets executed after the using statement ends, an error will occur because the underlying connection is closed. Therefore, I need to be careful that the queries behind the return values of the services that use a repository are executed before those services return. This means that my clients cannot add additional filters to the query, nor can they decide when / if to execute it. This reduces the flexibility and reusability of my code. This may not be that bad in some applications. One of the biggest risks I see is that, in trying to work around these limitations, there may be code that passes around an undisposed repository to different classes or functions (as the <code>GetHighValueOrders()</code> function does). Without discipline, this may get out of hand.</p>
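<p>The failure mode can be shown with a small self-contained sketch (all names hypothetical): a deferred query that captures a disposable resource blows up when the caller enumerates it after the using block has ended:</p>
<pre><code class="language-language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for a storage connection that refuses reads after Dispose().
class Connection : IDisposable
{
    public bool Disposed { get; private set; }

    public IEnumerable&lt;int&gt; Read()
    {
        foreach (var v in new[] { 1, 2, 3 })
        {
            if (Disposed)
                throw new ObjectDisposedException(nameof(Connection));
            yield return v;
        }
    }

    public void Dispose() =&gt; Disposed = true;
}

class Program
{
    static IEnumerable&lt;int&gt; GetValues()
    {
        using (var connection = new Connection())
        {
            return connection.Read().Where(v =&gt; v &gt; 1); // deferred!
        } // connection disposed here, before anyone iterates
    }

    static void Main()
    {
        var values = GetValues();
        try
        {
            values.Count(); // enumeration happens only now -- too late
        }
        catch (ObjectDisposedException)
        {
            Console.WriteLine("query executed after Dispose()");
        }
    }
}
</code></pre>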
<p>Another, different approach is not to make the repository implementation disposable, and to dispose the underlying connection (<a href="http://msdn.microsoft.com/en-us/library/system.data.entity.dbcontext(v=vs.113).aspx?ref=oncodedesign.com">DbContext</a>) immediately after it is no longer needed. This implies that the <a href="http://msdn.microsoft.com/en-us/library/vstudio/bb351562(v=vs.100).aspx?ref=oncodedesign.com">IQueryable<T></a> (better said, the LINQ query) will not leave the repository scope. This makes the repositories use-case specific, with a closed interface.</p>
<pre><code class="language-language-csharp">class OrdersRepository : IOrdersRepository  
{  
 	public IEnumerable&lt;Order&gt; GetOrders(int startingWith,  
 	                                    int endingWith,  
 	                                    decimal highValueAmount)  
 	{  
 		using (var db = new MyDbContext())  
 		{  
 			var orders = from o in db.Orders  
 				join ol in db.OrderLines on o.Id equals ol.OrderId  
 				where o.Year &gt;= startingWith &amp;&amp;  
 				      o.Year &lt;= endingWith &amp;&amp;  
 				      ol.Amount &gt; highValueAmount  
 				select o;
		 
 			return orders.ToList();  
 		}  
 	}  
 	…  
}  
</code></pre>
<p>This approach is the most rigid, and it makes me add new functions and new repositories each time functionality is added or changed. I usually do not use it, unless I am dealing with a storage that doesn’t support LINQ.</p>
<p>I prefer to implement the first approach described above, especially in large applications with many use cases. It is the better choice when you are dealing with more types of implementations that use expensive external resources, which need to be released as soon as possible in a deterministic way. In other words, with this I am addressing all the <a href="http://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx?ref=oncodedesign.com">IDisposable</a> implementations (not only the repository) which may be injected through DI.</p>
<p>The first approach and the second one are not exclusive. I may get the repository through DI in some cases and use the Service Locator where I explicitly want to be more closed toward my clients and need to dispose the repository even sooner than the request / session would end. I very often use them together: the repository through DI for read-only cases, where I want more flexibility (more reads on the same repository instance), and the repository through the <a href="http://www.martinfowler.com/articles/injection.html?ref=oncodedesign.com#UsingAServiceLocator">Service Locator</a> for read-write cases, where I want a smaller scope for my unit of work.</p>
<hr>
<h5 id="manydesigndiscussionsliketheaboveareincludedinmycodedesigntraining">many design discussions like the above are included in my <a href="https://oncodedesign.com/training-code-design/" title="Code Design Training">Code Design Training</a></h5>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Unit Testing Training ]]>
            </title>
            <description>
                <![CDATA[ Unit Testing has been one of the dearest technical subjects for me in the past years. A great influencer was a TDD Workshop lead by J. B. Rainsberger to which I was lucky enough to attend, somewhere in 2010. It was a very good fit with the moment of my ]]>
            </description>
            <link>https://oncodedesign.com/blog/about-unit-testing-training/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b80</guid>
            <category>
                <![CDATA[ course ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 09 Sep 2014 17:54:24 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>Unit Testing has been one of the dearest technical subjects for me in the past years. A great influence was a TDD Workshop led by <a href="http://www.jbrains.ca/?ref=oncodedesign.com">J. B. Rainsberger</a>, which I was lucky enough to attend somewhere in 2010. It was a very good fit with that moment of my career. I was starting to realize that the way we were developing software was quite far from the principles and disciplines advocated by different industry leaders. I was looking for concrete methods to improve the way my team colleagues and I write software.</p>
<p>Not only did <a href="http://www.jbrains.ca/?ref=oncodedesign.com">JB</a> teach us what Unit Testing and TDD are, he also demonstrated a powerful technique: getting to a modular design driven by unit tests and a few simple rules. He showed how to achieve a design that is inexpensive to change, by minimizing accidental complexity. This made me see in unit testing one of the most efficient tools for assuring a sustainable quality level of the code in the projects I was involved in. It motivated me to teach others to write Unit Tests that improve the quality of their production code.</p>
<p>I wanted the next projects I started work on to begin with a training on unit testing. I wanted the entire team to write and care about unit tests in a consistent manner. This is how my Unit Testing training was born. I initially developed it for a newly created team that was starting a challenging project on a tight schedule. I knew from the start that it would not be possible for the very few seniors to review enough and do enough pair programming with everyone to sustain the quality level of our code. We needed a flexible design which could accommodate the volatile requirements we were supposed to deal with, in a predictable way and at reasonable costs. I thought that if everyone covered their code with well-written unit tests, then we would end up with a design that is good enough. Maybe not the best one, but good enough. And it worked! Even though we were on a tight schedule, the initial investment in learning and writing unit tests paid off in the end.</p>
<p>Since then I have given the training many times for different teams in <a href="http://www.isdc.eu/?ref=oncodedesign.com">ISDC</a> and for some of their clients. We have seen many benefits, not only at project startup, but also for ongoing projects with existing teams. It is always about reading and changing code, rather than writing code. Unit testing, if done the right way, can significantly increase the efficiency and predictability of the changes we make in code.</p>
<p>Another motivation to build this training was to share with others my experiences of learning unit testing. For many teams it is very important to start on the right track with unit testing. When unit testing is done incorrectly it can cause schedules to slip, waste time, and lower motivation and code quality. It is a double-edged sword, which many teams learn to master the hard way. One of the lessons I’ve learned the hard way was: “<em>Never treat your unit tests as second class citizens</em>”. It was in one of the first projects I was learning unit testing on. We ended up after a few weeks with a test suite which was breaking at any change we made to the production code. We couldn’t refactor or redesign the production code, because the tests would break. It wasn’t because of poor design on the production code side; it was because of the poor quality of our unit test code. We had created too strong a coupling in the test code, which ended up hurting the production code as well. My team colleague and I ended up drawing sticks on who was going to spend the weekend fixing the tests. I wouldn’t want anyone to repeat this painful experience.</p>
<p>In the past few weeks I have taken the time to review, refresh, enrich and restructure the entire training material of my course. I have added the topics and examples which, until now, I never had the time to detail as I wanted.</p>
<p>At the start, the content was largely built on <a href="http://osherove.com/?ref=oncodedesign.com">Roy Osherove</a>’s <a href="http://artofunittesting.com/?ref=oncodedesign.com"><em>The Art of Unit Testing</em></a> book. Now I have enriched it with new elements from the books and articles of <a href="http://www.threeriversinstitute.org/blog/?ref=oncodedesign.com">Kent Beck</a>, <a href="http://martinfowler.com/?ref=oncodedesign.com">Martin Fowler</a> and others. I have also taken the opportunity to reshape the training with my experiences from the last few years, and to add the new things I have learned from pairing with different people at code retreats or on other occasions.</p>
<p>The training focuses on two main aspects: the <em>maintainability</em> of the tests and <em>the benefits</em> to the quality of the production code. The test code may be a significant share of all the lines of code forming a project. However, even if it is big in size, it is important that it does not increase the complexity, so that it has low maintenance costs. The greatest benefit I see from writing <em>Good</em> Unit Tests is in the design of the production code. By learning a few things about how to write your tests, you can exercise a positive pressure on your production code, leading it towards a better design.</p>
<p><em>UPDATE:</em> I have published a presentation page with the full format of the entire course <a href="https://oncodedesign.com/training-unit-testing/" title="Unit Testing Training">here</a>.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Why I Write Isolated Unit Tests ]]>
            </title>
            <description>
                <![CDATA[ Most of the times when I talk about unit testing or I ask encourage my colleagues to write unit tests, I emphasis on writing GOOD Unit Tests which are easy to write, they test only one thing and they run in isolation. I always make a clear distinction between these ]]>
            </description>
            <link>https://oncodedesign.com/blog/why-i-write-isolated-unit-tests/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b82</guid>
            <category>
                <![CDATA[ design ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Fri, 01 Aug 2014 13:11:34 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>Most of the times when I talk about unit testing or I <span style="text-decoration:line-through;">ask</span> encourage my colleagues to write unit tests, I emphasize writing <em>GOOD Unit Tests</em> which are <em>easy to write</em>, <em>test only one thing</em> and <em>run in isolation</em>. I always make a clear distinction between these very granular and highly isolated unit tests, which I call <em>GOOD Unit Tests</em>, and the <em>Integration Tests</em>. I roughly name <em>Integration Tests</em> any automated test that has more than one point of failure or exercises more external objects (other classes’ objects, services, databases, etc.). These tests verify that all those pieces work well together. What I want to focus on instead are the very granular unit tests, which verify the basic correctness of each unit in isolation and assert only one thing. When such a test fails I know exactly where the problem is, from the test name alone.</p>
<p>The greatest benefit we get from writing <em>GOOD Unit Tests</em>, which are <em>easy to write</em>, <em>test only one thing</em> and <em>run in isolation</em>, is the better code design that results in the production code. These unit tests put positive pressure on the production code, making it better. I am not chasing bug-free code, nor full coverage in regression testing with these unit tests, but a higher <em>quality code design</em> when I am working with teams that are less experienced in doing good design.</p>
<p>Yes, there are drawbacks as well. We might go to the other extreme and have the production code design suffer from this granular unit testing, resulting in too-small classes, with code smells like feature envy or unnecessary complexity. I totally see the point <a href="http://david.heinemeierhansson.com/?ref=oncodedesign.com">DHH</a> made during the ‘<a href="http://martinfowler.com/articles/is-tdd-dead/?ref=oncodedesign.com">Is TDD Dead?</a>’ debate, that this might lead to creating interfaces or abstract classes only for the sake of mocking. Indeed, it may get to making interfaces which do not create abstractions and which break encapsulation. Granular unit testing will not help prevent these; on the contrary. When our code design falls into being too granular, practices like eliminating duplication and reducing the dependencies of a class while refactoring, plus a consistent code structure, are helpful to keep things in balance. In the end it all depends on your context: what problem you want to solve, what your constraints are, how experienced your team is, and many other aspects.</p>
<p>We couple things by nature. It is more common to put everything in one class or one function than to think about separating the concerns by factors of change. In most of the cases when I am looking over poorly designed code I see too much coupling (or coupling without benefits) and poor cohesion. Everything is done in one or two big classes. In <span style="text-decoration:line-through;">all</span> most of the cases it is clear that if someone had tried to write <em>GOOD Unit Tests</em> on that code, they would have been forced to refactor it towards a better design. It would not have been possible to write small unit tests otherwise. This would have turned the code into a better one. Maybe not the best design, but certainly better. It would break the one thing into more, and the breaking would be driven by test cases, which leads to a good separation of concerns. I often see that the <em>GOOD Unit Tests</em> make the design move from one like this</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_300/l_text:PT%20Sans_20:One%20big%20thing,g_south,co_rgb:333333/One-big-thing.png" alt="" loading="lazy"></p>
<p>towards one like this</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_300/l_text:PT%20Sans_20:Beter%20design%3F,g_south,co_rgb:333333/Not-too-many-not-too-few.png" alt="" loading="lazy"><br>
which is clearly better than the first one, maybe even better than this</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_300/w_300,h_295,c_pad,g_north/l_text:PT%20Sans_20:Coupled%20things.%20Everything%20is%20talking%20to%20everything,g_south,co_rgb:333333/Too-coupled-things.png" alt="" loading="lazy"></p>
<p>, and clearly better than this</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_300/w_300,h_213,c_pad,g_north/l_text:PT%20Sans_20:Very%20many%20tiny%252C%20small%20things,g_south,co_rgb:333333/Many-small-things.png" alt="" loading="lazy"></p>
<p>, which is at the other extreme than the first one. We want to balance things and to get somewhere in the middle. (<em>These design examples are copied from <a href="http://www.idesign.net/about?ref=oncodedesign.com">Juval Lowy</a> presentation on ‘<a href="http://channel9.msdn.com/Events/TechEd/NorthAmerica/2010/ARC201?ref=oncodedesign.com">Modular Application Design</a>’, which I value a lot.</em>)</p>
<p>Another important advantage of the <em>Good Unit Tests</em> is that they are easy to write and to maintain, therefore they have low costs. For example if we consider the following setup:</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_400/w_400,h_390,c_pad,g_north/l_text:PT%20Sans_20:testing%20f%28%29,g_south,co_rgb:333333/TestingF.png" alt="" loading="lazy"></p>
<p>where function <code>f()</code> of class <code>A</code> calls <code>g()</code> of class <code>B</code>, which in its turn calls <code>h()</code> of class <code>C</code>. After the return of <code>g()</code>, <code>f()</code> may make a call to <code>h’()</code> depending on the result of <code>g()</code>. A quite common setup in an OO program. Now, if we want to write a test to verify the correctness of <code>f()</code>, and only <code>f()</code>, it will not be easy. When we write the arrange part of the test, we need to take into account the preconditions and postconditions of all the functions, and find the test data that will make the code flow be the one we want in our test scenario. This makes our arrange code hard to write. Asserting will also be difficult: the expected result of <code>f()</code> may need to take into account calculations made by the other functions as well. Even if we manage to write the test, it will be hard to maintain. Also, there are high chances for this test to break when refactoring or other changes occur in classes <code>B</code> or <code>C</code>. Another disadvantage of such an integration test is that when it fails we cannot clearly say where the bug is. It may be in classes <code>B</code> or <code>C</code>, even though those are not the classes under test. In conclusion, this kind of test has high costs (difficult to write and maintain) and low benefits (I don’t know where the bug is when it fails).</p>
<p>The alternative is to refactor my code and replace <code>B</code> and <code>C</code> with interfaces which abstract them. This allows me to replace them with fakes (mocks or stubs) in my tests, which are totally under the control of my test code. Now I can easily write many <em>GOOD Unit Tests</em> for class <code>A</code>, in isolation, to verify its basic correctness. I can write one test to verify ONE thing. Looking more closely at the production code, we’ll see <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612?ref=oncodedesign.com">design patterns</a> and <a href="http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod?ref=oncodedesign.com">SOLID</a> principles that may emerge from this refactoring. For example, we’ll tend to program against interfaces, we’ll depend on abstractions and not on implementation details (partially what <a href="http://docs.google.com/a/cleancoder.com/viewer?a=v&pid=explorer&chrome=true&srcid=0BwhCYaYDn8EgMjdlMWIzNGUtZTQ0NC00ZjQ5LTkwYzQtZjRhMDRlNTQ3ZGMz&hl=en&ref=oncodedesign.com">DIP</a> says), we’ll have our class dependencies visible, and we’ll move towards more extensible and reusable code (partially what <a href="http://docs.google.com/a/cleancoder.com/viewer?a=v&pid=explorer&chrome=true&srcid=0BwhCYaYDn8EgN2M5MTkwM2EtNWFkZC00ZTI3LWFjZTUtNTFhZGZiYmUzODc1&hl=en&ref=oncodedesign.com">OCP</a> says).</p>
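<p>A minimal sketch of this refactoring (all names here are hypothetical): <code>B</code> is hidden behind an interface, so <code>A</code> can be tested in isolation with a hand-written stub instead of the real collaborator:</p>
<pre><code class="language-language-csharp">using System;

interface IB
{
    int G(int input);
}

class A
{
    private readonly IB b;
    public A(IB b) { this.b = b; }

    // f() contains the logic under test; its collaborator is abstracted.
    public string F(int input)
    {
        return b.G(input) &gt; 10 ? "high" : "low";
    }
}

// Stub used only by the test; fully under the test code's control.
class StubB : IB
{
    public int Result;
    public int G(int input) =&gt; Result;
}

class Program
{
    static void Main()
    {
        // One test, one thing: F returns "high" when G reports a large value.
        var stub = new StubB { Result = 42 };
        var sut = new A(stub);
        Console.WriteLine(sut.F(0)); // prints "high"
    }
}
</code></pre>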
<p>It is good to focus on covering with unit tests all the logic and calculations. This means the code that has <code>ifs</code>, <code>whiles</code>, <code>fors</code>, etc. and mathematical or other data structure operations. Covering plumbing code, data conversion or basic variable assignment brings little value. When we strive to achieve good coverage of the logic, because it is easier to fully cover it with <em>GOOD Unit Tests</em> rather than <em>Integration Tests</em>, another positive effect happens: the logic gets pushed away from the external frameworks and libraries. The external libraries tend to get abstracted, and our core logic separated from the plumbing code that integrates them. This leads to a better separation of concerns. Our business logic tends to be pushed away from the presentation and from the data access, towards the middle. Wasn’t this something we’ve always wanted?</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_500,h_400,c_pad,g_north/l_text:PT%20Sans_20:Logic%20is%20pushed%20from%20Presentation%252C%20Data%20Access%20and%20Cross-Cutting%20Concerns,g_south,co_rgb:333333/UnitTesting-on-layers.png" alt="" loading="lazy"></p>
<p>Another aspect of <em>GOOD Unit Tests</em> is that we can fully and easily test all the scenarios that the code needs to handle, in a very detailed way. I usually think of the running car example. If the car does not move, it is clearly broken. However, we don’t know which part is not working, so we need to verify each part separately to find and fix the problem. Moreover, even if the car moves, we can’t say from this integration test alone whether all the parts function at correct parameters. It may be that only after driving more than 1000 km without stopping will the engine get overheated, because of a malfunction in the cooling parts. Coming back to code, I rely on <em>Good Unit Tests</em> to verify the basic correctness in detail, with high coverage. Like in the car example, there may be scenarios that I cannot test otherwise, or that I would just ignore because they are too hard to cover with an integration test. For instance, how would you test that your code behaves correctly when the hard disk is full? Would you fill it to run a test? Can you do that on the CI server?</p>
<p>Therefore, we target a high coverage with unit tests, and we verify that each of the services works well in isolation. The unit tests can go into detail at lower costs. Then, on top, we may have a few <em>Integration Tests</em>, which do not go into much detail because they are costly. They just check that all the services and components can work together and that all the configurations are consistent. The integration tests usually follow some happy-flow scenarios and focus on verifying that all the components they touch are running. They do not target correctness the way the unit tests do. The figure below shows three classes fully tested in isolation by unit tests (blue arrows) and two integration tests (purple arrows) on top, which touch all the working components, but fewer code paths.</p>
<p><img src="https://res.cloudinary.com/oncodedesign/image/upload/w_450/w_450,h_420,c_pad,g_north/l_text:PT%20Sans_20:Integration%20tests%20touch%20all%20classes%252C%20but%20have%20lower%20coverage,g_south,co_rgb:333333/Tests-coverage.png" alt="" loading="lazy"></p>
<p>In theory, both with isolated <em>Unit Tests</em> and with <em>Integration Tests</em> we could reach a full coverage of the entire code we write. With unit tests we need to write many pairs of isolated <em>Collaboration Tests</em> and <em>Contract Tests</em> to reach a full coverage. With <em>Integration Tests</em> we need to be able to specify and simulate all the test scenarios. In general, the costs of writing and maintaining such integration tests, which cover all the details, grow exponentially, and they are very fragile to change. Any change in the system is very likely to break them, which makes them not feasible. That is why we need to test intermediate results. We break the end-to-end tests into smaller tests, until we get to the <em>Good Unit Tests</em>, which check each detail at the lowest level. By doing this, not only do we get to <span style="text-decoration:line-through;">full</span> a high coverage at lower costs, but we also get better production code in the process. <em>Integration Tests</em> rarely bring any benefits to the design of the production code. However, they contribute to the test suite by covering the component integration and the consistency of the configurations.</p>
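<p>A hypothetical sketch of such a Collaboration Test / Contract Test pair (invented names): the collaboration tests exercise the class against a stubbed dependency, and the contract test verifies that a real implementation honors what the stub assumed, so the two small isolated tests together stand in for one expensive integration test.</p>

```python
import datetime

class Greeter:
    """Class under test; depends only on an abstract clock."""
    def __init__(self, clock):
        self.clock = clock
    def greet(self):
        return "Good morning" if self.clock.hour() < 12 else "Good afternoon"

# Collaboration tests: Greeter is verified against a stubbed clock.
class StubClock:
    def __init__(self, h): self.h = h
    def hour(self): return self.h

assert Greeter(StubClock(9)).greet() == "Good morning"
assert Greeter(StubClock(15)).greet() == "Good afternoon"

# Contract test: any real clock must honor what the stub assumed,
# namely that hour() returns an int in [0, 24).
class SystemClock:
    def hour(self): return datetime.datetime.now().hour

def check_clock_contract(clock):
    h = clock.hour()
    assert isinstance(h, int) and 0 <= h < 24

check_clock_contract(SystemClock())
```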
<p>To conclude, I always treat my unit test code as a first-class citizen. It is separated from production code and from other test code. I like my unit tests to be <em>small</em>, <em>easy to write</em>, to <em>test only one thing</em> and to <em>run in isolation</em>. The highest benefit I want from them is to put pressure on my production code to make it of better quality. I write them in very short cycles with the production code, even if I do code first, because I want to refactor in small steps. I rely on them for basic correctness, and I target a high coverage of the code that does logic and calculations. Separately, I write different levels of <em>Integration Tests</em>. They check that several services or components (which I already know are functioning well in isolation) can work together. They may also check interactions with external services, frameworks or data sources. Sometimes, I also have integration tests for regression testing, performance or load testing.</p>
<h5 id="morediscussionsaboutwritinggoodunittetsarepartofmyunittestingtraining">More discussions about writing good unit tests are part of my <a href="https://oncodedesign.com/training-unit-testing/">Unit Testing Training</a></h5>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ The Code Design Training ]]>
            </title>
            <description>
                <![CDATA[ It was few months ago more than a year ago (22nd of February 2013 in my Trello card, Ideas column), when I came up with the thought of developing a training about how to write good code. At that time I was doing a lot of thinking about how to ]]>
            </description>
            <link>https://oncodedesign.com/blog/the-code-design-training/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b7a</guid>
            <category>
                <![CDATA[ course ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 26 Jun 2014 22:49:38 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>It was <span style="text-decoration:line-through;">few months ago</span> more than a year ago (22<sup>nd</sup> of February 2013 in my <a href="http://www.trello.com/?ref=oncodedesign.com">Trello</a> card, <em>Ideas</em> column), when I came up with the thought of developing a training about how to write good code. At that time I was doing a lot of thinking about how to best use my one extra free day, as I started to work part time. The idea to use part of this time to develop trainings came rather naturally as I was already teaching and coaching colleagues about good code and also holding trainings about good unit tests in <a href="http://www.isdc.eu/?ref=oncodedesign.com">ISDC</a> and for some of their clients. I also realized that holding trainings was a really enjoyable experience for me, so the fit was quite perfect.</p>
<p>What I intended with this kind of course is to express and explain how all the best practices from the object oriented design world are structured in my mind. It is about how the links and connections I make among these various design principles and patterns result in the code I write. During my previous coaching sessions, I realized that explaining my vision helped more than just presenting the theoretical concepts. It helped both in getting a better understanding of their usefulness and in writing better code. The <em>Code Design</em> training was born from this desire: put all the patterns, principles and practices in a clear map, and present them using code examples from real projects to any team from any company.</p>
<p>The entire training is structured around the idea of designing code that <strong>embraces change</strong>. Code that is <strong>easy</strong> to <strong>change</strong>. Code that is <strong>inexpensive</strong> to <strong>change</strong>. Code with <strong>predictable</strong> effects of <strong>change</strong>. <strong>Changeable</strong> code. Change is the leitmotif of my training. What I want to teach is how to drive all the design decisions to minimize the effect of change. That is why this is not an <em>Object Oriented Design</em> training. It is not only about that. It is a <em>Code Design</em> training that focuses on minimizing the cost of change. Because, at the end of the day, the most important benefit of quality code is the low cost of change.</p>
<p>I started to actually work on developing this training somewhere in October 2013, and I worked on it regularly in the free days left over from my regular part-time job. I have taken an iterative approach. First I put on paper the main ideas and topics I wanted to teach. Then I defined the learning objectives and summarized what one would learn from this course. Next I put on paper a draft script for the entire course. After that I enriched this draft in an iterative way, by identifying materials that would lead to reaching the learning objectives. I gathered concepts, examples, explanations and ideas from my favorite books, like <a href="http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882?ref=oncodedesign.com"><em>Clean Code: A Handbook of Agile Software Craftsmanship</em></a> or <a href="http://www.amazon.co.uk/Code-Complete-Practical-Handbook-Construction/dp/0735619670/?ref=oncodedesign.com"><em>Code Complete: A practical handbook of software construction</em></a>, and others that I have read over the years. It ended up being a selection of concepts and examples from these books, structured, interpreted and explained in my own way, shaped by my own experience of writing code and explaining it to others. At some point the training lessons got the shape I was looking for, with each lesson targeting roughly one or more of the objectives. In the end the training lessons are:</p>
<ul>
<li>Part 1: <em>Good Code Design and Design Patterns</em>
<ul>
<li>Lesson 1: <em>Why Code Quality and Code Design</em></li>
<li>Lesson 2: <em>Clean Code</em></li>
<li>Lesson 3: <em>Separation of Concerns</em></li>
<li>Lesson 4: <em>Design Patterns</em></li>
</ul>
</li>
<li>Part 2: <em>Object Oriented Design Principles and Practices</em>
<ul>
<li>Lesson 5: <em>Object Oriented Design Principles</em></li>
<li>Lesson 6: <em>Dependency Injection</em></li>
<li>Lesson 7: <em>From Principles and Patterns to Practices</em></li>
<li>Lesson 8: <em>Application Infrastructure</em></li>
</ul>
</li>
</ul>
<p>Somewhere along the way I started to think more seriously about another important stakeholder in this: the companies that would ask me to hold this training for their employees. I made a commercial offer for the training, which details the outcomes, the benefits and the costs. I also considered more carefully how I would customize the training for different needs. The training lessons are built in such a way that they can be taught individually and still bring value, or a subset of them can be selected into a shorter version of the course for an audience that doesn’t need to go through all the subjects. Having the offer done, I sent it through my contacts to some companies in Cluj. I got good feedback, which encouraged me even more to dedicate time and energy to developing the training.</p>
<p>Once the course script was complete enough, I started to build the training materials: PowerPoint slides, exercises and code samples, and to select videos that can contribute to the learning experience.</p>
<p>Tomorrow is the first time I will hold it as an internal company training, for <a href="http://www.wirtek.ro/?ref=oncodedesign.com">Wirtek</a>, a company here in Cluj which shows a high desire to increase the quality of its services by investing in people. In a way I am glad that the first run of the training is not in the company I regularly work for, because it is a new context, which makes it a better test for me and for the training itself. I am quite confident that it will go well and that it will help them, because I have put a lot of effort, energy and thought into it. I’m looking forward to a great day tomorrow!</p>
<p></p>
<p><em>UPDATE:</em> I have published a presentation page with the full format of the entire course <a href="https://oncodedesign.com/training-code-design/" title="Code Design Training">here</a>.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ IT Camp 14 ]]>
            </title>
            <description>
                <![CDATA[ Tomorrow the 4th edition of IT Camp, starts in Cluj-Napoca. It’s a good time to stop from day to day work and look towards the community to share and to learn from the other fellow developer’s experiences.


This year I am going to add to the story of ]]>
            </description>
            <link>https://oncodedesign.com/blog/it-camp-14/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b77</guid>
            <category>
                <![CDATA[ itcamp14 conference Cluj Microsoft ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Wed, 21 May 2014 11:52:32 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>Tomorrow the 4<sup>th</sup> edition of <a href="http://itcamp.ro/?ref=oncodedesign.com">IT Camp</a> starts in Cluj-Napoca. It’s a good time to take a break from the day-to-day work and look towards the community, to share and to learn from fellow developers’ experiences.</p>
<p>This year I am going to add to the story of how to reach a good, quality Code Design that can sustain your project over time. <a href="http://2013.itcamp.ro/?ref=oncodedesign.com">Last year</a> I talked about why quality is important in software and how it can be achieved by benefiting from the positive pressure GOOD Unit Tests put on our design. <a href="http://itcamp.ro/agenda.cshtml?ref=oncodedesign.com">This year</a> I am talking about the other part: the Application Infrastructure. The two go hand in hand, completing each other, in leading your code towards a good quality that can pass the test of time. Usually the Application Infrastructure comes first, but I am telling the story in reverse order.</p>
<p>So, if you’ve seen my talk from last year, come Friday morning to the <em>Verdi room</em> to hear the first part of the story. If you haven’t seen it yet, you can watch the recording <a href="https://vimeo.com/67109072?ref=oncodedesign.com">here</a> after Friday, so you can get the story in its natural order.</p>
<p>See you at IT Camp! Have a great conference!</p>
<p></p>
<p>PS:<br>
I hope the title: “<a href="http://itcamp.ro/agenda.cshtml?ref=oncodedesign.com">Quality Code through Application Software Infrastructure</a>” doesn’t mislead you into thinking that it is going to be an IT Pro talk about servers and operating systems. Nothing like that. It’s going to be about Code Design.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ IT Camp – A Great Experience ]]>
            </title>
            <description>
                <![CDATA[ IT Camp happened at the end of this week in Cluj. I am proud that the town I am living in hosts the biggest premium conference on Microsoft technologies in Romania. It gathered almost 400 participants, from which a significant part were from other cities. My impression was that most ]]>
            </description>
            <link>https://oncodedesign.com/blog/it-camp-a-great-experience/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b7c</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Sun, 26 May 2013 21:45:31 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p><a href="http://itcamp.ro/?ref=oncodedesign.com">IT Camp</a> happened at the end of this week in Cluj. I am proud that the town I live in hosts the biggest premium conference on <a href="http://www.microsoft.com/?ref=oncodedesign.com">Microsoft</a> technologies in Romania. It gathered almost 400 participants, a significant part of whom came from other cities. My impression was that most of them were experienced software professionals. I hope to see people coming from neighboring countries as well in future editions. I think it is worth the trip. All the sessions are in English, there are a lot of international <a href="http://itcamp.ro/speakers.cshtml?ref=oncodedesign.com">speakers</a> and there is a great opportunity to meet experienced professionals from the <a href="http://www.microsoft.com/?ref=oncodedesign.com">Microsoft</a> technologies world.</p>
<p>The first day was opened by two inspirational keynotes given by <a href="http://www.dotnetrocks.com/?ref=oncodedesign.com">Richard Campbell</a> and <a href="http://www.timhuckaby.com/?ref=oncodedesign.com">Tim Huckaby</a>. I was impressed by how friendly, open and close to the audience <a href="http://twitter.com/richcampbell?ref=oncodedesign.com">Richard</a> is. I think anyone could approach him during the breaks and lunch for any questions or discussions. The same goes for <a href="https://twitter.com/TimHuckaby?ref=oncodedesign.com">Tim</a>. It was a rare opportunity to meet and speak with such great guys. Thanks for being here!</p>
<p>I was honored to be one of the <a href="http://itcamp.ro/speakers.cshtml?ref=oncodedesign.com">speakers</a>. I am glad that I had the opportunity to speak in front of a large audience and to share my experience. I was a bit nervous to have my session on the 2<sup>nd</sup> day, on the ‘Architecture and Best Practices’ track, after renowned speakers like <a href="http://www.kulov.net/?ref=oncodedesign.com">Martin Kulov</a>, <a href="peterleeson.wordpress.com">Peter Leeson</a>, <a href="http://www.dotnetrocks.com/?ref=oncodedesign.com">Richard Campbell</a> and <a href="http://www.sese.ro/?ref=oncodedesign.com">Sergiu Damian</a>. However, my story on how you can get to high code quality by writing good unit tests fit well after the talks of <a href="https://twitter.com/peterleeson?ref=oncodedesign.com">Peter</a>, <a href="http://twitter.com/richcampbell?ref=oncodedesign.com">Richard</a> and <a href="https://twitter.com/sergiudamian?ref=oncodedesign.com">Sergiu</a>. <a href="https://twitter.com/peterleeson?ref=oncodedesign.com">Peter</a> and <a href="http://twitter.com/richcampbell?ref=oncodedesign.com">Richard</a> touched other facets of quality in software. My good friend and <a href="http://www.rabs.ro/?ref=oncodedesign.com">RABS</a> colleague, <a href="https://twitter.com/sergiudamian?ref=oncodedesign.com">Sergiu</a>, walked us through an architect’s challenges. I think I added to both subjects by talking about code quality and the challenge of getting there with the whole team. I had a good time speaking, I liked it and I’ll do it again. I hope I’ve inspired my audience in one way or another, and even if some were perhaps not convinced by the technique I presented, I hope they went away with a new perspective, maybe a new author to read, or at least a good story. I was very happy to hear from someone that one of his takeaways from the conference was one of the key points of my talk. It made me feel good :)</p>
<p>My slides are published <a href="http://www.slideshare.net/FlorinCoros/driving-your-team-towards-code-quality?ref=oncodedesign.com">here</a>. I did not have time to include a code demo in this talk, but I will do a full demonstration of how good unit tests put a positive pressure on the production code design next week in Bucharest, at the <a href="http://itakeunconf.com/good-unit-tests-ask-for-quality-code/?ref=oncodedesign.com">I TAKE Unconference</a>, as an Open Session.</p>
<p>One of the sessions I liked a lot was the one given by <a href="peterleeson.wordpress.com">Peter Leeson</a>. It was perfect! I hope to see him at future editions or other similar events in Romania. I hope that some companies active in Cluj or in Romania were inspired to pay more attention to improving the quality of their products or services, because in the end this is the only thing that will differentiate us from cheaper services in eastern countries or Asia.</p>
<p>All the sessions from <a href="http://itcamp.ro/?ref=oncodedesign.com">IT Camp</a> were recorded and will be available soon on the conference web site.</p>
<p><a href="http://itcamp.ro/?ref=oncodedesign.com">IT Camp</a> was a great experience for me. I’ve met great people and made some new friends and contacts. Big thanks to <a href="http://www.avaelgo.ro/avaelgoblog?ref=oncodedesign.com">Mihai</a> and <a href="http://www.tudy.ro/?ref=oncodedesign.com">Tudy</a> for organizing this, thank you for giving me the opportunity to speak, and thank you all for attending. See you next year at an even better edition!</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ IT Camp 2013 ]]>
            </title>
            <description>
                <![CDATA[ I will speak this year to ITCamp conference on 23rd of May.


When I have received the invitation (thanks Mihai and Tudy for inviting me) my first thought was whether I should talk about Unit Testing or something else. Then I started to wander why do I like to talk ]]>
            </description>
            <link>https://oncodedesign.com/blog/it-camp-2013/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b79</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Fri, 05 Apr 2013 14:41:02 +0300</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p><img src="http://40.114.216.218/wp-content/uploads/2013/04/040613_0940_itcamp20131.png" alt="" loading="lazy"></p>
<p>I will speak this year to <a href="http://itcamp.ro/?ref=oncodedesign.com">ITCamp</a> conference on 23<sup>rd</sup> of May.</p>
<p>When I received the invitation (thanks <a href="https://twitter.com/mihai_tataran?ref=oncodedesign.com">Mihai</a> and <a href="http://www.tudy.ro/?ref=oncodedesign.com">Tudy</a> for inviting me) my first thought was whether I should talk about Unit Testing or something else. Then I started to wonder why I like to talk about it so much, so much that I am seen as the unit tests guy by my colleagues, that all the projects I’m involved in put a high accent on unit tests, and that I even receive birthday cards with: “no unit tests? You’re doing it wrong!”. There are definitely other interesting subjects to talk about. Then I realized it. Unit testing was the only technique with which I managed to persuade one of my teams to write good quality code, which allowed us to change it easily. All the talks about Design Patterns and programming principles didn’t do it. Unit testing did.</p>
<p>Therefore, this year at <a href="http://itcamp.ro/?ref=oncodedesign.com">ITCamp</a> I will talk about how to drive your team towards quality code by writing good unit tests. Today I started to put on paper some ideas on how to structure and build my talk. I have a few more free Fridays to refine it and turn it into a good talk, in which to show my view on how to achieve good code and why you would want that. I hope to see you there!</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Global Day of Code Retreat ]]>
            </title>
            <description>
                <![CDATA[ On December the 8th, we had the Global Day of Code Retreat in Cluj-Napoca. We have joined the other 150 cities all over the world in a great global event.


My first contact with a code retreat was almost a year ago when some friends from iQuest invited me to ]]>
            </description>
            <link>https://oncodedesign.com/blog/global-day-of-code-retreat/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b76</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 18 Dec 2012 00:10:12 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
                <![CDATA[ <!--kg-card-begin: markdown--><p>On December the 8<sup>th</sup>, we had the <a href="https://globalday.coderetreat.org">Global Day of Code Retreat</a> in Cluj-Napoca. We joined the other <a href="https://maps.google.com/maps/ms?msid=211858429594081017615.0004c84674c62aa900e06&msa=0&ie=UTF8&t=h&ll=38.272689,-7.734375&spn=164.696857,90&z=1&source=embed">150 cities all over the world</a> in a great global event.</p>
<p>My first contact with a code retreat was almost a year ago, when some friends from <a href="http://www.iquestgroup.com/?ref=oncodedesign.com">iQuest</a> invited me to join them for a code retreat organized internally. I loved it. I liked the idea, the format, and most of all the fact that I had the opportunity to pair with developers I didn’t know and to experience different programming languages. Since then I had the idea of organizing, with <a href="http://www.rabs.ro/?ref=oncodedesign.com">RABS</a>, a large code retreat with lots of developers. When, somewhere in October, I heard that there was going to be a global day of code retreat, I said that this was the best opportunity to organize it, so I registered Cluj among the <a href="https://maps.google.com/maps/ms?msid=211858429594081017615.0004c84674c62aa900e06&msa=0&ie=UTF8&t=h&ll=38.272689,-7.734375&spn=164.696857,90&z=1&source=embed">other locations in the world</a>. I wanted diversity: many programming languages and many developers with different experiences and views. We succeeded: we had 70 developers who programmed in 8 languages: C#, Java, C, C++, JavaScript, Ruby, Python and F#.</p>
<p>I needed help. I asked Andrei Olar, who facilitated the code retreat at <a href="http://www.iquestgroup.com/?ref=oncodedesign.com">iQuest</a>, to take over the facilitator role. He gladly accepted. Knowing we were going to be more than 50, we also asked Costin Morariu, who has facilitated several code retreats, to be another facilitator. Even though he lives in Sibiu, he said yes. I was the third facilitator and <a href="http://www.sese.ro/?ref=oncodedesign.com">Sergiu</a> took over the host responsibilities. We decided at <a href="http://www.rabs.ro/?ref=oncodedesign.com">RABS</a> that we were going to organize it, but we wanted an event for all developers, regardless of technology or experience, so we asked the major programmer communities in Cluj: <a href="http://codecamp.ro/?ref=oncodedesign.com">Code Camp</a>, <a href="http://www.transylvania-jug.org/?ref=oncodedesign.com">JUG</a>, <a href="http://clujrb.org/?ref=oncodedesign.com">Cluj.rb</a> and <a href="http://ronua.ro/?ref=oncodedesign.com">RONUA</a> to join us as co-organizers. They all accepted our invitation, so we published the event and opened registrations. We chose <a href="http://clujcowork.ro/?ref=oncodedesign.com">Cluj Cowork</a> to be our host. It was perfect for such an event (lots of space and several rooms). After a few days we had over 50 people registered. When we reached 80, the maximum capacity, we opened a waiting list. In the end we had 135 registrations. Some of them announced that they could not make it, leaving the seat to someone else. Out of 85 confirmed registrations, 70 showed up. Given the large number of people, we split from the beginning into three groups: the red team, the blue team and the green team. Each team had its own facilitator and its own rooms, so basically we had three code retreats running in parallel. The facilitators consulted regularly to do the same exercises in each session, so that people could switch teams after lunch.</p>
<p>The first session was meant to let everyone get familiar with the problem. We also asked them to write tests. Given that most of the participants were not familiar with the format, we chose <a href="http://en.wikipedia.org/wiki/Conway's_Game_of_Life?ref=oncodedesign.com">Conway's Game of Life</a>. The second session asked for TDD, but we did not make anyone change pairs if they didn’t want to. The third session was also about TDD and pair programming, but now everyone had to pair with someone else. At lunch we were already counting around 7 programming languages. After lunch we started with a ping-pong TDD session, meaning that one person writes the failing test and their pair makes it pass and writes the next failing test, and then they switch again. The last session was a mute session: no talking with your pair. This forced everyone to make their code very expressive. Some pairs also tried the ‘no-ifs’ exercise in the last session.</p>
<p>I facilitated for the blue team. It was my first time as a facilitator and a great experience. The advice from Andrei and Costin helped a lot. Also of great help was the facilitator training recorded by <a href="https://twitter.com/jthurne?ref=oncodedesign.com">Jim Hurne</a>, which I watched the day before, on the plane and in the airport. Thanks guys! A large part of the participants had not tried TDD before, so during the second and third sessions I insisted that the three steps of TDD be done in very short cycles. I remember that at the beginning of the third session, when everyone had to switch pairs, there was a lot of loud arguing in each pair. Everyone knew a different way to solve it. They had all done it twice before. I went downstairs for a few minutes and when I returned it was quiet again. They had reached common ground. Pair programming worked. In the last session I had a group of three who were intrigued by the no-ifs exercise and wanted to try it instead of the mute session. They almost finished the implementation in 45 minutes. At the closing circle everyone shared what they had learned. If I had asked the same questions at the beginning of the day, I am sure I would have got very different answers than the ones at the closing circle. This proves that it was a good experience for everyone. It was a nice reward to see developers happy because they had done TDD and pair programming and because they had learned something. Some did this for the first time.</p>
<p>Most of the participants asked for more code retreats. There were many suggestions to have one each month or one every two months. Given the interest I’ve seen in Cluj, I think we are going to organize it again and again, but next time I will also be coding. Andrei and Costin said the same: they also want to code in the next code retreat. All three of us were a bit jealous of the ones who were having fun coding and experimenting with new approaches. So, who wants to facilitate the next code retreat in Cluj?</p>
<p>In the end, I want to thank everyone who contributed to this great day, starting with each participant and ending with <a href="http://coreyhaines.com/?ref=oncodedesign.com">Corey</a>, who started it all. I had a great experience. Thanks!</p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/GDCR12-Cluj1.jpg" alt="GDCR 1" loading="lazy"></p>
<p><img src="https://storage.ghost.io/c/05/b4/05b4dc35-3b1c-4445-b4b7-404711104301/content/images/2017/01/GDCR12-Cluj2.jpg" alt="GDCR 2" loading="lazy"></p>
<p>You can see some more pictures on our <a href="http://www.facebook.com/media/set/?set=a.503123126388686.120565.194943400539995&type=3">facebook page</a>.</p>
<p>P.S. Special thanks to my girlfriend <a href="https://twitter.com/GeorgianaJora?ref=oncodedesign.com">Georgiana </a>and my brother <a href="https://twitter.com/mihaicoros?ref=oncodedesign.com">Mihai</a> for helping me with the organizing work.</p>
<p>P.P.S. A big thanks to our local sponsors: <a href="http://www.zencash.com/?ref=oncodedesign.com">ZenCash.com</a>, <a href="http://www.isdc.eu/?ref=oncodedesign.com">ISDC</a>, <a href="http://www.accesa.eu/?ref=oncodedesign.com">accesa</a>, TSE Development Romania, <a href="http://www.endava.com/?ref=oncodedesign.com">Endava</a>, <a href="http://www.kno.com/?ref=oncodedesign.com">Kno</a> and <a href="http://www.ullink.com/?ref=oncodedesign.com">ULLINK</a>.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Mirrored Unit Tests ]]>
            </title>
            <description>
                <![CDATA[ I’m not sure if the term of Mirrored Unit Tests actually exists, but I’ll explain in this post what I mean by it.


You get this situation when you have a class under test, which has symmetric public methods, which change the internal state of the class and ]]>
            </description>
            <link>https://oncodedesign.com/blog/mirrored-unit-tests/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76bac</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Mon, 05 Nov 2012 01:03:47 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>I’m not sure if the term <em>Mirrored Unit Tests</em> actually exists, but I’ll explain in this post what I mean by it.</p>
<p>You get this situation when the class under test has symmetric public methods that change its internal state and contain logic that depends on whether the other method was called before. To test one method, you have to write tests for scenarios in which the other method was or was not called. Then you have to rewrite the same tests for the other method, in a very similar way, like in a mirror. Hence the term: <em>Mirrored Unit Tests</em>. No matter how hard you try to remove duplicate code by extracting it into helper methods, your test code will get messy.</p>
<p>The solution is, of course, an abstraction 🙂. You write an abstract test class in which you implement the tests for one method. Then, for each method you want to test, you derive a concrete test class that overrides the protected members abstracting the method under test. With this test design you get rid of the duplication, and it works nicely with most unit testing frameworks.</p>
<p>Below I will detail this with an example.</p>
<p>I have a class used in an implementation that builds a lambda expression from some other representation of code (the details are less important for this example). The class looks like this:</p>
<pre><code class="language-csharp">class BinaryExpressionBinder
{
    private Expression left;
    private Expression right;
    private BooleanOperator op; // the operator of the resulting binary expression; how it is set is omitted here

    public Expression ResultedExpression { get; private set; }

    public void NotifyForRightNode(Expression rightExpr)
    {
        bool constantExpr = TrySetAsConstantExpression(rightExpr, LeftWasSet);

        if (!constantExpr)
        {
            right = rightExpr;
            SetFieldExpression();
        }
    }

    public void NotifyForLeftNode(Expression leftExpr)
    {
        bool constantExpr = TrySetAsConstantExpression(leftExpr, RightWasSet);

        if (!constantExpr)
        {
            left = leftExpr;
            SetFieldExpression();
        }
    }

    private bool LeftWasSet
    {
        get { return left != null; }
    }

    private bool RightWasSet
    {
        get { return right != null; }
    }

    private void SetFieldExpression()
    {
        if (LeftWasSet &amp;&amp; RightWasSet)
        {
            BinaryLogicalExpression be = new BinaryLogicalExpression
            {
                Left = left,
                BooleanOperator = op,
                Right = right
            };
            ResultedExpression = be;
        }
    }

    private bool TrySetAsConstantExpression(Expression memberExpr, bool otherMemberWasSet)
    {
        // some logic which is not relevant for this sample.
        // This logic may set the result with a constant expression
    }
}
</code></pre>
<p>The important thing to notice is the two public methods: <code>NotifyForRightNode()</code> and <code>NotifyForLeftNode()</code>. They are symmetric, both in signature and in implementation. Each first tries to set the result as a constant expression. If that fails, it stores the notification argument and, if the notification for the other node was already received, sets a binary expression as the result. It is important to remember that the binary expression can be built only when both members are known.</p>
<p>Now let’s see how to test this. I need to write unit tests against the public interface and not rely at all on the implementation details. In this case I need tests for <code>NotifyForRightNode()</code> covering at least the following scenarios: the left node was not set yet, and a constant expression can or (in another scenario) cannot be built; the left node was already set, and a constant expression can or cannot be built. So at least four scenarios for <code>NotifyForRightNode()</code>. The same tests then have to be written for the other method, <code>NotifyForLeftNode()</code>, but mirrored (configuring whether the right node was set or not). As I said above, I will write the tests only once, in an abstract test class, and then override just to specify the method under test.</p>
<pre><code class="language-csharp">public abstract class BinaryExpressionBinderTests
{
    protected abstract void NotifyForNodeUnderTest(Expression expr);
    protected abstract void NotifyForOtherNode(Expression expr);

    protected BinaryExpressionBinder Binder;

    [TestMethod]
    public void NotifyForNode_OtherNodeWasNotifiedAndConstantExpressionCannotBeBuilt_ResultIsBinaryExpression()
    {
        // arrange
        Binder = ConfigureBuilder();
        Expression dummyExpr = GetDummyExpression();
        NotifyForOtherNode(dummyExpr);

        // act
        NotifyForNodeUnderTest(dummyExpr);

        // assert
        Expression actual = Binder.ResultedExpression;
        Assert.IsInstanceOfType(actual, typeof(BinaryLogicalExpression));
    }

    // other tests …
}

[TestClass]
public class BinaryExpressionBinderTestsForLeftNode : BinaryExpressionBinderTests
{
    protected override void NotifyForNodeUnderTest(Expression expr)
    {
        Binder.NotifyForLeftNode(expr);
    }

    protected override void NotifyForOtherNode(Expression expr)
    {
        Binder.NotifyForRightNode(expr);
    }
}

[TestClass]
public class BinaryExpressionBinderTestsForRightNode : BinaryExpressionBinderTests
{
    protected override void NotifyForNodeUnderTest(Expression expr)
    {
        Binder.NotifyForRightNode(expr);
    }

    protected override void NotifyForOtherNode(Expression expr)
    {
        Binder.NotifyForLeftNode(expr);
    }
}
</code></pre>
<p>So: less code, lower maintenance costs. It wouldn’t be right to test only one method, even though Visual Studio (or another coverage tool) would report good coverage. That coverage looks good only because my implementation avoids duplication and calls helper methods. What I want is good coverage that does not depend on the implementation. When I refactor the production code, my tests have to stay green and valid without needing to be adjusted. Because that’s how good tests are!</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Good Unit Tests Ask For Quality Code ]]>
            </title>
            <description>
                <![CDATA[ I’ve talked this weekend at Code Camp Cluj about good unit tests and quality code.


Here are my slides and here is the code I demoed. ]]>
            </description>
            <link>https://oncodedesign.com/blog/good-unit-tests-ask-for-quality-code/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76bad</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Sat, 24 Mar 2012 14:18:45 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>This weekend I talked at Code Camp Cluj about good unit tests and quality code.</p>
<p><a href="http://40.114.216.218/wp-content/uploads/2012/03/high-quality-code-design-driven-by-good-unit.pptx?ref=oncodedesign.com">Here</a> are my slides and <a href="https://skydrive.live.com/redir.aspx?cid=90d40a51822669db&resid=90D40A51822669DB!120&parid=90D40A51822669DB!118&ref=oncodedesign.com">here</a> is the code I demoed.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ ReSharper Templates for Unit Tests ]]>
            </title>
            <description>
<![CDATA[ I am sharing here the ReSharper templates I use. You should be able to import them easily.


The most useful one is the TestMethod template:


[TestMethod]  
public void MethodUnderTest_Scenario_ExpectedBehaviour()  
{
     Assert.Fail(&quot;Not yet implemented&quot;);  
}



It follows the unit test naming convention proposed by Roy Osherove ]]>
            </description>
            <link>https://oncodedesign.com/blog/resharper-templates-for-unit-tests/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b78</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Thu, 22 Mar 2012 23:15:06 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>I am sharing <a href="https://skydrive.live.com/redir.aspx?cid=90d40a51822669db&resid=90D40A51822669DB!119&parid=90D40A51822669DB!118&ref=oncodedesign.com" title="here">here</a> the ReSharper templates I use. You should be able to import them easily.</p>
<p>The most useful one is the <code>TestMethod</code> template:</p>
<pre><code class="language-csharp">[TestMethod]
public void MethodUnderTest_Scenario_ExpectedBehaviour()  
{
     Assert.Fail(&quot;Not yet implemented&quot;);  
}
</code></pre>
<p>It follows the unit test naming convention proposed by <a href="http://osherove.com/blog/?ref=oncodedesign.com">Roy Osherove</a> in his book “<a href="http://artofunittesting.com/?ref=oncodedesign.com">The Art Of Unit Testing</a>”. Naming your unit tests like this helps you write good unit tests, because it forces you to keep your tests simple and test only one thing. If you cannot name the test method this way, either you are testing more than one thing or your test is too complex to name its scenario. If you’re in the latter case, consider refactoring your production code.</p>
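<p>To make the convention concrete, here is what a test named this way might look like. This is an illustrative sketch, not one of the shared templates; it tests the standard <code>Stack&lt;T&gt;</code> from <code>System.Collections.Generic</code>, whose <code>Pop()</code> throws <code>InvalidOperationException</code> when the stack is empty:</p>
<pre><code class="language-csharp">[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Pop_EmptyStack_ThrowsInvalidOperationException()
{
    // arrange: an empty stack
    var stack = new Stack&lt;int&gt;();

    // act: popping an empty Stack&lt;T&gt; is expected to throw
    stack.Pop();
}
</code></pre>
<p>The name alone tells you the method under test, the scenario, and the expected behaviour, which is exactly the information you want to see in a failing test report.</p>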
<p>Another great advantage I’ve found by following Roy’s convention is that I can easily know, just by looking at the test run report, where the bug is when a test fails. This way I reduce my debug time to almost zero for the code under unit tests and this gives me a higher efficiency.</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
        <item>
            <title>
                <![CDATA[ Starting... ]]>
            </title>
            <description>
<![CDATA[ I’m starting a WordPress blog. I’m not sure how much inspiration I will have to post articles here. We’ll see…


I am starting this, thinking to have a place where I can easily share experiences, thoughts or any kind of other stuff…. ]]>
            </description>
            <link>https://oncodedesign.com/blog/starting/</link>
            <guid isPermaLink="false">67aa20ea0d31e00001b76b97</guid>
            <category>
                <![CDATA[  ]]>
            </category>
            <dc:creator>
                <![CDATA[ Florin Coros ]]>
            </dc:creator>
            <pubDate>Tue, 20 Mar 2012 22:26:53 +0200</pubDate>
            <media:content url="" medium="image" />
            <content:encoded>
<![CDATA[ <!--kg-card-begin: markdown--><p>I’m starting a WordPress blog. I’m not sure how much inspiration I will have to post articles here. We’ll see…</p>
<p>I am starting this to have a place where I can easily share experiences, thoughts, or any other kind of stuff….</p>
<!--kg-card-end: markdown--> ]]>
            </content:encoded>
        </item>
    </channel>
</rss>