Thursday, May 28, 2009

Microsoft .NET Enablement: Analysis and Cautions

Analysis: As discussed earlier in this series, to develop and deploy a Web service-connected information technology (IT) architecture, several elements are required: smart clients, servers to host Web services, development tools to create them, and applications to use them. The Microsoft .NET Framework includes these elements, as well as a worldwide network of more than 35,000 Microsoft Certified Partner organizations to provide any help users might need. For a definition of how the Microsoft .NET environment addresses the situation, see Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement.

Part Three of the series Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement.

For a general discussion of the evolution of system architecture, see Architecture Evolution: From Mainframes to Service-oriented Architecture.

Only a few innovative or brave (or both) Microsoft-centric vendors have embarked on a gut-wrenching (but potentially rewarding) effort to deliver brand new software written in managed .NET Framework-based code, where many of the basic system tasks are removed from the code and "managed" by the .NET Framework. This functionality, which has been completely rewritten, or newly created using only the .NET Framework, can then be used and accessed through Web services, as with the examples of .NET-enabled ("wrappered") counterpart software cited in Examples of Microsoft .NET Enablement.
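The difference between the two approaches can be sketched in a few lines of C#. In this hedged illustration (the legacy library name and the pricing logic are hypothetical), the .NET-enabled "wrappered" approach merely calls into existing unmanaged code, whose system tasks the programmer must still manage, whereas the managed rewrite runs the same logic entirely under the .NET Framework.

using System;
using System.Runtime.InteropServices;

static class PricingInterop
{
    // .NET-enabled ("wrappered"): the real work still happens in native code.
    [DllImport("LegacyPricing.dll")]          // hypothetical legacy library
    public static extern double CalcPrice(int itemId, int qty);
}

static class PricingManaged
{
    // .NET-managed rewrite: the logic itself executes under the CLR.
    public static double CalcPrice(int itemId, int qty)
    {
        double unitPrice = 9.95;              // illustrative placeholder value
        return qty >= 10 ? unitPrice * qty * 0.9 : unitPrice * qty;
    }
}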

Cautions: However, if history helps us predict the future, it is awfully difficult to effectively execute this strategy of transforming software frameworks, and only the most resourceful or steadfast vendors are tipped as winners in the long run. Thus, one should be aware of how technology might develop in the future while aligning business and IT functions. Across the application life cycle, the high cost of development, support, and enhancements in terms of money, time, and quality limits the ability of installed legacy software to meet many demands of the business. Other possible stumbling blocks are that legacy functionality may not be accessible for modifications, and that such environments might require an additional layer of code to be developed and maintained for the wrapper (whereby the programmer must continue to manage system tasks). Also, this technology leap might not be positioned well for future Microsoft technology advances, such as Windows Vista.

Once other proprietary technologies are introduced into the research and development (R&D) equation, any vendor has to deal with translation, interface, and performance issues, not to mention the pain of migrating or keeping existing customers up to date, or maintaining multiple product versions. In fact, some laggard vendors even see the service-oriented architecture (SOA) and Web services bandwagon as an opportunity to portray legacy systems as "a treasure trove" of software assets. Although this might hold true for some applications where there is no business justification to "reinvent the wheel" (in other words, to duplicate what already exists, by developing a new payroll or general ledger system, for instance), users and vendors should make a rigorous effort to sort through this treasure trove and separate the "diamonds" from the "rhinestones." They will have to conduct a thorough discovery process to find out what functional assets they really have, and then make some tough decisions about what to keep, what to modernize, and what to throw away, since the code that is kept will form the foundation of a functional repository of services that will be used for years to come.

Many vendors, especially those with some longevity in the market (and even mainframe roots), like to foster the belief that under SOA, no old code is bad old code. But the truth can be quite different. Some legacy systems have been around for forty years or more, and even though they may still be working and doing something useful, not all of them are worth keeping as is. In many circumstances, companies can get away with wrappering as a temporary measure. Eventually though, and contrary to what many vendors say, both vendors and user enterprises will be forced to modernize and transform much of their legacy code. For more information, see Rewrite or Wrap-Around Old Software?

To that end, Epicor Vantage is in fact an example of an application positioned between the .NET-enabled approach and a rewrite in pure .NET-managed code (which is the next evolutionary step, as will be explained shortly). That is, around 60 percent of Vantage is in .NET-managed code (in other words, a C#-based smart client, extensibility tools, customization, and so on), and all business logic is exposed as Web services (not wrappered, but rather Web services generated from Progress OpenEdge). In the rewrite effort, Epicor recreated much of the business logic in a far more componentized and granular way, in order to support Web service calls. For instance, the capable-to-promise (CTP) check that Vantage users require cannot operate properly without the .NET Framework.
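To illustrate what exposing business logic as a Web service looks like in practice, here is a hypothetical sketch (not Epicor's actual code) using the .NET Framework's ASMX Web service model; the service name, parameters, and promising logic are illustrative assumptions only.

using System;
using System.Web.Services;

[WebService(Namespace = "http://example.com/erp/")]
public class OrderPromisingService : WebService
{
    // A granular, componentized business-logic call exposed as a Web service.
    [WebMethod]
    public DateTime CapableToPromise(string itemNumber, int quantity)
    {
        // Illustrative placeholder: a real CTP check would consult inventory,
        // capacity, and supply data before promising a delivery date.
        return DateTime.Today.AddDays(quantity > 100 ? 14 : 3);
    }
}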

Certainly, the presence of Progress means that Vantage is not completely in .NET-managed code. However, with Vantage, Epicor has aimed to be not 100 percent .NET, but rather 100 percent SOA. The vendor has simply used .NET for the majority of the solution where it made sense, such as for client-side dynamic link library (DLL) management, and provided a standardized Epicor Portal platform based on the "latest and greatest" Microsoft SharePoint technologies (see Epicor To Give All Its Applications More Than A Pretty Facelift). Where the vendor did not use .NET, it was to ensure choice and flexibility for customers on the operating system (OS) and database side. Namely, pure .NET throughout would mean a Microsoft-only stack, whereas Epicor can support Microsoft and Linux/UNIX OSs, and Microsoft SQL Server, Progress, and Oracle databases (Oracle is not currently supported, but the intent is to support it in the future). This decision was important for supporting larger customers who, rightly or wrongly, maintain a perception that Microsoft platforms cannot scale.

SYSPRO's design has been along similar lines: the SYSPRO Reporting Services (SRS), Analytics, and Web applications mentioned in Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement were all written using .NET-managed code, whereas the SYSPRO Cyberstore's .NET-enabled capability is also featured in the SYSPRO BarCode and SYSPRO Warehousing Management System (WMS) products, all of which are fully integrated with the core enterprise resource planning (ERP) system via the .NET Framework.

Is .NET-managed the Right Way?: However, .NET-managed software products are built entirely of homogeneous .NET "managed code" components, meaning without any wrappers. In other words, managed .NET code is code whose execution is managed by a .NET virtual machine, such as the .NET Framework Common Language Runtime (CLR). "Managed" refers to a method of exchanging information between the program and the run-time environment, or to a "contract of cooperation" between natively executing code and the run time. This contract specifies that at any point of execution, the run time may stop the executing code and retrieve information specific to the current central processing unit (CPU) instruction address. The information that must be queryable generally pertains to run-time state, such as register or stack memory contents.

The necessary information is thereby encoded in Common Intermediate Language (CIL, formerly known as Microsoft Intermediate Language, or MSIL) and associated metadata: symbolic information that describes all of the entry points and the constructs exposed in the CIL (such as methods and properties) and their characteristics. The Common Language Infrastructure (CLI) standard (of which the CLR is the primary Microsoft commercial implementation) describes how the information is to be encoded, and programming languages that target the run time emit the correct encoding. All a developer has to know is that any of the languages that target the run time produce managed code emitted as portable executable (PE) files that contain CIL and metadata.
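As a minimal sketch of what this encoding looks like, consider the following trivial managed method; the CIL shown in the comments is roughly what a disassembler such as ildasm.exe displays for it (exact output varies by compiler version and build settings).

public static class Calculator
{
    public static int Add(int a, int b)
    {
        return a + b;
        // Corresponding CIL, stored with its metadata in the PE file:
        //   .method public hidebysig static int32 Add(int32 a, int32 b) cil managed
        //   {
        //     ldarg.0   // push the first argument onto the evaluation stack
        //     ldarg.1   // push the second argument
        //     add       // add the two values
        //     ret       // return the result
        //   }
    }
}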

As emphasized earlier on, there are many such languages to choose from: more than thirty are provided by third parties (everything from COBOL to Caml), in addition to Visual C#, J#, VB.NET, JScript, and C++ from Microsoft. The CLR includes the Common Language Specification (CLS), which sets the rules that languages must follow so that their code can interoperate, as well as the Common Type System (CTS), which defines the data types that can be used. Because all programs use the common services in the CLR, no matter which language they were written in, such applications are said to use "managed code." In a Microsoft Windows environment, all other code has come to be known as "unmanaged code," whereas in non-Windows and mixed environments, "managed code" is sometimes used more generally to refer to any interpreted programming language.
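A small example of the CLS rules in action (a sketch using a hypothetical class): once an assembly declares itself CLS-compliant, the C# compiler warns about public members whose types, while valid in the CTS, are not consumable from every .NET language.

using System;

[assembly: CLSCompliant(true)]

public class InventoryItem
{
    // CLS-compliant: Int32 is usable from any language that targets the run time.
    public int QuantityOnHand;

    // UInt32 is a valid CTS type but is not CLS-compliant; without the explicit
    // attribute below, the compiler would issue a CS3003 warning for this field.
    [CLSCompliant(false)]
    public uint InternalFlags;
}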

Before the managed code is run, the CIL is compiled into native executable (machine) code. Furthermore, since this compilation happens through the managed execution environment (or more correctly, by a just-in-time [JIT] compiler that knows how to target the managed execution environment), the managed execution environment can make guarantees about what the code is going to do. It can insert traps and appropriate garbage collection hooks, exception handling, type safety, array bounds and index checking, and so forth. For example, such a compiler makes sure to lay out stack frames and everything "just right," so that the garbage collector can run in the background on a separate thread, constantly walking the active call stack, finding all the roots, and chasing down all the live objects. In addition, because CIL has a notion of type safety, the execution engine will maintain the guarantee of type safety, eliminating a whole class of programming mistakes that often lead to security holes.
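These guarantees can be observed directly. In this minimal sketch, both faults are caught by checks the managed execution environment inserts, surfacing as catchable exceptions rather than silent memory corruption.

using System;

class RuntimeGuarantees
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3 };
        try
        {
            Console.WriteLine(numbers[5]);  // bounds check inserted by the run time
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("Out-of-bounds access was trapped, not executed.");
        }

        object boxed = "not a number";
        try
        {
            int n = (int)boxed;             // type-safety check on the unbox operation
            Console.WriteLine(n);
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("The invalid cast was trapped by the execution engine.");
        }
    }
}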

This is traditionally referred to as JIT compilation, although unlike most traditional JIT compilers, the file that holds the pseudo machine code that the virtual machine compiles into native machine code can also contain pre-compiled binaries for different native machines (such as x86 and PowerPC). This is similar in concept to the Apple Universal binary format, which packages executables for multiple processor architectures in a single file. Conversely, unmanaged executable files are basically a binary image of x86 code that is loaded into memory; the program counter is pointed there, and that is the last the OS knows of it. There are protections in place around memory management, port I/O, and so forth, but the system does not actually know what the application is doing. Therefore, it cannot make any guarantees about what happens when the application runs.

This means that .NET-managed software should benefit from the many performance and security advantages of .NET-managed code, since the CLR handles many of the basic tasks that were previously managed by a programmer in the application code, including security checks and memory management. .NET-managed products will also likely run more smoothly as "native code" on Windows Vista and future Microsoft OS and technology advances, providing another important advantage. Finally, .NET developers who have experience with managed code will confirm that this programming paradigm allows them to develop and extend applications in record time and with significant improvement in quality. This is owing to the ability to create new, "leaner" software with significantly fewer lines of code, which runs natively on the .NET Framework.
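One of those basic tasks, memory management, can be seen in a minimal sketch: no explicit deallocation appears anywhere, because the CLR's garbage collector reclaims unreachable objects (GC.Collect() is forced here only to make the effect visible in a tiny demo; production code normally leaves collection to the run time).

using System;

class ManagedMemoryDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        // Allocated on the managed heap; no matching free/delete exists in C#.
        byte[] buffer = new byte[10000000];
        Console.WriteLine("Allocated {0} bytes", buffer.Length);

        buffer = null;                      // drop the only reference
        GC.Collect();                       // demo only: force a collection
        GC.WaitForPendingFinalizers();

        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Managed heap delta after collection: {0} bytes", after - before);
    }
}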

Using technologies that are intrinsically compatible should result in faster and less costly development. As a result, any application suite, once it has been completely rewritten in the Microsoft .NET-managed code framework, should not have to contend with technology conflicts, trade-offs, or inefficiencies resulting from mixing or wrapping technologies. In contrast, any vendor that covers multiple platforms often spends more than half its R&D budget on porting issues; thus, a cross-platform solution remains largely the prerogative (and consequent burden) of only bigger vendors.

1 comment:

  1. I want to talk about the security of .NET. I know that .NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. CAS is based on evidence that is associated with a specific assembly. Typically, the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). CAS uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.
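A minimal sketch of the demand-and-stack-walk mechanism the commenter describes (the file path is hypothetical; note that CAS in this form belongs to the .NET Framework era and was later deprecated):

using System;
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void ReadSensitiveFile(string path)
    {
        // Demand triggers the call stack walk: every assembly of every method
        // up the stack must have been granted read access to this file.
        new FileIOPermission(FileIOPermissionAccess.Read, path).Demand();
        // ... read the file only if the demand succeeded ...
    }

    static void Main()
    {
        try
        {
            ReadSensitiveFile(@"C:\payroll\salaries.dat");  // hypothetical path
            Console.WriteLine("Demand succeeded: all callers hold the permission.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("An assembly in the call stack lacked FileIOPermission.");
        }
    }
}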
