Improving the Reach and Manageability of Microsoft® Access 2010 Database Applications with Microsoft® Access Services

January 2010

 

 

Contents

Introduction

Empowering End Users with Access

Benefits

Manageability Challenges

Meeting Manageability Challenges by Centralizing Storage

Managing Split Access Applications

Moving Data to SQL Server

Using Terminal Services to Deploy Access Applications

Increasing Manageability with SharePoint

Publishing Access 2010 Databases to Access Services

Web Access, Better Performance and Enhanced Manageability

Web Databases, Web Objects, and Client Objects

Deploying Access databases in SharePoint

Storing databases into SharePoint Document Libraries

Publishing an Access Services application

Publishing Client only Applications

Hosted SharePoint Options

Migrating Legacy Data to Web Tables

Using the Web Compatibility Checker

Handling Compatibility Issues

Creating New Compatible Tables and Importing Legacy Data

Synchronizing data between web tables and external sources

Migrating Legacy Application Objects

Handling Publishing, Compilation, and Runtime Errors

Publishing Issues

Compilation Issues

Runtime Issues

Upgrading Databases to Access 2010

64-bit VBA Issues

Summary

Appendix

Access 2010 Features by Object Type

Tables

Forms

Reports

Queries

Macros

Expressions

 

Introduction

Microsoft Access empowers end users in business units to develop applications quickly and at low cost. This agility is valued by large organizations around the world. However, some of these organizations struggle with managing the myriad Access databases in use. With Microsoft Access 2010 and Microsoft SharePoint 2010 Products working together, the best of both worlds is possible: you can satisfy the need for agile development from your business units and still rest assured that the data is secured, backed up and managed appropriately.

Access Services is a new feature of Microsoft® SharePoint Server 2010 Enterprise Edition that supports close integration with Microsoft® Access 2010. This integration enables users to extend the ease of Access application development to the creation of Web forms and reports, and it enables IT managers to extend the ease of SharePoint 2010 Products administration to the management of Access data, application objects, and application behavior. This paper explains the benefits and architecture of this new level of integration, and it provides technical details that will be helpful in implementing successful migration of existing Access applications to this new architecture.

 

Empowering End Users with Access

Access has long been one of the most popular desktop tools for building database applications, because it empowers end users to accomplish tasks that would otherwise require the services of IT professionals.

The easy-to-use graphical interfaces and wizards in Access enable users to create related tables for data storage, as well as links to external data, and to build forms and reports for entering, manipulating, and analyzing data. Unlike Microsoft Excel, Access is built on a relational engine and is therefore inherently optimized for data validation, referential integrity, and quick, versatile queries.

Benefits

The ever-evolving data management needs of an organization cannot all be met by trained programmers and database administrators, whose time is costly and limited. End users frequently can’t wait for these resources to become available or can’t justify the expense. So they turn to alternatives, from manual notes to spreadsheets, often limiting their productivity and effectiveness. When users discover Access and invest just a short time learning to use it, they are usually delighted to see how much they can accomplish.

Access empowers users to gather information from many disparate sources, including spreadsheets, HTML tables, SharePoint lists, structured text files, XML, Web services, or any ODBC data source such as Microsoft SQL Server, Oracle, or MySQL. Depending on the source and the user’s permissions, this data can even be updated from Access, and can be combined in distributed queries.

As project requirements and user skills evolve, Access applications can become increasingly complex and full-featured. Users go online and easily find a rich ecosystem of help and other resources available, including tips and samples that have accumulated since Access was first released in 1992.

By satisfying user needs without burdening IT resources, Access often plays a valued role in meeting an organization’s data management needs.

Manageability Challenges

The vast majority of Access applications never come to the attention of the IT department. However, a percentage of Access databases do create problems that draw the attention of IT managers. Access data and application files may be lost or corrupted. Long-running queries can burden network or server resources. Sensitive data can inadvertently be left unprotected. Performance can degrade as more data and users are added. And departments can come to depend on applications developed by people who are no longer available or whose skills are inadequate to meet current requirements.

IT managers sometimes take the position that Access should be prohibited to avoid these issues. However, users either defy the ban or revert to alternatives such as spreadsheets that are no more manageable or secure. The needs that drive Access adoption don’t go away and can’t all be met by IT-mediated application development.

Most IT departments eventually conclude that the best approach is to provide guidance and management that encourages users to leverage the capabilities of Access safely and productively. This management and guidance can take several forms, including distribution of templates and samples that encourage best practices.

In addition, centralizing storage of Access data and application files can help improve reliability, security, and manageability, without unduly inconveniencing users. Centralizing data storage can also enable Access applications to scale to serve many users. The remainder of this article discusses several options for centralizing storage of Access data and applications, with an emphasis on the new capabilities provided by integrating Access 2010 and SharePoint Server 2010.

Meeting Manageability Challenges by Centralizing Storage

As summarized below, a range of options have emerged over the years for centralizing the storage of Access data and application objects, including storage of files on managed network shares or on terminal servers, migration of data to database servers such as SQL Server, migration of data to SharePoint lists, and storage of applications in SharePoint document libraries. Access Services introduces a new set of options that not only extend Access applications to the Web, but also significantly enhance manageability.

Managing Split Access Applications

Access is distinctive in its ability to combine a query processor and storage engine with an application development environment and an application runtime framework. A single Access database file can contain both data tables and application objects stored in system tables. However, Access users commonly split the storage of application objects, such as forms and reports, from the storage of user data.

Especially in applications that are shared by multiple users, the most manageable pattern is to place only the user data tables in one Access database, usually called the back end, on a file share. All the other objects, including forms, reports, VBA code modules, Access macros, and saved queries, are stored in a separate database file, which also contains links to the data tables. Multiple copies of this front-end file are distributed to users for storage on their local drives. This allows front-end objects to be updated without disturbing the data, and local storage of these objects provides better performance than sharing a single remote copy of the front end.

Microsoft has encouraged this pattern of deployment by providing a wizard that automatically creates a second database file, exports all local tables to the new database, and creates links to the exported tables. Any application object that worked with the local tables then automatically works with the new links to the exported tables.

However, the manageability of these database files is limited. Users must have full permissions to create and delete files on the data share, and database files can proliferate uncontrollably. It is not uncommon for enterprises to discover that tens of thousands of Access database files are scattered across their networks.

In addition, collaborative design work is difficult for users to manage. When multiple users have their own copies of a front-end application file, it is difficult for them to receive a new version without losing any customizations they may have made.

Moving Data to SQL Server

Once data tables have been separated from application objects, it requires little work to migrate those tables to a SQL Server database and change the links to ones that use ODBC to access the data. Access includes an "upsizing" wizard for exporting tables to SQL Server and linking to them, and SQL Server also provides a tool called the SQL Server Migration Assistant for Access (http://www.microsoft.com/downloads/details.aspx?FamilyID=d842f8b4-c914-4ac7-b2f3-d25fff4e24fb&DisplayLang=en).

This improves reliability, scalability, and security. However, users require privileges and training to create, maintain, and administer SQL Server databases. Some IT organizations restrict the number of users who have such privileges.

Using Terminal Services to Deploy Access Applications

Another option is to centralize Access applications on terminal servers. This provides the significant benefit of allowing users to access their applications across a wide area network or the Internet, while maintaining good performance. IT managers have better control over backup, reliability, and security for those applications.

However, users can't reach their applications from any browser on any device. Terminal Services works well for intranet deployments of canned Access applications, but it is less useful for supporting ad hoc, user-generated solutions.

 

Increasing Manageability with SharePoint 2010 Products

SharePoint 2010 Products are architected to support safe and scalable creation of thousands of sites and lists by minimally privileged and minimally trained users. In addition, SharePoint Server is highly manageable. It has a security model that is tightly integrated with Active Directory. Data backups are assured, and a multi-level recycle bin provides easy recovery of deleted data items. A highly scalable architecture supports handling increased load by adding servers. Plus, SharePoint 2010 Products are engineered to provide highly configurable activity throttles to protect server and network resources, while still supporting end-user creation of new applications and new content.

As a Web-based platform that employs standard Internet protocols, SharePoint 2010 Products enable users to access their applications from any browser on any device. Users are often delighted to learn how easy it is for them to collaborate over the Web with SharePoint 2010 Products. This ease of use, combined with IT-friendly manageability, has made SharePoint the fastest-growing server product in the history of Microsoft.

For all these reasons, the case for integrating Access and SharePoint 2010 Products is strong, and with each of the last three versions of the products that integration has deepened.

 

A Brief History of Access/SharePoint Products and Technologies Integration

Access 2003 introduced integration with Microsoft SharePoint Portal Server 2003 and Windows SharePoint Services by adding an Installable ISAM driver that enabled the Jet database engine, the engine used by Access 2003, to connect to SharePoint lists. This allowed Access users to view and edit SharePoint data and to create queries that join SharePoint data to data from other sources.

Access 2007 added significant new support for Microsoft Office SharePoint Server 2007 and Windows SharePoint Services by enabling users to take list data offline and then synchronize with the server when they reconnect. To accomplish this, the Access team branched off a proprietary version of the Jet database engine, renamed ACE. They added new data engine features to provide parity with SharePoint data types, including support for file attachments and complex multi-valued columns. New Access UI made it easier to move data to SharePoint lists, and SharePoint Products and Technologies also added UI features to support working with Access applications stored in document libraries.

With Access 2007, users had a more seamless experience of working with SharePoint lists, but those lists still lacked full suitability for use in most Access applications. Performance was often slow and important database features, such as referential integrity and enforcement of validation rules, couldn’t be implemented without resorting to complex Workflow programming on the computer that is running Office SharePoint Server or Windows SharePoint Services.

Access 2010 and SharePoint 2010 Products address these shortcomings. Performance issues have been eliminated through server-side and client-side data caching, as explained below. Referential integrity and the basic expected forms of data validation are now enabled on the server without requiring any custom programming. For more advanced validation, users can easily use Access macros to create server-based Workflow customizations.

In addition, Access 2010 offers exciting new ways of integrating with SharePoint 2010 Products that allow users to run Access applications using only a Web browser. These new capabilities, based on Access Services running on SharePoint Server 2010, require Enterprise CAL licensing, but economical hosted solutions are or will soon be made available from Microsoft and third parties for organizations that don’t have in-house installations.

 

Publishing Access 2010 Databases to Access Services

Access 2010 introduces the ability to publish a database to Access Services, which creates a SharePoint site for the application. Any local tables are moved to SharePoint lists. If any of the data can’t be moved into a list, then publishing cannot happen until the data is exported to a separate database or changed to be compatible. A compatibility checker supports this by listing any problems that would prevent publishing.

After a database is published, it becomes a Web database, meaning that users can add Web forms, reports, queries and macros that can execute on the computer that is running SharePoint Server when the application runs in a browser or on the client when the application runs in Access. Users can browse to Web forms and reports over the Internet, or they can run them in the Access client.

Published databases can also contain objects, with a fuller feature set, that only run in the Access client. Tables linked to external data, such as data in other Access databases, Excel spreadsheets, SQL Server tables, or even in SharePoint lists on other sites, are only available in the Access client, not to Web forms and reports. All design work occurs in the Access client.

Publishing Access databases to Access Services on SharePoint Server, rather than simply saving them in SharePoint document libraries, provides three key advantages:

●    Published applications can contain forms and reports that are enabled to run in a browser as well as in the Access client

●    Published applications are stored and synchronized with greater granularity and efficiency than applications in document libraries

●    Published applications are more manageable than applications stored in document libraries

Web Access, Better Performance and Enhanced Manageability

Making forms and reports available over the Web provides a key advantage. In today’s distributed workforce, being able to collaborate with colleagues all around the world is critical. Increasingly, users are looking for a no-install solution for collaboration that can work with varied bandwidth and varied proximity to data. Web applications also enable users to work with Access applications without being distracted by tools for customizing the application, a benefit that has in the past often required professional programming services.

In published databases, individual Access objects are serialized and saved in a hidden SharePoint list. This is similar to the way that programming objects are saved in source control systems. When users choose to open published applications in the Access client, rather than in a browser, the local version of the database is synchronized with the version on the server. Synchronization affects all the objects in the database, not just the data. Because objects are stored as individual data items, SharePoint Server maintains the user identity and date for each modification, as it does for all data changes.

As in source control, the Access client downloads the entire database only when a user doesn’t already have a local copy. Subsequently, Access fetches only objects and data items that have changed. This arrangement is much more efficient than working with applications saved monolithically in document libraries and provides noticeably faster performance. Different users can make changes to different objects or different data items in a list without causing conflicts. When data conflicts do occur, a conflict resolution wizard enables the user to choose which version of data items to preserve. For object conflicts, Access provides the name of the user who made the saved change and creates a renamed copy of the local object before downloading the other user’s changes. This fosters collaboration on resolving any design conflicts and ensures that no design work is lost.

Publishing also enables greater administrative control. Permissions can limit some users from being able to modify, delete, or create objects in a site, while still allowing them to run the application. In addition, because application objects are stored individually as list items rather than monolithically in document libraries, the throttles in SharePoint Server for limiting list traffic and user activity can apply to individual types of application objects.

Because of the many advantages of using published databases rather than document libraries to centralize application storage on a computer that is running SharePoint Server, document libraries should only be considered for storing legacy Access databases that cannot be upgraded to Access 2010, or when a server supporting Access Services isn’t available.

 

Web Databases, Web Objects, and Client Objects

To create an Access 2010 Web database, you can choose Blank Web Database when creating a new one or you can publish an Access 2010 database that wasn’t originally created as a Web database. If it publishes successfully, it automatically becomes a Web database.

In a Web database, you can create two types of objects: Web objects that can run either in a browser or in the Access client, and non-Web objects that can only run in the Access client. All design changes for all types of objects must be made in the Access client.

When you are working on a Web database in the Access client, the Create ribbon refers to non-Web objects as Client objects, specifically Client Forms, Client Reports, Client Queries, and Client Macros. That terminology might be confusing, because these so-called “client” objects actually do get published to the server along with your Web objects. Any design changes you make to them are propagated to the server when you synchronize, and you also receive any synchronized changes made to them by other users, just like with Web objects. What makes non-Web objects special is that they can only run in the Access client, not in a browser. They use the Web for synchronizing design changes but not for execution. All linked tables are client tables, and only client objects can see them, but the definitions of the links get synchronized with the server like other client objects. Synchronization provides a useful way to collaborate and deploy applications, even if all the objects in the database are client objects and all the tables are client linked tables.

To create Web objects, you must be working in a Web database. To add Web objects to a database that wasn't created as a Web database, you must first publish it. In a Web database, however, you can create Web objects even before you've published. When you create local tables in a Web database that you haven't yet published, the table schema is guaranteed to be compatible with SharePoint lists. So it is very useful to create a Web database when you start a new application, even if you have no immediate plans to publish it.

The only data available to Web objects is the data contained in the application's Web tables. Only client objects can work with linked tables. In the Access client, when connected to the computer that is running SharePoint Server, data and design changes to Web tables automatically synchronize with the server, and Access works against local copies of the tables. When disconnected, Access seamlessly continues to work against the local copy, and doesn't allow design changes to the tables. When reconnected, Access notifies users that they can now synchronize and resolve any conflicts. Design changes to objects other than Web tables synchronize only when users explicitly request synchronization by clicking the Sync All button in the Info section of Backstage view. Backstage view is the view that appears when you click the File tab.

Web objects support the same feature set whether running in a browser or in the Access client, but some features of client objects are not supported by the corresponding Web object types. For example, VBA code only runs in client forms, not Web forms. Web forms rely on macros for programmability, but this is less of a restriction than you might expect, because Access 2010 macro capabilities are significantly enhanced. Separate sections below, under the heading Access 2010 Features by Object Type, provide more details on how the different types of Web objects differ from their client counterparts.

 

Deploying Access databases in SharePoint Technologies

The following sections provide an overview of several different topologies that are supported for integrating Access and SharePoint Technologies.

Storing databases into SharePoint Document Libraries

Access 2007 supports the use of SharePoint document libraries for centralizing storage and deployment of Access applications. This includes support for Access forms and reports in databases stored in document libraries. When users open these, the databases automatically open in the Access client. Users can move entire applications to SharePoint libraries. The applications always run in the Access client, and they are downloaded only when a user first opens the application or when the server version is updated. One restriction is that the design objects in these applications are read-only. To make design changes, a user must work on another local copy and upload the new version, replacing the old one on the server. These applications can work with local Access tables, data in SharePoint lists via linked tables, or any other supported external data source.

Moving forward, document libraries should be used only as legacy support for users who have not upgraded to Access 2010 or for SharePoint installations that do not support Access Services. When Access Services is available, it provides many advantages over traditional Access applications stored in document libraries. Access 2010 applications published to SharePoint Server by using Access Services support design changes and allow multiple users to collaborate on design. Design changes are tracked per object rather than per project, resulting in fewer conflicts. In addition, Access Services supports the addition of Web objects that users can access through a browser, without depending exclusively on the Access client.

Publishing an Access Services application

Access 2010 combined with Access Services introduces the ability to publish a database to SharePoint Server. A site is created for the database, and its tables are stored as SharePoint lists. Web forms become available from within a browser, the site and data are backed up, and permission levels can be maintained by SharePoint Server. The publish process moves the data from the database to SharePoint Server and converts the tables to linked tables. Users then have the option to use the Web objects in the browser or to open the database from the Web site in the Access client, which gives them access to the client objects that are also stored in the site.

Users also can link to SharePoint lists from databases that are not Web databases and that will never be published to SharePoint Server. For example, a user might create a simple Web database to collect data from other users over the Web. In a separate application, the user can link to that data and create reports that combine the data with other data sources.

Linked SharePoint lists in Access 2010 have the same support for offline work as Web tables. Disconnected users can view or modify the data offline and then synchronize with the server when they reconnect. In addition, users can work with list data through the standard Web interfaces in SharePoint Server, even if the data isn't in a published Access Web database.

Publishing Client-only Applications

Even when Access 2010 applications rely exclusively on linked data from external Access databases, spreadsheets, database servers, Web services, or linked SharePoint lists, there are advantages to publishing the applications. For users, published applications support convenient deployment, versioning, and collaboration. For IT managers, published applications benefit from the backup, security, and manageability features in SharePoint Server.

By adding Web tables to these applications, users have the ability to extend their applications to include some forms and reports that run on the Web.

Hosted SharePoint Server Options

For organizations or users that do not have Enterprise CAL SharePoint Server licenses or do not want to maintain their own installation of SharePoint Server, Microsoft and third parties provide hosted options with economical monthly per-user rates. These options include multi-tenant hosting where data from multiple organizations is segregated on one server, or dedicated options that provide the added assurance of complete segregation on dedicated servers maintained by the service provider.

Migrating Legacy Data to Web Tables

Web objects in Access 2010 can only work with data in an application’s Web tables, which are implemented on the server as SharePoint lists. To create Web forms and reports that work with legacy data, users must import their data into local Access tables and publish the database, or import the data into existing Web tables in the database. Publishing succeeds only when the table schema and the data itself in local tables are compatible with SharePoint lists.

Some or all of the legacy data in an application can remain in external data sources that appear in Access as linked tables, but this data isn’t available to Web objects. Linked table data is only available to client forms, reports, queries, and macros running in the Access client.

Using the Web Compatibility Checker

The Web Compatibility Checker inspects table designs and logs each incompatibility that it finds in a table named Web Compatibility Issues. You can run the Web Compatibility Checker by right-clicking a table and selecting Check Web Compatibility, or by clicking the Run Compatibility Checker button that appears when you choose Publish to Access Services in the Save and Send section of Backstage view.

Handling Compatibility Issues

The most common compatibility issues found by the Web Compatibility Checker involve invalid names of tables or columns, compound multi-column indexes, incompatible lookup definitions, composite and text-based keys, and the use of table relationships to enforce referential integrity.

Invalid Names

Table and column naming restrictions are described in the Tables section of the Appendix. You must ensure that your names do not conflict with SharePoint reserved words and do not contain illegal characters. The Access Name AutoCorrect feature will propagate changes to dependent objects, such as queries and bound controls, but you should thoroughly inspect and test the application to ensure that no required changes were missed. VBA code and expression values, for example, are not automatically corrected.

Compound Indexes

Indexes based on multiple columns are not supported in Web applications, as explained in the Tables section of the Appendix.

Lookup Definitions

Access tables support queries in lookup definitions that are not supported in SharePoint lists. SharePoint Server requires the input source to be a single table with a Long Integer primary key. SQL queries in lookup definitions also must not contain the DISTINCT or DISTINCTROW keywords. When lookups use value lists, the bound column must be the first column.
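
For example, a minimal sketch of a Web-compatible lookup row source, assuming a Departments table with an AutoNumber ID primary key and a DepartmentName text column (both names are illustrative), might be:

SELECT [ID], [DepartmentName] FROM Departments ORDER BY [DepartmentName];

This meets the requirements above: it draws from a single table with a Long Integer key and uses no DISTINCT or DISTINCTROW keywords.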

 

 

Referential Integrity

Declarative referential integrity is not supported for SharePoint lists. Instead, properties have been added to SharePoint Server 2010 lookups to enforce data restrictions. Users can opt to prohibit insertions, deletions, and updates that would create "orphaned" rows in "child" lists.

For example, suppose you have a list of employees with a column showing the department of each employee. The Department column is a lookup to a separate Departments list. Using the Lookup Wizard in Access to configure the column, you can select Enable Data Integrity, as shown in the following figure. This prevents an employee being assigned to a department that doesn’t appear in the Departments list.


In some cases, you want to restrict deletions in parent tables to avoid creating orphans, as with Departments and Employees, but in other cases you want those deletions to propagate, or “cascade,” to the child list. For example, with Orders in one list and Order Items in another, you may want to allow users to delete an order and automatically delete the related line items for that order. In that case, you choose Cascade Delete rather than Restrict Delete in the Lookup Wizard.

These lookup properties are also supported for local Access tables in unpublished Access 2010 databases, but they are separate from the Relationships window that users are familiar with for configuring referential integrity in previous versions. Users must configure a lookup and set Enable Data Integrity before publishing an Access table to SharePoint Server. If referential integrity has already been configured using the Relationships window in Access, users will have to delete the relationship before they can use the Lookup Wizard, which is invoked from the list of field types in the table designer. Tables with relationships that aren't implemented as lookups can't be published, and the lookups must be based on columns with the Long Integer data type.

Primary and Foreign Keys

In non-Web local Access tables, Access supports the use of composite primary and foreign keys, which combine the values in two or more columns to create the key. Access also supports the use of a variety of data types for primary and foreign keys, including text and dates. These are not supported for published applications, because composite and string values cannot be used to create SharePoint Server lookups. String values can be displayed in lookups but the underlying relationship is always based on a numeric ID.

If users have primary and foreign keys based on multiple columns or on text columns, which is quite common, they will need to switch to a Long Integer numeric key before they publish, or they will receive a compatibility error.

The easiest way to achieve compatibility is to add an autonumber column to the parent table, such as Department, and to add a corresponding Long Integer column in the child table as the foreign key. You can then use an update query to place the correct foreign key values in the child table.

For example, if you have a Departments table with a text primary key named Department and you've used these department names as foreign keys in the Employees table, add an AutoNumber DepartmentID column to the Departments table and a Long Integer DepartmentID column to the Employees table. Then run this Access query:

UPDATE Departments INNER JOIN Employees ON Departments.Department = Employees.Department SET Employees.DepartmentID = Departments.DepartmentID;
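
The new key columns can be added in the table designer or, as a minimal sketch, with Access data-definition queries run before the update query above, using the same table and column names as in this example (each statement must be run as a separate data-definition query):

ALTER TABLE Departments ADD COLUMN DepartmentID COUNTER;

ALTER TABLE Employees ADD COLUMN DepartmentID LONG;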

You would also need to delete the old relationship and create a new one using the Lookup Wizard. Additional work would be required to change existing queries, forms, reports, and VBA code to use the new DepartmentID column in the Employees table, allowing you to delete the old text-based foreign key.

 

Creating New Compatible Tables and Importing Legacy Data

In a Web database, the table designer restricts the available options to ones that are Web compatible. It is often much easier to use this designer to create new tables than to work through all the issues raised when attempting to publish an incompatible legacy table. Create Lookup columns to enforce referential integrity, as explained in the previous section covering referential integrity.

You can then create a linked table pointing to your legacy data and create a (client only) append query to append data from the legacy tables to the new Web-compatible tables.
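
For example, a minimal sketch of such an append query, assuming a linked legacy table named LegacyCustomers and a new Web-compatible Customers table with matching columns (the names here are hypothetical), might be:

INSERT INTO Customers ( CustomerName, City ) SELECT CustomerName, City FROM LegacyCustomers;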

One disadvantage of this technique is that you cannot use Name AutoCorrect to fix up names used in legacy dependent objects such as queries and bound controls. You will need to do this manually and carefully test for errors. An alternative is to create client queries with columns aliased to the original names, which can simplify the changes required in dependent objects.

Synchronizing data between web tables and external sources

Web forms, web reports, and web queries cannot work with data from external data sources, such as SQL Server tables or SharePoint lists outside the current application site. To work around this limitation, you may want to build administrative applications that regularly copy data from external sources into the SharePoint lists of Web applications. This enables you to include the data in Web reports or display it read-only in Web forms. Several different strategies can support this.

One option is to create Access linked tables that connect to the external data, and to execute client queries that move the data into your Web tables. These queries can exist in the Web database you want to update or in another database that has tables linked to the SharePoint lists corresponding to your Web tables.

If the Web data that you want to maintain is not used in any lookups, then you can execute queries that first delete all the old data and then append the current data. If lookups require you to preserve existing key values in the data, then you can use a more complex process that updates existing values and handles deletions. In some cases, related child rows may also need to be deleted.
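
As a sketch of the simpler delete-and-append approach, assuming a Web table named Products and a linked external table named ExternalProducts (both names are hypothetical), the client queries might contain:

DELETE * FROM Products;

INSERT INTO Products ( ProductName, ListPrice ) SELECT ProductName, ListPrice FROM ExternalProducts;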

Another option is to perform the data maintenance on a local disconnected copy of your Web database and to rely on Access/SharePoint Server synchronization to propagate the changes automatically when the database is reconnected to the server.

To ensure that data maintenance runs automatically on a defined schedule, you could create a SQL Server Integration Services package. Alternatively, you can use a Windows scheduled task to open an administrative Access application with an autoexec macro or a startup option that executes the data maintenance and closes the application. You can also execute an Access macro from a command line using the /x command-line switch.
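
For example, a Windows scheduled task might run a command line like the following, where the database path and macro name are hypothetical and the path to MSACCESS.EXE varies by installation:

"C:\Program Files\Microsoft Office\Office14\MSACCESS.EXE" "C:\Admin\WebDataMaintenance.accdb" /x RefreshWebTables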

Migrating Legacy Application Objects

You must recreate as Web objects any forms and reports that you want users to run in a browser. You must also recreate as Web objects any supporting queries and macros, and you need to create Web-compatible macros to replace any VBA code. Controls from legacy forms and reports cannot be copied and pasted into Web forms and reports, but control formatting can be copied and pasted.

Legacy application objects can remain in the database without interfering in any way with publishing, and design changes that you make can be synchronized with the server version of the database, enabling easy versioning and deployment. However, these objects can run only in the Access client. Centralizing storage of application objects on a computer that is running SharePoint Server by publishing the application to Access Services improves manageability even for databases that don't contain any Web objects and always run in the rich Access client.

Handling Publishing, Compilation, and Runtime Errors

The Web Compatibility Checker inspects table designs and logs each incompatibility that it finds to a local table named Web Compatibility Issues. The most common incompatibilities relate to primary keys and lookups, which were discussed in the previous section on handling compatibility issues.

However, the Web Compatibility Checker doesn't detect certain types of incompatibilities. These cause publishing errors either when Access attempts to publish incompatible schema that the Web Compatibility Checker overlooked or when Access attempts to populate tables with incompatible data.

After publishing succeeds, Web objects compile asynchronously and can generate compile errors. Even after successful compilation, runtime errors can result from invalid object definitions that don’t interfere with compilation or from logic errors.

Publishing Issues

Access logs publishing issues in a local table named Move to SharePoint Site Issues, and the message informing you that publishing failed provides a link to the table.

Most schema issues that cause publishing to fail after the Web Compatibility Checker reports success are related to expressions. The Expression Builder and IntelliSense guide users toward creation of valid expressions in Access, but users can easily enter invalid expressions, and the Web Compatibility Checker does not evaluate them. As explained in the Expressions section of the Appendix, the expression services used on the client and on the server are different, and some expressions that are valid on the client are not valid on the server.

Another common cause of publishing errors is incompatible data, because the Web Compatibility Checker does not check data values, only data schema. Data values that are valid in Access but not in SharePoint Server will generate errors that are also logged to the local Move to SharePoint Site Issues table.

The following sections discuss several types of data incompatibility.

URLs

The Hyperlink data type in Access uses an underlying memo column to store display text and URLs. SharePoint Server also supports Hyperlink columns. However, it performs validation on Hyperlink URLs that may reject data contained in Access Hyperlink columns. For example, relative URLs are incompatible and must be replaced with ones that are fully qualified. In addition, SharePoint Server rejects URLs with more than 255 characters.

Dates

Access and SharePoint Server both store date/time values using the Double data type: the integer portion of the number represents the number of days since day 0, and the fractional part represents the time as a fraction of a full day. However, the two systems use different timelines, and SharePoint Server does not support dates with underlying numeric values less than 1. In Access, day 0 is December 30, 1899, and earlier dates are stored as negative numbers. In SharePoint Server, the earliest supported date is January 1, 1900, and earlier dates are not supported.

Many legacy Access applications contain dates that cannot convert to SharePoint Server. A common practice in Access is to use day-0 dates in columns designed to show time-only values. In addition, data entry errors in Access applications frequently result in dates prior to 1900, unless such errors have been prevented by validation rules.

To check for Access date values that are not Web compatible, you can create a query that filters for dates prior to January 1, 1900, for example:

SELECT InvoiceID, InvoiceDate FROM Invoices WHERE InvoiceDate < #1/1/1900#;

Compilation Issues

Invalid expressions in data schema definitions, such as in validation rules and calculated columns, can cause publishing errors. However, invalid expressions in Web forms, reports, and queries surface only when the objects compile, which occurs asynchronously after publishing has succeeded.

Access Services logs compilation errors to the USysApplicationLog table, which is accessible through the View Application Log Table button in the Info section of Backstage view. Access uses the status bar to cue the user when issues are pending in the application log.

Runtime Issues

Even after publishing and compilation succeed, invalid expressions can still cause runtime errors. For example, an invalid expression in a form or report control source surfaces only when the object executes, causing the control to display #Error.

When a macro fails at runtime, Access 2010 records the error in the application log.

Another type of runtime error relates to images. All the images in a Web application are available through a single image gallery and must be uniquely named. When synchronizing design changes, image naming conflicts may be resolved by appending “_username” to the name of a new image. This won’t generate an error, but the new or modified form or report may unexpectedly display the wrong image, because the reference is to the original name. The affected image names and control properties must be modified to correct this.

Upgrading Databases to Access 2010

Access 2010 supports the mdb file format for backward compatibility, but to use the new features, including support for Web databases, you must use the accdb format that was introduced in Access 2007. If you have a legacy database that you want to upgrade to take advantage of Access Services, you must convert it to an accdb file. You can expect a smooth upgrade for Access 2007 databases that are already in the accdb format.

For guidance on upgrading an mdb to an accdb, see the white paper, Transitioning Your Existing Access Applications to Access 2007 (http://msdn.microsoft.com/en-us/library/bb203849.aspx).

64-bit VBA Issues

Office 2010 provides 64-bit support primarily to enable Excel and Project users to work with a much larger address space. There are no advantages to running the 64-bit version of Access 2010. However, users who need 64-bit support for Excel or Project may try to run Access and could encounter some incompatibilities in applications that run fine in 32-bit mode. When 64-bit Office is installed on a machine, the user is required to uninstall any 32-bit versions of Office applications, including prior versions. 32-bit versions can be installed after 64-bit is installed, but Microsoft has not thoroughly tested these scenarios. The best practice is to run any 64-bit Office instance on a machine dedicated to that version only. 64-bit Office does not support any ActiveX controls or any COM add-ins.

All compiled VBA must be recompiled in 64-bit instances, meaning that mde and accde Access applications will not run. VBA code containing Declare statements must be rewritten before being recompiled, because pointer and handle values can no longer be contained in variables using the Long data type. Instead they must use the new LongLong or LongPtr data types. In addition, the new PtrSafe indicator must be added after the Declare keyword (Declare PtrSafe…) for the code to compile successfully. Conditional compilation, using #If, must be used in code that needs to compile under both legacy 32-bit and new 64-bit versions of Office. A convenient workaround is to give the few users who really need 64-bit support separate machines for running the applications that need the added memory space.
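
For example, a minimal sketch of a Declare statement written to compile in both environments, using the Windows GetActiveWindow API, looks like this:

#If VBA7 Then
    ' Office 2010 VBA: PtrSafe is required, and window handles use LongPtr
    Private Declare PtrSafe Function GetActiveWindow Lib "user32" () As LongPtr
#Else
    ' Legacy 32-bit VBA: window handles fit in a Long
    Private Declare Function GetActiveWindow Lib "user32" () As Long
#End If

The VBA7 conditional compilation constant is True in Office 2010, whether 32-bit or 64-bit, so that branch is used there while older versions of Office fall back to the legacy declaration.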

Summary

Access provides compelling benefits to end users, who love being able to create their own data tracking and reporting applications, and to IT departments that cannot otherwise fulfill all the application building requirements of their organizations. Access 2010 significantly extends its value proposition for users by integrating with SharePoint Server 2010 to support convenient creation of full-featured Web applications with broad reach, and client applications that are easily shared, revised, and deployed. Of equal significance are the manageability improvements provided through this deep integration with SharePoint Server.

 

Access 2010 Features by Object Type

The following sections discuss the technologies used for implementing the various types of Web objects, the differences between Web and client objects, and many of the new features introduced in Access 2010.

Tables

Tables in a Web database must be compatible with SharePoint lists or publishing will fail, and they are always converted to SharePoint lists when you successfully publish or synchronize the database. This is enforced by the table view used to modify tables in Access 2010, which only allows you to create schema that is compatible with SharePoint Server when you are working in a Web database. Not all system tables are moved to SharePoint Server; system tables other than log tables are not stored as tables on the server. Any tables in a Web database that are linked to SharePoint lists outside the application's site, or to any other type of external data, can only run in the Access client. Linked tables can't be seen by Web objects in the database, even when running in the Access client. Web objects only use Web tables, which transparently link to SharePoint lists in the application site.

A configurable administrative setting determines the maximum size of attachments in Web tables; an attachment that is too big may interfere with publishing or synchronizing. The default limit is 50 MB. Web tables don't support multiple attachment columns in one record, and when Web tables are taken offline, you can't add attachments.

SharePoint Server doesn’t support certain table names that are allowed in Access client applications, and tables with illegal names will prevent publishing. The following illegal names conflict with reserved SharePoint list names: Lists, Docs, WebParts, ComMd, Webs, Workflow, WFTemp, Solutions, and ReportDefinitions. In addition, the following illegal names conflict with tables created during publishing: MSysASO, MSsLocalStatus, UserInfo, and USysApplicationLog.

To be publishable, table and column names, as well as the names of all other Access objects, cannot contain any of the following characters: / \ : * ? " < > | # { } % ~ & ; or the tab character.

SharePoint lists implement a form of indexing to speed filtering performance, and single-field indexes in Access tables propagate to the server. However, SharePoint Server doesn’t support indexes composed of multiple columns, and multi-column Access indexes prevent tables from being Web compatible. This also means that composite keys and uniqueness constraints are not Web compatible. A workaround for enforcing uniqueness based on multiple column values is to use a BeforeChange data macro, discussed in the section on data macros below.

Native Access tables limit the total number of columns (to 255) and the total number of characters in a row (4000 with Unicode compression set), but they do not limit the number of columns of a particular type. To be Web compatible, however, tables must also conform to the SharePoint Server 2010 limits for each data type, which are 5 times greater than the limits enforced in previous versions. Here are default limits for how many columns of a type you can have in a SharePoint list: date/time 40, bit 30, uniqueidentifier 5, number 60, note 160, and text 320.

The maximum number of lookup columns and multi-value columns is also limited. Published memo column values are truncated if they contain more than 8,192 characters.

Both native and Web tables support calculated columns based on expressions. This is a new feature in Access 2010 that can provide significant performance improvements compared to calculating values when queries execute. Centralizing the calculation definition in the table also improves reliability and consistency. If the application logic changes, you make the change in one place, rather than attempting to find every place the calculation was used. Because the calculation is maintained by the database engine, this storage of a calculated value does not violate normalization rules. Expressions that are not Web-compatible, which might also appear in column and table validation rules, will prevent publishing, even in Web databases that appear to meet the requirements of the compatibility checker. The section below on expressions provides more details on expression compatibility.
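
For example, a minimal sketch of a Web-compatible calculated column, assuming Quantity and UnitPrice columns in the same table (the names are illustrative), might use the expression:

[Quantity] * [UnitPrice]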

Forms

Web forms in Access 2010 enable users with no Web development experience to create full-featured Web pages. The design experience is familiar to anyone who has worked with Access client forms, and easy to learn for new users. On the server, Access Services does the work of translating the user's design into ASP.NET pages that run in any standard browser (IE7, IE8, Firefox, and Safari are all explicitly supported). These pages do not use any ActiveX controls. Macros that users attach to form and control events are implemented as JavaScript code. Popup forms are implemented as floating divs. Design themes are implemented as CSS style sheets. The resulting pages are highly responsive and frequently employ Asynchronous JavaScript and XML (AJAX) to refresh views rather than perform full postbacks, which provides snappy performance. The browser Back and Forward buttons are fully supported.

Like all Access objects in Web applications, forms are serialized using the open Access Application Transfer Protocol (MS-AXL), which is documented at http://msdn.microsoft.com/en-us/library/dd927584.aspx. Forms also make use of the open standard XAML protocol used by Windows Presentation Foundation (WPF). Access uses these protocols to synchronize design changes with the server and to generate ASP.NET pages.

In the Access designer, Layout view for creating Web forms and reports is enhanced to make it much easier to control the exact positioning of controls by splitting and combining columns and cells. On the server, these layouts get implemented as hidden HTML tables. Design view is not available for Web forms.

Experienced Access users with little or no Web development experience are likely to be delighted at their ability to create attractive, highly functional pages using only the skills they’ve developed in Access. Several new form features are geared specifically toward creating “Web-like” interfaces.

For example, the new Navigation form creates Web forms (or client forms) with versatile hierarchical menus that can display other forms and reports embedded in the resulting page. Parameters can filter the record source query of the displayed object. A new Browser control supports parameterized URLs based on form control values, enabling easy creation of "mashup" interfaces that embed maps or other context-specific external content.

Web forms provide parity in the feature sets available in the browser and in the Access client, as do all Web objects. However, an application may need to behave differently based on the runtime environment. Web applications support separate Web and Client startup form properties, available in Current Database settings in Backstage view. In addition, the IsServer and IsClient expressions return True or False, allowing macros to branch based on the runtime environment. Web forms “just work” in the client with no special programming required. The figures that follow show the same form, first in the Access client and then in a browser.



One difference between Web and client forms could occasionally cause confusion: client forms allow expressions to reference all columns that appear in the record source of the form, even if those columns are not bound to controls on the form. However Web forms only support references to columns that are bound to controls on the form. Users may need to add invisible controls to work around this limitation. The use of such hidden controls is a common practice in Access client reports, which have always enforced a similar restriction.

Reports

Web reports use SQL Server Reporting Services. They are deployed to the server using AXL and Report Definition Language (RDL). As described below, the Access Database Service mediates all data access to the SharePoint lists of a Web database, providing the same caching behavior and performance benefits available to forms.

Web reports support exporting to PDF from the browser, which provides a great printing experience, and users can also export to Word or Excel document formats. A handy new feature in Access 2010 enables subforms to host reports, and Navigation forms display reports by using this feature.

Both forms and reports support standard and custom Office Themes for configuring display appearance. Organizations can specify preferred themes as a way of encouraging design consistency. When a user changes a database theme, all the forms and reports that use the theme are affected. To propagate theme changes in Web objects to the server, you must open the objects in layout view, save them, and then synchronize. This pattern also applies to Name AutoCorrect, an Access client feature that propagates name changes to dependent objects. The dependent objects aren’t renamed until you open and save them, and you can then synchronize to rename the corresponding objects on the server.

To publish and synchronize successfully, the names of Web forms, reports, and controls, as well as all other Access objects, cannot contain any of the following characters: / \ : * ? " < > | # { } % ~ & ; or the tab character. In addition, Web form and report controls must have names that begin with an uppercase or lowercase letter or an underscore (no numbers), and the names must contain only uppercase or lowercase letters, underscores, or numbers (the naming rules for C# variables). This is more restrictive than the list of allowed characters in names for client forms and reports.

In Web forms, record source queries that include lookup columns automatically join to the related tables to display the text values for lookups. However, in Web reports this automatic joining behavior for lookups doesn’t occur — you must explicitly add the related tables to your report queries. Design view is not supported for Web reports.

Queries

Web queries are stored and implemented by Access Services using AXL and Collaborative Application Markup Language (CAML). The query processor in Access Services caches execution plans as well as data, and pushes as much filtering to the server as possible to improve performance. On disconnected clients, Access uses the client-side ACE query processor to execute Web queries against locally cached data. Client queries can work with Web tables or linked tables from other data sources and can join the two.

SQL View is not available for Web queries in the query designer. Users must employ the Access query design grid, which enforces Web query restrictions. In the Access client, you can still use VBA to get a Web query’s SQL representation by retrieving the SQL property of a DAO QueryDef object. In the Immediate Window of the Visual Basic Editor, enter the following, replacing “QueryName” with the actual name of your query:

?CurrentDb.QueryDefs!QueryName.SQL

Web queries support projection of selected columns from multiple joined data sources, including both inner and outer joins. The data sources for a Web query can include other saved Web queries in addition to tables. Web queries also support filtering based on multiple criteria, including expressions, sorting based on multiple columns, and calculated columns based on expressions.

However, Web queries do not support the full range of features available in client queries. They do not support aggregates, crosstabs, unions, Cartesian products (cross joins), subqueries, or any actions that modify data or create new objects.

Several client query properties are unavailable in Web queries, including Output All Fields (“SELECT *” in SQL), Top Values (TOP), Unique Values (DISTINCT), UniqueRecords (DISTINCTROW), Max Records, and Subdatasheet properties.

Although Web queries cannot calculate aggregate values, aggregation is fully supported on Web reports. In addition, as explained below, data macros can maintain aggregate values in tables.

Web queries support the use of parameters. Macros that open forms, reports, and query datasheets, and ones that set subform source object property values with the new BrowseTo macro action, all allow you to specify parameter values for the record source queries of these objects. You can use any valid expression to set the parameter, including expressions that refer to data values in form controls.

Traditional Access client queries are very flexible about how users can define parameters. A Parameters dialog allows users to specify the name and data type of each parameter, but for most queries this is optional. Users can simply enclose any text in square brackets and the Access client query processor treats the value as a parameter name unless it is the name of a column in the query. In Web queries, however, all parameters must be explicitly defined with the Parameters dialog.

Expressions that define calculated columns or criteria in Web queries must be compatible with the Excel-based expression library that Access Services uses. For more information on this, see the Expressions section below. In addition, expressions in Web queries cannot reference controls on forms or macro variables.

Configurable administrative settings limit many aspects of Web query design to protect resource usage. For example, you can limit the number of outer joins, which are resource intensive, in addition to the number of output columns, data sources, etc. See the section covering administration below for more details. Also, note that each lookup column a query uses adds an extra data source, because of the hidden join needed to retrieve the displayed value.

Macros

Web objects do not support VBA code. Programmability for Web objects relies instead on Access macros, which have been significantly enhanced to provide greater ease of use, security, resilience, and manageability.

The macro designer was completely recreated in Access 2010 to improve ease of use and to support more complex logic. Users select from context-sensitive options to generate readable, structured, collapsible blocks of code. Full IntelliSense guides users to appropriate syntax and argument values.

Access 2010 supports two different types of macros. UI macros, which are simply called Macros in the Access user interface, extend the capabilities of traditional Access macros to respond to user actions and to control application flow and navigation. In Web forms running in a browser, these macros are implemented as JavaScript. Data macros, which are new in Access 2010, are similar to SQL triggers. They run in response to data modifications. On the server, data macros are implemented as SharePoint Workflow actions.

These two types of macros create a clean separation between presentation tier and data tier code in Access applications. An architectural shortcoming of many traditional Access applications that rely heavily on VBA code is that they often muddy the distinction between these logical tiers. UI macros can only perform data-related actions by calling named macros, which are saved data macros that aren’t attached to specific table events. Data macros can also call named macros, supporting code reuse and maintainability. Named macros support parameters, as shown in the following figure.


In addition to parameters, data macros support the use of local variables, and UI macros support both local and global variables. Macro error handling, originally added in Access 2007, is robust: for debugging, the MacroError object provides properties such as Number, Description, and ActionName, which you can record in the application log with the new LogEvent action. You can also nest If/Then/ElseIf/Else blocks to create complex conditional logic, and macros can be serialized to and loaded from XML to support sharing and reuse.

All macros that run in Web objects on the server are mediated by Access Services with configurable throttles to ensure safe execution. Web macros support a subset of the macro actions that client macros support, and client macros can use a sandboxed subset of actions to create applications that don’t require full trust.

 

 

Data Macros

Data Macros, introduced in Access 2010, are available for Web tables in Web databases or native tables in unpublished databases. Even tables linked to Access data in other databases support data macros, but the data macros must be defined in the database containing the native tables, not the links. Data macros use an event model similar to that of triggers in SQL Server, to enable reliable enforcement of data rules.

Once you define a data macro for a table event, that macro will run no matter how the data is accessed. This provides a significant new capability for Access that enables much more application reliability than was previously available for Access tables.

In past versions, the Access database engine was able to enforce referential integrity defined in relationships between tables, and domain integrity enforced by table-level and column-level validation rules. Users could also configure columns to enforce rules for unique and required values. (These constraints are all still supported in Web databases.) Any other rules for data, however, relied on logic in data entry forms for enforcement. Developers attempted to prevent users from circumventing the rules by hiding tables from view, but this was never foolproof. In addition, having data rules enforced in multiple forms and possibly in multiple applications invited inconsistency.

In tables that have been published to SharePoint lists, the rules are enforced on the server by SharePoint Workflow actions. In native Access tables, data macro execution is enforced locally by the Access database engine, which provides parity with the actions available to server-side data macros.

Here are a few examples of data macro scenarios you might find in a Donations Management database:

●    Validate that a contributor doesn’t have outstanding donations before accepting a new donation.

●    Keep a history of any changes to a donation record.

●    Send a “Thank You” email when a contributor makes a donation greater than $1,000.

●    Maintain a total of all donations and the last donation date in summary columns of the contributors table. (Although Web reports support aggregates, Web queries do not. So using data macros to maintain aggregate values in tables can be useful.)

You can attach data macros to the BeforeChange, BeforeDelete, AfterInsert, AfterUpdate, and AfterDelete events of tables. Data macros attached to After events, and all UI macros, can call named data macros associated with a table. The calling macros can pass in parameter values and can get back a collection of return values as well as errors. Errors are also logged to the USysApplicationLog table, which is easily discovered in Backstage view and is maintained in both Web and non-Web databases.

The BeforeChange and BeforeDelete events are designed to support fast, light-weight operations. Data macros attached to these events can inspect the old and new values in the current record, and can compare them with a record in the current table or another table by using LookupRecord.  They can also use SetField to alter data in the row being changed or prevent the change from occurring.  To assure that the operations remain lightweight, however, they cannot iterate over a collection of records.  The BeforeChange event fires for both inserts and updates, but data macros can use the IsInsert expression to distinguish the type of operation.

The AfterUpdate, AfterInsert, and AfterDelete events support longer-running operations that require iteration. The old and updated values are available, and macros invoked from these events can inspect and modify other records in the table or in other tables. Typically, users should not use these events to modify the current record; the BeforeChange and BeforeDelete events are more appropriate for that.

Occasionally, a user may need a data macro to modify the current record, potentially causing the macro to be called again recursively. Data macros are limited to 10 levels of recursion, but they can call the Updated("FieldName") function, which returns True or False, to determine whether a particular column was affected by the current change. Judicious use of this feature can usually prevent cyclical recursion.
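As an illustration of that guard, the following sketch (table and column names are hypothetical, and the layout only approximates the macro designer's text view) shows an AfterUpdate data macro that stamps a LastDonationDate column only when some other column changed, so the update it performs does no further work when the event fires again:

    If Updated("LastDonationDate") = False Then
        EditRecord
            SetField
                Name    LastDonationDate
                Value   Now()
        End EditRecord
    End If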

A few cautions concerning data macros:

●    In some instances, when SharePoint lists are taken offline in disconnected Access applications, data macro execution is delayed until the user reconnects. Data changes made on the disconnected client are automatically propagated to the server when the connection is restored, and the data macros run on the server.

●    Unlike SQL Server triggers, data macros do not run within a transactional context. Each individual data operation in Access 2010 is atomic, but Access does not wrap a series of operations in a transaction.

●    Data macros cannot process data from multi-valued or attachment columns.

●    Access 2007 SP1 can read but not write data in linked Access 2010 tables with data macros, because the Access 2007 data engine can’t execute them.

Expressions

Access supports the use of expressions in form and report control sources and events, query criteria, calculated columns in queries and tables, validation rules for tables, columns, and controls, default values for columns or controls, and macro arguments.

Access expressions are similar to Excel formulas, and Access Services uses a modified version of the Excel Calculation Services library. One important modification is to support the use of database nulls. However, this library does not provide complete parity with the Access client expression service.

The Expression Builder is significantly improved in Access 2010 to show a context-sensitive list of available options and to provide IntelliSense support, which is available anywhere that users can enter expressions.

Incompatible expressions used in the validation rules or calculated columns of Access tables in unpublished client databases may not be detected by the compatibility checker and could cause compile errors when the database is published.

Access could also fail to detect an incompatible expression in the design of a Web object, causing a runtime error when the object executes. For example, a form control that displays “#Error” could indicate that its control source uses an invalid expression. A Web query containing an incompatible expression returns a runtime error indicating an invalid expression.

Access 2010 adds support for expression keywords targeted at Web applications. For example, you can use CurrentWebUser to get the email address, display name, or network name of the current user when IsServer returns True.

Here are a few expression issues that can cause errors when executed on the server:

●    You must fully qualify control references: Use Forms!MyForm!MySubform.Form!MyControl, not MySubform.Form!MyControl

●    Don’t rely on type coercion. For example, If conditions must return Boolean values. Use If (15<>0), not If (15). When possible, use the Format function to convert expressions to the proper type.

●    Dates do coerce to doubles, but they use a different numbering system on the server. SharePoint Server does not recognize dates prior to 1/1/1900. Use FormatDateTime if you need to convert to strings.

●    For Booleans, use True/False, not -1/0.

●    Access Services doesn’t support the DateAdd, DatePart, and DateDiff functions. Instead, use DateSerial, Day, Month, and Year.

●    Field references in expressions in forms must refer to fields used in bound controls.

●    You can’t use the Between operator, which is commonly used in expressions in query criteria. Use >= and <= instead.

Expressions in legacy databases require scrutiny to assure successful publishing, but the new server-based expression service is very full-featured and supports almost all the tasks that Access users have come to expect. In addition, the new Expression Builder makes it much easier to create compliant expressions in Web objects.
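As a concrete illustration of the adjustments above (the field names are hypothetical), here are two client expressions alongside Web-compatible rewrites:

    Client query criterion:       [OrderDate] Between #1/1/2010# And #3/31/2010#
    Web-compatible criterion:     [OrderDate] >= #1/1/2010# And [OrderDate] <= #3/31/2010#

    Client calculated column:     IIf([Amount], "Has amount", "No amount")
    Web-compatible column:        IIf([Amount] <> 0, "Has amount", "No amount")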

 

This is a preliminary document and may be changed substantially prior to final commercial release of the software described herein.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place or event is intended or should be inferred.

© 2009 Microsoft Corporation. All rights reserved.

Microsoft, Excel, InfoPath, MSDN, the Office logo, Outlook, PowerPoint, SharePoint, Visual Basic, Visual Studio, Win32, and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

Explore Microsoft SharePoint 2013

  1. Configuring the Base Configuration test lab.
  2. Installing and configuring a new server named SQL1.
  3. Installing SQL Server 2012 on the SQL1 server.
  4. Installing SharePoint Server 2013 on the APP1 server.
  5. Installing and configuring a new server named WFE1.
  6. Installing SharePoint Server 2013 on WFE1.
  7. Demonstrating the facilities of the default Contoso team site on WFE1.
  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Configuring the intranet collaboration features on APP1.
  3. Demonstrating the intranet collaboration features on APP1.
  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Create a My Site site collection and configure settings.
  3. Configure Following settings.
  4. Configure community sites.
  5. Configure site feeds.
  6. Demonstrate social features.
  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Configuring AD FS 2.0.
  3. Configuring SAML-based claims authentication.
  4. Demonstrating SAML-based claims authentication.
  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Configuring forms-based authentication.
  3. Demonstrating forms-based authentication.

Manually Back Up Team Foundation Server

Visual Studio 2012

You can manually back up data for Visual Studio Team Foundation Server by using the tools that SQL Server provides. As of Cumulative Update 2, TFS includes a Scheduled Backups feature to automatically configure backups. However, you might need to configure backups manually if your deployment has security restrictions that prevent use of that tool. To manually back up Team Foundation Server, you must not only back up all databases that the deployment uses but also synchronize the backups to the same point in time. You can manage this synchronization most effectively if you use marked transactions. If you routinely mark related transactions in every database that Team Foundation uses, you establish a series of common recovery points in those databases. If you regularly back up those databases, you reduce the risk of losing productivity or data because of equipment failure or other unexpected events.

If your deployment uses SQL Server Reporting Services, you must back up not only the databases but also the encryption key. For more information, see Back Up the Reporting Services Encryption Key.

The procedures in this topic explain how to create maintenance plans that perform either a full or an incremental backup of the databases and how to create tables and stored procedures for marked transactions. For maximum data protection, you should schedule full backups to run daily or weekly and incremental backups to run hourly. You can also back up the transaction logs. For more information, see the following page on the Microsoft website: Creating Transaction Log Backups.

Note

Many procedures in this topic specify the use of SQL Server Management Studio. If you installed SQL Server Express Edition, you cannot use that tool unless you download SQL Server Management Studio Express. To download this tool, see the following page on the Microsoft website: Microsoft SQL Server 2008 Management Studio Express.


Required Permissions          

To perform this procedure, you must be a member of all the following groups:

  • The Administrators security group on the server that is running the administration console for Team Foundation.

  • The SQL Server System Administrator security group. Alternatively, your SQL Server Perform Back Up and Create Maintenance Plan permissions must be set to Allow on each instance of SQL Server that hosts the databases that you want to back up. 

  • The Farm Administrators group in SharePoint Foundation 2010, or an account with the permissions required to back up the farm.

        

Identify Databases              


            

Before you begin, you should take the time to identify all the databases that you will need to back up in case you ever have to fully restore your deployment. This includes databases for SharePoint Foundation 2010 and SQL Server Reporting Services. These might be on the same server, or you might have databases distributed across multiple servers. For a complete table and description of TFS databases, including the default names for the databases, see Understanding Backing Up Team Foundation Server.

To identify databases

  1. Open SQL Server Management Studio, and connect to the database engine.

  2. In SQL Server Management Studio, in Object Explorer, expand the name of the server and then expand Databases.

  3. Review the list of databases and identify those used by your deployment.

    For example, Fabrikam, Inc.’s TFS deployment is a single-server configuration, and it uses the following databases:

    • the configuration database (Tfs_Configuration)

    • the collection database (Tfs_DefaultCollection)

    • the database for the data warehouse (Tfs_Warehouse)

    • the reporting databases (ReportServer and ReportServerTempDB)

    • the databases used by SharePoint Foundation 2010 (WSS_AdminContent, WSS_Config, WSS_Content, and WSS_Logging)

      Important

      Unlike the other databases in the deployment, the databases used by SharePoint Foundation 2010 should not be backed up using the tools in SQL Server. Follow the separate procedure “Create a Back Up Plan for SharePoint Foundation 2010” later in this topic for backing up these databases.

        

Create tables in databases              


            

To make sure that all databases are restored to the same point, you can create a table in each database to mark transactions. You can use the Query function in SQL Server Management Studio to create an appropriate table in each database.

Important

Do not create tables in any databases that SharePoint Products uses.

To create tables to mark related transactions in databases that Team Foundation uses

  1. Open SQL Server Management Studio, and connect to the database engine.

  2. In SQL Server Management Studio, highlight the name of the server, open the submenu, and then choose New Query.

    The Database Engine Query Editor window opens.

  3. On the Query menu, choose SQLCMD Mode.

    This enables the Query Editor to execute sqlcmd statements. If the Query menu does not appear, click anywhere in the new query in the Database Engine Query Editor window.

  4. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

    Note

    TFS_Configuration is the default name of the configuration database. This name is customizable and might vary.

  5. In the query window, enter the following script to create a table in the configuration database:

     
    -- Create a one-row marker table in the configuration database; the marking
    -- procedure updates this row inside a transaction that is begun WITH MARK.
    Use Tfs_Configuration
    Create Table Tbl_TransactionLogMark
    (
        logmark int
    )
    GO
    Insert into Tbl_TransactionLogMark (logmark) Values (1)
    GO
    
  6. Choose the F5 key to run the script.

    If the script is well-formed, the message “(1 row(s) affected.)” appears in the Query Editor.

  7. (Optional) Save the script.

  8. Repeat steps 4−7 for every database in your deployment of TFS, except for those used by SharePoint Products. In the fictitious Fabrikam, Inc. deployment, you would repeat this process for all of the following databases:

    • Tfs_Warehouse

    • Tfs_DefaultCollection

    • ReportServer

    • ReportServerTempDB

        

            

After the tables have been created in each database that you want to back up, you must create a procedure for marking the tables.

To create a stored procedure to mark transactions in each database that Team Foundation Server uses

  1. In SQL Server Management Studio, open a query window, and make sure that SQLCMD Mode is turned on.

  2. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

  3. In the query window, enter the following script to create a stored procedure to mark transactions in the configuration database:

     
    -- Marks the transaction log: begins a transaction WITH MARK using the
    -- supplied name, updates the marker table, and commits.
    Create PROCEDURE sp_SetTransactionLogMark
    @name nvarchar (128)
    AS
    BEGIN TRANSACTION @name WITH MARK
    UPDATE Tfs_Configuration.dbo.Tbl_TransactionLogMark SET logmark = 1
    COMMIT TRANSACTION
    GO
    
  4. Choose the F5 key to run the procedure.

    If the procedure is well-formed, the message “Command(s) completed successfully.” appears in the Query Editor.

  5. (Optional) Save the procedure.

  6. Repeat steps 2−5 for every database in your deployment of TFS.  In the Fabrikam, Inc. deployment, the administrator, Jill, repeats this process for all of the following databases:

    • Tfs_Warehouse

    • Tfs_DefaultCollection

    • ReportServer

    • ReportServerTempDB

    Tip

    Make sure that you select the name of the database that you want to create the stored procedure in from the Available Databases list on the SQL Editor toolbar before you create the procedure. Otherwise, when you run the script, the command will display an error stating that the stored procedure already exists.

        

            

To make sure that all databases are marked, you can create a procedure that will run all the procedures that you just created for marking the tables. Unlike the previous procedures, this procedure runs only in the configuration database.

To create a stored procedure that will run all stored procedures for marking tables

  1. In SQL Server Management Studio, open a query window, and make sure that SQLCMD Mode is turned on.

  2. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

  3. In the query window, create a stored procedure that executes the stored procedures that you created in each database that TFS uses. Replace ServerName with the name of the server that is running SQL Server, and replace Tfs_CollectionName with the name of the database for each team project collection.

    In the example deployment, the name of the server is FABRIKAMPRIME, and there is only one team project collection in the deployment: the default collection (DefaultCollection) that was created when Jill installed Team Foundation Server. With that in mind, Jill creates the following script:

     
    -- Calls the marking procedure in every TFS database so that all of the
    -- databases share a single named transaction mark.
    CREATE PROCEDURE sp_SetTransactionLogMarkAll
    @name nvarchar (128)
    AS
    BEGIN TRANSACTION
    EXEC [FABRIKAMPRIME].Tfs_Configuration.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].ReportServer.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].ReportServerTempDB.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].Tfs_DefaultCollection.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].Tfs_Warehouse.dbo.sp_SetTransactionLogMark @name
    COMMIT TRANSACTION
    GO
    
  4. Choose the F5 key to run the procedure.

    Note

    If you have not restarted SQL Server Management Studio since you created the stored procedures for marking transactions, one or more red wavy lines might appear under the name of the server and the names of the databases. However, the procedure should still run.

    If the procedure is well-formed, the message “Command(s) completed successfully.” appears in the Query Editor.

  5. (Optional) Save the procedure.

        

            

When you have a procedure that will run all stored procedures for table marking, you must create a procedure that will mark all tables with the same transaction marker. You will use this marker to restore all databases to the same point.

To create a stored procedure to mark the tables in each database that Team Foundation Server uses

  1. In SQL Server Management Studio, open a query window, and make sure that SQLCMD Mode is turned on.

  2. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

  3. In the query window, enter the following script to mark the tables with ‘TFSMark’:

     
    EXEC sp_SetTransactionLogMarkAll 'TFSMark'
    GO
    
    Note

    TFSMark is an example of a mark. You can use any sequence of supported letters and numbers in your mark. If you have more than one marked table in the databases, record which mark you will use to restore the databases. For more information, see the following page on the Microsoft website: Using Marked Transactions.

  4. Choose the F5 key to run the procedure.

    If the procedure is well-formed, the message “(1 row(s) affected)” appears in the Query Editor. The WITH MARK option applies only to the first “BEGIN TRAN WITH MARK” statement for each table that has been marked.

  5. Save the procedure.
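The payoff for marking comes at restore time, when every database can be rolled forward to the same named mark. The fragment below is only an illustrative sketch for one collection database (the backup file paths are hypothetical); follow the Restore Data to the Same Location topic for the supported procedure:

    -- Restore the most recent full backup, leaving the database ready to roll forward.
    RESTORE DATABASE Tfs_DefaultCollection
        FROM DISK = N'\\BackupShare\Tfs_DefaultCollection.bak'
        WITH NORECOVERY;
    GO
    -- Roll the transaction log forward and stop at the shared mark.
    RESTORE LOG Tfs_DefaultCollection
        FROM DISK = N'\\BackupShare\Tfs_DefaultCollection.trn'
        WITH STOPATMARK = 'TFSMark', RECOVERY;
    GO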

        

            

Now that you have created and stored all the procedures that you need, you must schedule the table-marking procedure to run just before the scheduled backups of the databases. You should schedule this job to run approximately one minute before the maintenance plan for the databases runs.

To create a scheduled job for table marking in SQL Server Management Studio

  1. In Object Explorer, expand SQL Server Agent, open the Jobs menu, and then choose New Job.

    The New Job window opens.

  2. In Name, specify a name for the job. For example, Jill types the name “MarkTableJob” for her job name.

  3. (Optional) In Description, specify a description of the job.

  4. In Select a page, choose Steps and then choose New.

    The New Job Step window opens.

  5. In Step Name, specify a name for the step.

  6. In Database, choose the name of the configuration database. For example, Jill’s deployment uses the default name for that database, TFS_Configuration, so she chooses that database from the drop-down list.

  7. Choose Open, browse to the procedure that you created for marking the tables, choose Open two times, and then choose OK.

    Note

    The procedure that you created for marking the tables runs the following step:

    EXEC sp_SetTransactionLogMarkAll 'TFSMark'

  8. In Select a page, choose Schedules, and then choose New.

    The New Job Schedule window opens.

  9. In Name, specify a name for the schedule.

  10. In Frequency, change the frequency to match the plan that you will create for backing up the databases. In the example deployment, Jill wants to run incremental backups daily at 2 A.M., and full backups on Sunday at 4 A.M. For marking the databases for the incremental backups, she changes the value of Occurs to Daily. When she creates another job to mark the databases for the weekly full backup, she keeps the value of Occurs at Daily, and selects the Sunday check box.

  11. In Daily Frequency, change the occurrence so that the job is scheduled to run one minute before the backup for the databases, and then choose OK. In the example deployment, in the job for the incremental backups, Jill specifies 1:59 A.M. In the job for the full backup, Jill specifies 3:59 A.M.

  12. In New Job, choose OK to finish creating the scheduled job.

        

Create a maintenance plan for full backups              


            

After you create a scheduled job for marking the databases, you can use the Maintenance Plan Wizard to schedule full backups of all of the databases that your deployment of TFS uses.

Important

If your deployment is using the Enterprise or Datacenter editions of SQL Server, but you think you might want to restore databases to a server running Standard edition, you must use a backup set that was made with SQL Server compression disabled. Unless you disable data compression, you will not be able to successfully restore Enterprise or Datacenter edition databases to a server running Standard edition. You should turn off compression before creating your maintenance plans. To turn off compression, follow the steps in the Microsoft Knowledge Base article.
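The Knowledge Base article describes the supported steps; as a rough sketch of the instance-level setting involved (verify it against that guidance before relying on it), the backup compression default can be turned off with sp_configure, after which backups that use the server default are created uncompressed:

    -- Disable backup compression by default for this instance of SQL Server.
    EXEC sp_configure 'backup compression default', 0;
    RECONFIGURE;
    GO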

To create a maintenance plan for full backups

  1. In SQL Server Management Studio, expand the Management node, open the Maintenance Plans sub-menu, and then choose Maintenance Plan Wizard.

  2. On the welcome page for the SQL Server Maintenance Plan Wizard, choose Next.

    The Select Plan Properties page appears.

  3. In the Name box, specify a name for the maintenance plan.

    For example, Jill decides to create a plan for full backups named TfsFullDataBackup.

  4. Choose Single schedule for the entire plan or no schedule, and then choose Change.

  5. Under Frequency and Daily Frequency, specify options for your plan. For example, Jill specifies a weekly backup to occur on Sunday in Frequency, and specifies 4 A.M. in Daily Frequency.

    Under Duration, leave the default value, No end date. Choose OK, and then choose Next.

  6. On the Select Maintenance Tasks page, select the Backup Database (Full), Execute SQL Server Agent Job, and Back up Database (Transaction Log) check boxes, and then choose Next.

  7. On the Select Maintenance Task Order page, change the order so that the full backup runs first, then the Agent job, and then the transaction log backup, and then choose Next.

    For more information about this dialog box, choose the F1 key. Also, search for Maintenance Plan Wizard on the following page of the Microsoft website: SQL Server Books Online.

  8. On the Define Back Up Database (Full) Task page, choose the down arrow, choose All Databases, and then choose OK.

  9. Specify the backup options for saving the files to disk or tape, as appropriate for your deployment and resources, and then choose Next.

  10. On the Define Execute SQL Server Agent Job Task page, select the check box for the scheduled job that you created for table marking, and then choose Next.

  11. On the Define Back Up Database (Transaction Log) Task page, choose the down arrow, choose All Databases, and then choose OK.

  12. Specify the backup options for saving the files to disk or tape as appropriate for your deployment and resources, and then choose Next.

  13. On the Select Report Options page, specify report distribution options, and then choose Next two times.

  14. On the Complete the Wizard page, choose Finish.

    SQL Server creates the maintenance plan and backs up the databases that you specified based on the frequency that you specified.

        

            

You can use the Maintenance Plan Wizard to schedule differential backups for all databases that your deployment of TFS uses.

Important

SQL Server Express does not include the Maintenance Plan Wizard. You must manually script the schedule for your differential backups. For more information, see the following topic on the Microsoft website: How to: Create a Differential Database Backup (Transact-SQL).

To create a maintenance plan for differential backups

  1. Log on to the server that is running the instance of SQL Server that contains the databases that you want to back up.

  2. Choose Start, choose All Programs, choose Microsoft SQL Server 2008, and then choose SQL Server Management Studio.

    1. In the Server type list, choose Database Engine.

    2. In the Server name and Authentication lists, choose the appropriate server and authentication scheme.

    3. If your instance of SQL Server requires it, in User name and Password, specify the credentials of an appropriate account.

    4. Choose Connect.

  3. In SQL Server Management Studio, expand the Management node, open the sub-menu, choose Maintenance Plans, and then choose Maintenance Plan Wizard.

  4. On the welcome page for the SQL Server Maintenance Plan Wizard, choose Next.

  5. On the Select Plan Properties page, in the Name box, specify a name for the maintenance plan.

    For example, you could name a plan for differential backups TfsDifferentialBackup.

  6. Choose Single schedule for the entire plan or no schedule, and then choose Change.

  7. Under Frequency and Daily Frequency, specify options for your backup plan.

    Under Duration, leave the default value, No end date. Choose OK, and then choose Next.

  8. On the Select Maintenance Tasks page, select the Back up Database (Differential) check box, and then choose Next.

  9. On the Define Back Up Database (Differential) Task page, choose the down arrow, choose All Databases, and then choose OK.

  10. Specify the backup options for saving the files to disk or tape as appropriate for your deployment and resources, and then choose Next.

  11. On the Select Report Options page, specify report distribution options, and then choose Next two times.

  12. On the Complete the Wizard page, choose Finish.

    SQL Server creates the maintenance plan and backs up the databases that you specified based on the frequency that you specified.

        

            

You can use the Maintenance Plan Wizard to schedule transaction log backups for all databases that your deployment of TFS uses.

Important

SQL Server Express does not include the Maintenance Plan Wizard. You must manually script the schedule for transaction-log backups. For more information, see the following topic on the Microsoft website: How to: Create a Transaction Log Backup (Transact-SQL).

To create a maintenance plan for transaction log backups

  1. Log on to the server that is running the instance of SQL Server that contains the databases that you want to back up.

  2. Choose Start, choose All Programs, choose Microsoft SQL Server 2008, and then choose SQL Server Management Studio.

  3. In the Server type list, choose Database Engine.

    1. In the Server name and Authentication lists, choose the appropriate server and authentication scheme.

    2. If your instance of SQL Server requires it, in User name and Password, specify the credentials of an appropriate account.

    3. Choose Connect.

  4. In SQL Server Management Studio, expand the Management node, open the submenu, choose Maintenance Plans, and then choose Maintenance Plan Wizard.

  5. On the welcome page for the SQL Server Maintenance Plan Wizard, choose Next.

    The Select Plan Properties page appears.

  6. In the Name box, specify a name for the maintenance plan.

    For example, you could name a plan to back up transaction logs TfsTransactionLogBackup.

  7. Choose Single schedule for the entire plan or no schedule, and then choose Change.

  8. Under Frequency and Daily Frequency, specify options for your plan.

    Under Duration, leave the default value, No end date.

  9. Choose OK, and then choose Next.

  10. On the Select Maintenance Tasks page, select the Execute SQL Server Agent Job and Back up Database (Transaction Log) check boxes, and then choose Next.

  11. On the Select Maintenance Task Order page, change the order so that the Agent job runs before the transaction-log backup, and then choose Next.

    For more information about this dialog box, choose the F1 key. Also, search for Maintenance Plan Wizard on the following page of the Microsoft website: SQL Server Books Online.

  12. On the Define Execute SQL Server Agent Job Task page, select the check box for the scheduled job that you created for table marking, and then choose Next.

  13. On the Define Back Up Database (Transaction Log) Task page, choose the down arrow, choose All Databases, and then choose OK.

  14. Specify the backup options for saving the files to disk or tape as appropriate for your deployment and resources, and then choose Next.

  15. On the Select Report Options page, specify report distribution options, and then choose Next two times.

  16. On the Complete the Wizard page, choose Finish.

    SQL Server creates the maintenance plan and backs up the transaction logs for the databases that you specified based on the frequency that you specified.

            

You must back up the encryption key for Reporting Services as part of backing up your system. Without this encryption key, you will not be able to restore the reporting data. For a single-server deployment of TFS, you can back up the encryption key for SQL Server Reporting Services by using the Reporting Services Configuration tool. You could also choose to use the RSKEYMGMT command-line tool, but the configuration tool is simpler. For more information about RSKEYMGMT, see the following page on the Microsoft website: RSKEYMGMT Utility.
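For reference, a command-line backup with RSKEYMGMT might look like the following sketch; the file path and password are placeholders, and the utility's documentation lists the options that apply to your instance:

    rskeymgmt -e -f C:\Backups\rs_encryption_key.snk -p <StrongPassword>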

To back up the encryption key by using the Reporting Services Configuration tool

  1. On the server that is running Reporting Services, choose Start, point to All Programs, point to Microsoft SQL Server, point to Configuration Tools, and then choose Reporting Services Configuration Manager.

    The Report Server Installation Instance Selection dialog box opens.

  2. Specify the name of the data-tier server and the database instance, and then choose Connect.

  3. In the navigation bar on the left side, choose Encryption Keys, and then choose Backup.

    The Encryption Key Information dialog box opens.

  4. In File Location, specify the location where you want to store a copy of this key.

    You should consider storing this key on a separate computer from the one that is running Reporting Services.

  5. In Password, specify a password for the file.

  6. In Confirm Password, specify the password for the file again, and then choose OK.

        

            

Unlike Team Foundation Server, which uses the scheduling tools in SQL Server Management Studio, SharePoint Foundation 2010 has no built-in scheduling system for backups, and Microsoft specifically recommends against any scripting that marks or alters the SharePoint databases. To schedule backups so that they occur at the same time as the backups for TFS, SharePoint Foundation 2010 guidance recommends that you create a backup script by using Windows PowerShell, and then use Windows Task Scheduler to run that script at the same time as your scheduled backups of TFS databases. This will help you keep your database backups in sync.

Important

Before proceeding with the procedures below, you should review the latest guidance for SharePoint Foundation 2010. The procedures below are based on that guidance, but might have become out of date. Always follow the latest recommendations and guidance for the version of SharePoint Products you use when managing that aspect of your deployment. For more information, see the links included with each of the procedures in this section.

To create scripts to perform full and differential backups of the farm in SharePoint Foundation 2010

  1. Open a text editor, such as Notepad.

  2. In the text editor, type the following, where BackupFolder is the UNC path to a network share where you will back up your data:

     
    Backup-SPFarm -Directory BackupFolder -BackupMethod Full
    
    Tip

    There are a number of other parameters you could use when backing up the farm. For more information, see Back up a farm and Backup-SPFarm.

  3. Save the script as a .PS1 file. Consider giving the file an obvious name, such as “SharePointFarmFullBackupScript.PS1” or some meaningful equivalent.

  4. Open a new file, and create a second backup script, this time specifying a differential backup:

     
    Backup-SPFarm -Directory BackupFolder -BackupMethod Differential
    
  5. Save the script as a .PS1 file. Consider giving the file an obvious name, such as “SharePointFarmDiffBackupScript.PS1”.

    Important

    By default, PowerShell scripts will not execute on your system unless you have changed PowerShell’s execution policy to allow scripts to run. For more information, see Running Windows PowerShell Scripts.
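    One way to avoid loosening the machine-wide policy is to bypass it only for the scheduled task that runs the script. When you configure the task's action later in this topic, the fields might be filled in along these lines (the script path is hypothetical):

        Program/script:  powershell.exe
        Add arguments:   -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\SharePointFarmFullBackupScript.PS1"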

After you have created your scripts, you must schedule them to execute following the same schedule and frequency as the schedule you created for backing up Team Foundation Server databases. For example, if you scheduled differential backups to execute daily at 2 A.M., and full backups to occur on Sundays at 4 A.M., you will want to follow the exact same schedule for your farm backups.

To schedule your backups, you must use Windows Task Scheduler. In addition, you must configure the tasks to run using an account with sufficient permissions to read and write to the backup location, as well as permissions to execute backups in SharePoint Foundation 2010. Generally speaking, the simplest way to do this is to use a farm administrator account, but you can use any account as long as all of the following criteria are met:

  • The account specified in Windows Task Scheduler is an administrative account.

  • The account specified for the Central Administration application pool and the account you specify for running the task have read/write access to the backup location.

  • The backup location is accessible from the server running SharePoint Foundation 2010, SQL Server, and Team Foundation Server.

To schedule backups for the farm

  1. Choose Start, choose Administrative Tools, and then choose Task Scheduler.

  2. In the Actions pane, choose Create Task.

  3. On the General tab, in Name, specify a name for this task, such as “Full Farm Backup.” In Security options, specify the user account under which to run the task if it is not the account you are using. Then choose Run whether user is logged on or not, and select the Run with highest privileges check box.

  4. On the Actions tab, choose New.

    In the New Action window, in Action, choose Start a program. In Program/script, specify the full path and file name of the full farm backup .PS1 script you created, and then choose OK.

  5. On the Triggers tab, choose New.

    In the New Trigger window, in Settings, specify the schedule for performing the full backup of the farm. Make sure that this schedule exactly matches the schedule for full backups of the Team Foundation Server databases, including the recurrence schedule, and then choose OK.

  6. Review all the information in the tabs, and then choose OK to create the task for the full backup for the farm.

  7. In the Actions pane, choose Create Task.

  8. On the General tab, in Name, specify a name for this task, such as “Differential Farm Backup.” In Security options, specify the user account under which to run the task if it is not the account you are using, choose Run whether user is logged on or not, and select the Run with highest privileges check box.

  9. On the Actions tab, choose New.

    In the New Action window, in Action, choose Start a program. In Program/script, specify the full path and file name of the differential farm backup .PS1 script you created, and then choose OK.

  10. On the Triggers tab, choose New.

    In the New Trigger window, in Settings, specify the schedule for performing the differential backup of the farm. Make sure that this schedule exactly matches the schedule for differential backups of the Team Foundation Server databases, including the recurrence schedule, and then choose OK.

  11. Review all the information in the tabs, and then choose OK to create the task for the differential backup for the farm.

  12. In Active Tasks, refresh the list and make sure that your new tasks are scheduled appropriately, and then close Task Scheduler. For more information about creating and scheduling tasks in Task Scheduler, see Task Scheduler How To.


        

            

If you use Visual Studio Lab Management in your deployment of Team Foundation Server, you must also back up each machine and component that Lab Management uses. The hosts for the virtual machines and the SCVMM library servers are separate physical computers that are not backed up by default. You must specifically include them when you plan your backup and restoration strategies. The following table summarizes what you should back up whenever you back up Team Foundation Server.

 

Machine: Server that is running System Center Virtual Machine Manager 2008 (SCVMM) R2
Components:

  • SQL Server database (user accounts, configuration data)

Machine: Physical host for the virtual machines
Components:

  • Virtual machines (VMs)

  • Templates

  • Host configuration data (virtual networks)

Machine: SCVMM library server
Components:

  • Virtual machines

  • Templates

  • Virtual hard disks (VHDs)

  • ISO images

The following list contains the tasks for backing up the additional machines for an installation of Lab Management. You must perform the tasks in the order shown, without skipping any tasks.

To back up the machines that are running any SCVMM components, you must be a member of the Backup Operators group on each machine.

 

Common Tasks

  1. Back up the server that is running System Center Virtual Machine Manager 2008 R2.

  2. Back up the library servers for SCVMM.

  3. Back up each physical host for the virtual machines.

See Also              


            

Tasks

Restore Data to the Same Location                           

Other Resources

Managing Data                           
Managing Team Foundation Server Data-Tier Servers                           
Managing Team Foundation Server                           
Back Up the Reporting Services Encryption Key                           

ADO.NET

Microsoft ADO.NET Step by Step

by Rebecca M. Riordan    ISBN: 0735612366

Microsoft Press © 2002 (512 pages)

Learn to use the ADO.NET model to expand on data-bound Windows and Web Forms, as well as how XML and ADO.NET intermingle.

Table of Contents

Microsoft ADO.NET Step by Step

Introduction

Part I – Getting Started with ADO.NET

Chapter 1 – Getting Started with ADO.NET

Part II – Data Providers

Chapter 2 – Creating Connections

Chapter 3 – Data Commands and the DataReader

Chapter 4 – The DataAdapter

Chapter 5 – Transaction Processing in ADO.NET

Part III – Manipulating Data

Chapter 6 – The DataSet

Chapter 7 – The DataTable

Chapter 8 – The DataView

Part IV – Using the ADO.NET Objects

Chapter 9 – Editing and Updating Data

Chapter 10 – ADO.NET Data-Binding in Windows Forms

Chapter 11 – Using ADO.NET in Windows Forms

Chapter 12 – Data-Binding in Web Forms

Chapter 13 – Using ADO.NET in Web Forms

Part V – ADO.NET and XML

Chapter 14 – Using the XML Designer

Chapter 15 – Reading and Writing XML

Chapter 16 – Using ADO in the .NET Framework

Index

List of Tables

List of Sidebars

Microsoft ADO.NET Step by Step

PUBLISHED BY

Microsoft Press

A Division of Microsoft Corporation

One Microsoft Way

Redmond, Washington 98052-6399

Copyright © 2002 by Rebecca M. Riordan

All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.


Library of Congress Cataloging-in-Publication Data

Riordan, Rebecca.

Microsoft ADO.NET Step by Step / Rebecca M. Riordan.

p. cm.

Includes index.

ISBN 0-7356-1236-6

1. Database design. 2. Object oriented programming (Computer science) 3. ActiveX. I. Title.

QA76.9.D26 R56 2002

005.75’85—dc21 2001054641

Printed and bound in the United States of America.

1 2 3 4 5 6 7 8 9 QWE 7 6 5 4 3 2

Distributed in Canada by Penguin Books Canada Limited.

A CIP catalogue record for this book is available from the British Library.

Microsoft Press books are available through booksellers and distributors worldwide. For further information about international editions, contact your local Microsoft Corporation office or contact Microsoft Press International directly at fax (425) 936-7329. Visit our Web site at http://www.microsoft.com/mspress. Send comments to mspinput@microsoft.com.

ActiveX, IntelliSense, Internet Explorer, Microsoft, Microsoft Press, the .NET logo, Visual Basic, Visual C#, and Visual Studio are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Other product and company names mentioned herein may be the trademarks of their respective owners.

The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Acquisitions Editor: Danielle Bird

Project Editor: Rebecca McKay

Body Part No. X08-05018

To my very dear friend, Stephen Jeffries

About the Author

Rebecca M. Riordan

With almost 20 years’ experience in software design, Rebecca M. Riordan has earned an international reputation as an analyst, systems architect, and designer of database and work-support systems.

She works as an independent consultant, providing systems design and consulting expertise to an international client base. In 1998, she was awarded MVP status by Microsoft in recognition of her work in Internet newsgroups. Microsoft ADO.NET Step by Step is her third book for Microsoft Press.

Rebecca currently resides in New Mexico. She can be reached at rebeccar@attglobal.net.

Introduction

Overview

ADO.NET is the data access component of Microsoft’s new .NET Framework. Microsoft bills ADO.NET as “an evolutionary improvement” over previous versions of ADO, a claim that has been hotly debated since its announcement. It is certainly true that the ADO.NET object model bears very little relationship to earlier versions of ADO.


In fact, whether you decide to love it or hate it, one fact about the .NET Framework seems undeniable: it levels the playing ground. Whether you’ve been at this computer game longer than you care to talk about or you’re still sorting out your heaps and stacks, learning the .NET Framework will require a major investment. We’re all beginners now.

So welcome to Microsoft ADO.NET Step by Step. Through the exercises in this book, I will introduce you to the ADO.NET object model, and you’ll learn how to use that model in developing data-bound Windows Forms and Web Forms. In later topics, we’ll look at how ADO.NET interacts with XML and how to access older versions of ADO from the .NET environment.

Since we’re all beginners, an exhaustive treatment would be, well, exhausting, so this book is necessarily limited in scope. My goal is to provide you with an understanding of the ADO.NET objects—what they are and how they work together. So fair warning: this book will not make you an expert in ADO.NET. (How I wish it were that simple!)

What this book will give you is a road map, a fundamental understanding of the environment, from which you will be able to build expertise. You’ll know what you need to do to start building data applications. The rest will come with time and experience. This book is a place to start.

Although I’ve pointed out language differences where they might be confusing, in order to keep the book within manageable proportions I’ve assumed that you are already familiar with Visual Basic .NET or Visual C# .NET. If you’re completely new to the .NET environment, you might want to start with Microsoft Visual Basic .NET Step by Step by Michael Halvorson (Microsoft Press, 2002) or Microsoft Visual C# .NET Step by Step by John Sharp and Jon Jagger (Microsoft Press, 2002), depending on your language of choice.

The exercises that include programming are provided in both Microsoft Visual Basic and Microsoft C#. The two versions are identical (except for the difference between the languages), so simply choose the exercise in the language of your choice and skip the other version.

Conventions and Features in This Book

You’ll save time by understanding, before you start the lessons, how this book displays instructions, keys to press, and so on. In addition, the book provides helpful features that you might want to use.

§ Numbered lists of steps (1, 2, and so on) indicate hands-on exercises. A rounded bullet indicates an exercise that has only one step.

§ Text that you are to type appears in bold.

§ Terms are displayed in italic the first time they are defined.

§ A plus sign (+) between two key names means that you must press those keys at the same time. For example, “Press Alt+Tab” means that you hold down the Alt key while you press Tab.

§ Notes labeled “tip” provide additional information or alternative methods for a step.

§ Notes labeled “important” alert you to essential information that you should check before continuing with the lesson.

§ Notes labeled “ADO” point out similarities and differences between ADO and ADO.NET.

§ Notes labeled “Roadmap” refer to places where topics are discussed in depth.

§ You can learn special techniques, background information, or features related to the information being discussed by reading the shaded sidebars that appear throughout the lessons. These sidebars often highlight difficult terminology or suggest future areas for exploration.

§ You can get a quick reminder of how to perform the tasks you learned by reading the Quick Reference at the end of a lesson.


Using the ADO.NET Step by Step CD-ROM

The Microsoft ADO.NET Step by Step CD-ROM inside the back cover contains practice

files that you’ll use as you complete the exercises in the book. By using the files, you

won’t need to waste time creating databases and entering sample data. Instead, you can

concentrate on how to use ADO.NET. With the files and the step-by-step instructions in

the lessons, you’ll also learn by doing, which is an easy and effective way to acquire and

remember new skills.

System Requirements

In order to complete the exercises in this book, you will need the following software:

§ Microsoft Windows 2000 or Microsoft Windows XP

§ Microsoft Visual Studio .NET

§ Microsoft SQL Server Desktop Engine (included with Visual Studio .NET)

or Microsoft SQL Server 2000

This book and practice files were tested primarily using Windows 2000 and Visual Studio

.NET Professional; however, other editions of Visual Studio .NET, such as Visual Basic

.NET Standard and Visual C# .NET Standard, should also work.

Since Windows XP Home Edition does not include Internet Information Services (IIS),

you won’t be able to create local ASP.NET Web applications (discussed in chapters 12

and 13) using Windows XP Home Edition. Windows 2000 and Windows XP Professional

do include IIS.

Installing the Practice Files

Follow these steps to install the practice files on your computer so that you can use them

with the exercises in this book.

1. Insert the CD in your CD-ROM drive.

A Start menu should appear automatically. If this menu does not appear,

double-click StartCD.exe at the root of the CD.

2. Click the Getting Started option.

3. Follow the instructions in the Getting Started document to install the

practice files and set up SQL Server 2000 or the Microsoft SQL Server

Desktop Engine (MSDE).

Using the Practice Files

The practice files contain the projects and completed solutions for the ADO.NET Step by

Step book. Folders marked ‘Finish’ contain working solutions. Folders marked ‘Start’

contain the files needed to perform the exercises in the book.

Uninstalling the Practice Files

Follow these steps to remove the practice files from your computer.

1. Insert the CD in your CD-ROM drive.

A Start menu should appear automatically. If this menu does not appear,

double-click StartCD.exe at the root of the CD.

2. Click the Uninstall Practice Files option.

3. Follow the steps in the Uninstall Practice Files document to remove

the practice files.

Need Help with the Practice Files?

Every effort has been made to ensure the accuracy of the book and the contents of this

CD-ROM. As corrections or changes are collected for this book, they will be placed on a

Web page and any errata will also be integrated into the Microsoft online Help tool

known as the Knowledge Base. To view the list of known corrections for this book, visit

the following page:

http://support.microsoft.com/support/misc/kblookup.asp?id=Q314759


To search the Knowledge Base and review your support options for the book or CD-ROM,

visit the Microsoft Press Support site:

http://www.microsoft.com/mspress/support/

If you have comments, questions, or ideas regarding the book or this CD-ROM, or

questions that are not answered by searching the Knowledge Base, please send them to

Microsoft Press via e-mail to:

mspinput@microsoft.com

or by postal mail to:

Microsoft Press

Attn: Microsoft ADO.NET Step by Step Editor

One Microsoft Way

Redmond, WA 98052-6399

Please note that product support is not offered through the above addresses.

Part I: Getting Started with ADO.NET

Chapter List

Chapter 1: Getting Started with ADO.NET

Chapter 1: Getting Started with ADO.NET

Overview

In this chapter, you’ll learn how to:

§ Identify the primary objects that make up Microsoft ADO.NET and how
they interact

§ Create Connection and DataAdapter objects by using the DataAdapter

Configuration Wizard

§ Automatically generate a DataSet

§ Bind control properties to a DataSet

§ Load data into a DataSet at run time

Like other components of the .NET Framework, ADO.NET consists of a set of objects

that interact to provide the required functionality. Unfortunately, this can make learning to

use the object model frustrating—you feel like you need to learn all of it before you can

understand any of it.

The solution to this problem is to start by building a conceptual framework. In other

words, before you try to learn the details of how any particular object functions, you need

to have a general understanding of what each object does and how the objects interact.

That’s what we’ll do in this chapter. We’ll start by looking at the main ADO.NET objects

and how they work together to get data from a physical data store, to the user, and back

again. Then, just to whet your appetite, we’ll work through building a set of objects and

binding them to a simple data form.

On the Fundamental Interconnectedness of All Things

In later chapters in this section, we’ll examine each object in the ADO.NET object model

in turn. At least in theory. In reality, because the objects are so closely interlinked, it’s

impossible to look at any single object in isolation.


Roadmap A roadmap note like this will point you to the discussion of a

property or method that hasn’t yet been introduced.

Where it’s necessary to use a method or property that we haven’t yet examined, I’ll use

roadmap notes, like the one in the margin next to this paragraph, to point you to the

chapter where they are discussed.

The ADO.NET Object Model

The figure below shows a simplified view of the primary objects in the ADO.NET object

model. Of course, the reality of the class library is more complicated, but we’ll deal with

the intricacies later. For now, it’s enough to understand what the primary objects are and

how they typically interact.

The ADO.NET classes are divided into two components: the Data Providers (sometimes

called Managed Providers), which handle communication with a physical data store, and

the DataSet, which represents the actual data. Either component can communicate with

data consumers such as WebForms and WinForms.

Data Providers

The Data Provider components are specific to a data source. The .NET Framework

includes two Data Providers: a generic provider that can communicate with any OLE DB

data source, and a SQL Server provider that has been optimized for Microsoft SQL

Server versions 7.0 and later. Data Providers for other databases such as Oracle and

DB2 are expected to become available, or you can write your own. (You may be relieved

to know that we won’t be covering the creation of Data Providers in this book.)

The two Data Providers included in the .NET Framework contain the same objects,

although their names and some of their properties and methods are different. To

illustrate, the SQL Server provider objects begin with SQL (for example,

SQLConnection), while the OLE DB objects begin with OleDB (for example,

OleDbConnection).

The Connection object represents the physical connection to a data source. Its

properties determine the data provider (in the case of the OLE DB Data Provider), the

data source and database to which it will connect, and the string to be used during

connecting. Its methods are fairly simple: You can open and close the connection,

change the database, and manage transactions.

The Command object represents a SQL statement or stored procedure to be executed at

the data source. Command objects can be created and executed independently against

a Connection object, and they are used by DataAdapter objects to handle

communications from a DataSet back to a data source. Command objects can support

SQL statements and stored procedures that return single values, one or more sets of

rows, or no values at all.


A DataReader is a fast, low-overhead object for obtaining a forward-only, read-only

stream of data from a data source. They cannot be created directly in code; they are

created only by calling the ExecuteReader method of a Command.

The DataAdapter is functionally the most complex object in a Data Provider. It provides

the bridge between a Connection and a DataSet. The DataAdapter contains four

Command objects: the SelectCommand, UpdateCommand, InsertCommand, and

DeleteCommand. The DataAdapter uses the SelectCommand to fill a DataSet and uses

the remaining three commands to transmit changes back to the data source, as required.
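
To make the interplay concrete, here is a minimal C# sketch of the flow just described. It is not one of the book’s exercises; the connection string, server name, and query are placeholders you would adapt to your own environment.

// A minimal sketch showing the typical flow: a Connection, a DataAdapter
// with a SelectCommand, and a DataSet. Names and values are illustrative only.
using System;
using System.Data;
using System.Data.SqlClient;

class ProviderSketch
{
    static void Main()
    {
        SqlConnection cn = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT EmployeeID, LastName, FirstName FROM Employees", cn);

        DataSet ds = new DataSet();
        da.Fill(ds, "Employees");   // opens and closes the connection as needed

        // Changes made to ds are later sent back through the adapter's
        // Insert/Update/DeleteCommand objects by calling da.Update(ds, "Employees").
        Console.WriteLine(ds.Tables["Employees"].Rows.Count);
    }
}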

ADO   In functional terms, the Connection and Command objects are roughly equivalent to their ADO counterparts (the major difference being the lack of support for server-side cursors), while the DataReader functions like a firehose cursor. The DataAdapter and DataSet have no real equivalent in ADO.

DataSets

The DataSet is a memory-resident representation of data. Its structure is shown in the

figure below. The DataSet can be considered a somewhat simplified relational database,

consisting of tables and their relations. It’s important to understand, however, that the

DataSet is always disconnected from the data source—it doesn’t “know” where the data

it contains came from, and in fact, it can contain data from multiple sources.

The DataSet is composed of two primary objects: the DataTableCollection and the

DataRelationCollection. The DataTableCollection contains zero or more DataTable

objects, which are in turn made up of three collections: Columns, Rows, and Constraints.

The DataRelationCollection contains zero or more DataRelations.

The DataTable’s Columns collection defines the columns that compose the DataTable.

In addition to ColumnName and DataType properties, a DataColumn’s properties allow

you to define such things as whether or not it allows nulls (AllowDBNull), its maximum

length (MaxLength), and even an expression that is used to calculate its value

(Expression).

The DataTable’s Rows collection, which may be empty, contains the actual data as

defined by the Columns collection. For each Row, the DataTable maintains its original,

current, and proposed values. As we’ll see, this ability greatly simplifies certain kinds of

programming tasks.


ADO The ADO.NET DataTable provides essentially the same

functionality as the ADO Recordset object, although it obviously

plays a very different role in the object model.

The DataTable’s Constraints collection contains zero or more Constraints. Just as in a

relational database, Constraints are used to maintain the integrity of the data. ADO.NET

supports two types of constraints: ForeignKeyConstraints, which maintain relational

integrity (that is, they ensure that a child row cannot be orphaned), and

UniqueConstraints, which maintain data integrity (that is, they ensure that duplicate rows

cannot be added to the table). In addition, the PrimaryKey property of the DataTable

ensures entity integrity (that is, it enforces the uniqueness of each row).

Finally, the DataSet’s DataRelationCollection contains zero or more DataRelations.

DataRelations provide a simple programmatic interface for navigating from a master row

in one table to the related rows in another. For example, given an Order, a DataRelation

allows you to easily extract the related OrderDetails rows. (Note, however, that the

DataRelation itself doesn’t enforce relational integrity. A Constraint is used for that.)

Binding Data to a Simple Windows Form

The process of connecting data to a form is called data binding. Data binding can be

performed in code, but the Microsoft Visual Studio .NET designers make the process

very simple. In this chapter, we’ll use the designers and the wizards to quickly create a

simple data bound Windows form.

Important If you have not yet installed this book’s practice files, work

through “Installing and Using the Practice Files” in the

Introduction, and then return to this chapter.

Adding a Connection and DataAdapter to a Form

Roadmap We’ll examine the Connection object in Chapter 2 and the

DataAdapter in Chapter 4.

The first step in binding data is to create the Data Provider objects. Visual Studio

provides a DataAdapter Configuration Wizard to make this process simple. Once the

DataAdapter has been added, you can check that its configuration is correct by using the

DataAdapter Preview window within Visual Studio.

Add a Connection to a Windows Form

1. Open the EmployeesForm project from the Visual Studio Start Page.

2. Double-click Employees.vb (or Employees.cs if you’re using C#) in the

Solution Explorer to open the form.

Visual Studio displays the form in the form designer.

3. Drag a SQLDataAdapter onto the form from the Data tab of the

Toolbox.

Visual Studio displays the first page of the DataAdapter Configuration Wizard.


4. Click Next.

The DataAdapter Configuration Wizard displays a page asking you to choose

a connection.

5. Click New Connection.

The Data Link Properties dialog box opens.


6. Specify the name of your server, the appropriate logon information,

select the Northwind database, and then click Test Connection.

The DataAdapter Configuration Wizard displays a message indicating that the

connection was successful.

Tip If you’re unsure how to complete step 6, check with your system

administrator.

7. Click OK to close the message, click OK to close the Data Link

Properties dialog box, and then click Next to display the next page of

the DataAdapter Configuration Wizard.

The DataAdapter Configuration Wizard displays a page requesting that you

choose a query type.


8. Verify that the Use SQL statements option is selected, and then click

Next.

The DataAdapter Configuration Wizard displays a page requesting the SQL

statement(s) to be used.

9. Click Query Builder.

The DataAdapter Configuration Wizard opens the Query Builder and displays

the Add Table dialog box.


10. Select the Employees table, click Add, and then click Close.

The Add Table dialog box closes, and the Employees table is added to the

Query Builder.

11. Add the following fields to the query by selecting the check box next to

the field name in the top pane: EmployeeID, LastName, FirstName,

Title, TitleOfCourtesy, HireDate, Notes.

The Query Builder creates the SQL command.


12. Click OK to close the Query Builder, and then click Next.

The DataAdapter Configuration Wizard displays a page showing the results of

adding the Connection and DataAdapter objects to the form.

13. Click Finish to close the DataAdapter Configuration Wizard.

The DataAdapter Configuration Wizard creates and configures a

SQLDataAdapter and a SQLConnection, and then adds them to the

Component Designer.

Creating DataSets

Roadmap We’ll examine the DataSet in Chapter 6.


The Connection and DataAdapter objects handle the physical communication with the

data store, but you must also create a memory-resident representation of the actual data

that will be bound to the form. You can bind a control to almost any structure that

contains data, including arrays and collections, but you’ll typically use a DataSet.

As with the Data Provider objects, Visual Studio provides a mechanism for automating

this process. In fact, it can be done with a simple menu choice, although because Visual

Studio exposes the code it creates, you can further modify the basic DataSet

functionality that Visual Studio provides.

Create a DataSet

1. On the Data menu, choose Generate Dataset.

The Generate Dataset dialog box opens.

2. In the New text box, type dsEmployees.


3. Click OK.

Visual Studio creates the DataSet class and adds an instance of it to the

bottom pane of the forms designer.

Simple Binding Controls to a DataSet

The .NET Framework supports two kinds of binding: simple and complex. Simple binding

occurs when a single data element, such as a date, is bound to a control. Complex

binding occurs when a control is bound to multiple data values, for example, binding a

list box to a DataSet that contains a list of Order Numbers.

Roadmap We’ll examine simple and complex data binding in more

detail in Chapters 10 and 11.

Almost any property of a control can support simple binding, but only a subset of

Windows and WebForms controls (such as DataGrids and ListBoxes) can support

complex binding.

Bind the Text Property of a Control to a DataSet

1. Click the txtTitle text box in the forms designer to select it.

2. Click the plus sign next to DataBindings to expand the DataBindings

properties.

3. Click the drop-down arrow for the Text property.

Visual Studio displays a list of available data sources.

4. In the list of available data sources for the Text property, click the plus

sign next to the DsEmployees1 data source, and then click the plus

sign next to the Employees DataTable.


5. Click the TitleOfCourtesy column to select it.

6. Repeat steps 1 through 5 to bind the Text property of the remaining

controls to the columns of the Employees DataTable, as shown in the

following table.

Control          DataTable Column
lblEmployeeID    EmployeeID
txtGivenName     FirstName
txtSurname       LastName
txtHireDate      HireDate
txtPosition      Title
txtNotes         Notes

Loading Data into the DataSet

We now have all the components in place for manipulating the data from our data

source, but we have one task remaining: We must actually load the data into the

DataSet.

If you’re used to working with data bound forms in environments such as Microsoft

Access, it may come as a surprise that you need to do this manually. Remember,

however, that the ADO.NET architecture has been designed to operate without a

permanent connection to the database. In a disconnected environment, it’s appropriate,


and indeed necessary, that the management of the connection be under programmatic

control.

Roadmap The DataAdapter’s Fill method is discussed in Chapter 4.

The DataAdapter’s Fill method is used to load data into the DataSet. The DataAdapter

provides several versions of the Fill method. The simplest version takes the name of a

DataSet as a parameter, and that’s the one we’ll use in the exercise below.

Load Data into the DataSet

Visual Basic .NET

1. Press F7 to view the code for the form.

2. Expand the region labeled “Windows Form Designer generated code”

and navigate to the New Sub.

3. Add the following line of code just before the end of the procedure:

SqlDataAdapter1.Fill(DsEmployees1)

Roadmap The DataAdapter’s Fill method is discussed in Chapter 4.

This line calls the DataAdapter’s Fill method, passing the name of the

DataSet to be filled.

4. Press F5 to build and run the program.

Visual Studio displays the form with the first row displayed.

5. Admire your data bound form for a few minutes (see, that wasn’t so

hard!), and then close the form.

Visual C# .NET

1. Press F7 to view the code for the form.

2. Add the following line of code to the end of the Employees procedure:

sqlDataAdapter1.Fill(dsEmployees1);

Roadmap The DataAdapter’s Fill method is discussed in Chapter 4.

This line calls the DataAdapter’s Fill method, passing the name of the

DataSet to be filled.

3. Press F5 to build and run the program.

Visual Studio displays the form with the first row displayed.


4. Admire your data bound form for a few minutes (see, that wasn’t so

hard!), and then close the form.

Chapter 1 Quick Reference

To: Add a Connection and DataAdapter to a form by using the DataAdapter Configuration Wizard
Do this: Drag a DataAdapter object onto the form and follow the wizard instructions

To: Use Visual Studio to automatically generate a typed DataSet
Do this: Select Generate Dataset from the Data menu, complete the Generate Dataset dialog box as required, and then click OK

To: Simple bind properties of a control to a data source
Do this: In the Properties window DataBindings section, select the data source, DataTable, and column

To: Load data into a DataSet
Do this: Use the Fill method of the DataAdapter. For example: myDataAdapter.Fill(myDataSet)

Part II: Data Providers

Chapter 2: Creating Connections

Chapter 3: Data Commands and the DataReader

Chapter 4: The DataAdapter

Chapter 5: Transaction Processing in ADO.NET

Chapter 2: Creating Connections

Overview

In this chapter, you’ll learn how to:

§ Add an instance of a Server Explorer Connection to a form

§ Create a Connection using code

§ Use Connection properties

§ Use an intermediary variable to reference multiple types of Connections

§ Bind Connection properties to form controls

§ Open and close Connections


§ Respond to a Connection.StateChange event

In the previous chapter, we took a brief tour through the ADO.NET object model. In this

chapter, we’ll begin to examine the objects in detail, starting with the lowest level object,

the Connection.

Understanding Connections

Connections are responsible for handling the physical communication between a data

store and a .NET application. Because the Connection object is part of a Data Provider,

each Data Provider implements its own version. The two Data Providers supported by

the .NET Framework implement the OleDbConnection in the System.Data.OleDB

namespace and the SqlConnection in the System.Data.SqlClient namespace,

respectively.

Note It’s important to understand that if you’re using a Connection

object implemented by another Data Provider, the details of the

implementation may vary from those described here.

The OleDbConnection, not surprisingly, uses OLE DB and can be used with any OLE DB

provider, including Microsoft SQL Server. The SqlConnection goes directly to SQL

Server without going through the OLE DB provider and so is more efficient.

ADO   Since ADO.NET merges the ADO object model with OLE DB, it is rarely necessary to go directly to OLE DB for performance reasons. You might still need to use OLE DB directly if you need specific functionality that isn't exposed by ADO.NET, but again, these situations are likely to be rarer than when using ADO.

Creating Connections

In the previous chapter, we created a Connection object by using the DataAdapter

Configuration Wizard. The Data Form Wizard, accessed by clicking Add Windows Form

on the Project menu, also creates a Connection automatically. In this chapter, we’ll look

at several other methods for creating Connections in Microsoft Visual Studio .NET.

Design Time Connections

Visual Studio’s Server Explorer provides the ability, at design time, to view and maintain

connections to a number of different system services, including event logs, message

queues, and, most important for our purposes, data connections.

Important If you have not yet installed this book’s practice files, work

through ‘Installing and Using the Practice Files’ in the

Introduction and then return to this chapter.

Add a Design Time Connection to the Server Explorer

1. Open the Connection project from the Visual Studio start page or from

the Project menu.

2. Double-click ConnectionProperties.vb (or ConnectionProperties.cs, if

you’re using C#) in the Solution Explorer to open the form.

Visual Studio displays the form in the form designer.


3. Open the Server Explorer.

4. Click the Connect to Database button.

Visual Studio displays the Data Link Properties dialog box.

Tip You can also display the Data Link Properties dialog box by choosing

Connect to Database on the Tools menu.

5. Click the Provider tab and then select Microsoft Jet 4.0 OLE DB

Provider.


6. Click Next.

Visual Studio displays the Connection tab of the dialog box.

7. Click the ellipsis button after Select or enter a database name,

navigate to the folder containing the sample files, and then select the

nwind sample database.

8. Click Open.

Visual Studio creates a Connection string for the database.


9. Click OK.

Visual Studio adds the Connection to the Server Explorer.

10. Right-click the Connection in the Server Explorer, click Rename from

the context menu, and then rename the Connection Access nwind.


Database References

In addition to Database Connections in the Server Explorer, Visual Studio also

supports Database References. Database References are set as part of a Database

Project, which is a special type of project used to store SQL scripts, Data Commands,

and Data Connections.

Database References are created in the Solution Explorer (rather than the Server

Explorer) and, unlike Database Connections defined in the Server Explorer, they are

stored along with the project.

Data connections defined through the Server Explorer become part of your Visual

Studio environment—they will persist as you open and close projects. Database

references, on the other hand, exist as part of a specific project and are only available

as part of the project.

Design time connections aren’t automatically included in any project, but you can drag a

design time connection from the Server Explorer to a form, and Visual Studio will create

a pre-configured Connection object for you.


Add an Instance of a Design Time Connection to a Form

§ Select the Access nwind Connection in the Server Explorer and drag it

onto the Connection Properties form.

Visual Studio adds a pre-configured OleDbConnection to the Component

Designer.

Creating a Connection at Run Time

Using Visual Studio to create form-level Connections is by far the easiest method, but if

you need a Connection that isn’t attached to a form, you can create one at run time in

code.

Note You wouldn’t ordinarily create a form-level Connection object in

code because the Visual Studio designers are easier and just as

effective.

The Connection object provides two overloaded versions of its constructor, giving you

the option of passing in the ConnectionString, as shown in Table 2-1.

Table 2-1: Connection Constructors

New(): Creates a Connection with the ConnectionString property set to an empty string
New(ConnectionString): Creates a Connection with the ConnectionString property specified

The ConnectionString is used by the Connection object to connect to the data source.

We’ll explore it in detail in the next section of this chapter.

Create a Connection in Code

Visual Basic .NET

1. Display the code for the ConnectionProperties form by pressing F7.

2. Add the following lines after the Inherits statement:

Friend WithEvents SqlDbConnection1 As New _
    System.Data.SqlClient.SqlConnection()

This code creates the new Connection object using the default values.

Visual C# .NET

1. Display the code for the ConnectionProperties form by pressing F7.

2. Add the following lines after the opening bracket of the class

declaration:

internal System.Data.SqlClient.SqlConnection SqlDbConnection1;

This code creates the new Connection object. (For the time being, ignore the

warning that the variable is never assigned to.)

Using Connection Properties

The significant properties of the OleDbConnection and SqlConnection objects are

shown in Table 2-2 and Table 2-3, respectively.

Table 2-2: OleDbConnection Properties

ConnectionString: The string used to connect to the data source when the Open method is executed (default: empty)
ConnectionTimeout: The maximum time the Connection object will continue attempting to make the connection before throwing an exception (default: 15 seconds)
Database: The name of the database to be opened once a connection is opened (default: empty)
DataSource: The location and file containing the database (default: empty)
Provider: The name of the OLE DB Data Provider (default: empty)
ServerVersion: The version of the server, as provided by the OLE DB Data Provider (default: empty)
State: A ConnectionState value indicating the current state of the Connection (default: Closed)

Table 2-3: SqlConnection Properties

ConnectionString: The string used to connect to the data source when the Open method is executed (default: empty)
ConnectionTimeout: The maximum time the Connection object will continue attempting to make the connection before throwing an exception (default: 15 seconds)
Database: The name of the database to be opened once a connection is opened (default: empty)
DataSource: The location and file containing the database (default: empty)
PacketSize: The size of network packets used to communicate with SQL Server (default: 8192 bytes)
ServerVersion: The version of SQL Server being used (default: empty)
State: A ConnectionState value indicating the current state of the Connection (default: Closed)
WorkStationID: A string identifying the database client, or, if that is not specified, the name of the workstation (default: empty)

As you can see, the two versions of the Connection object expose a slightly different set

of properties: The SqlConnection doesn’t have a Provider property, and the

OleDbConnection doesn’t expose PacketSize or WorkStationID. To make matters worse,

not all OLE DB Data Providers support all of the OleDbConnection properties, and if

you’re working with a custom Data Provider, all bets are off.

What this means in real terms is that we still can’t quite write code that is completely data

source-independent unless we’re prepared to give up the optimization of specific Data

Providers. However, as we’ll see, the problem isn’t as bad as it might at first seem, since

the .NET Framework provides a number of ways to accommodate run-time configuration.

Rather more tedious to deal with are the different names of the objects, but using an

intermediate variable can minimize the impact, as we’ll see later in this chapter.

The ConnectionString Property

The ConnectionString is the most important property of any Connection object. In fact,

the remaining properties are read-only and set by the Connection based on the value

provided for the ConnectionString.

All ConnectionStrings have the same format. They consist of a set of keywords and

values, with the pairs separated by semicolons, and the whole thing is delimited by either

single or double quotes:

"keyword = value;keyword = value;keyword = value"

Keyword names are case-insensitive, but the values may not be, depending on the data

source. The use of single or double quotes follows the normal rules for strings. For

example, if the database name is Becca’s Data, then the ConnectionString must be

delimited by double quotes: “Database=Becca’s Data”. ‘Database = Becca’s Data’ would

cause an error.

If you use the same keyword multiple times, the last instance will be used. For example,

given the ConnectionString “database=Becca’s Data; database=Northwind”, the initial

database will be set to Northwind. The use of multiple instances is perfectly legal; no

syntax error will be generated.

ADO Unlike ADO, the ConnectionString returned by the .NET

Framework is the same as the user-set string, with the exception

that the user name and password are returned only if Persist

Security Info is set to true (it is false by default).


Unfortunately, the format of the ConnectionString is the easy part. It’s determining the

contents that can be difficult because it will always be unique to the Data Provider. You

can always cheat (a little) by creating a design time connection using the Data Link

Properties dialog box, and then copying the values.
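
To illustrate how the contents differ by provider (these values are examples only; your server name, file path, and security settings will differ), here are typical ConnectionStrings for the two built-in Data Providers, set from C#:

// Hypothetical connection strings; substitute your own server, database, and file path.
using System.Data.OleDb;
using System.Data.SqlClient;

class ConnectionStringSketch
{
    static void Main()
    {
        // SQL Server .NET Data Provider: no Provider keyword is used.
        SqlConnection sqlCn = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");

        // OLE DB .NET Data Provider: the Provider keyword selects the OLE DB provider.
        OleDbConnection oleCn = new OleDbConnection(
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\Samples\\nwind.mdb");
    }
}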

The ConnectionString can only be set when the Connection is closed. When it is set, the

Connection object will check the syntax of the string and then set the remaining

properties (which, you’ll remember, are read-only). The ConnectionString is fully

validated when the Connection is opened. If the Connection detects an invalid or

unsupported property, it will generate an exception (either an OleDbException or a

SqlException, depending on the object being used).

Setting a ConnectionString Property

In this exercise, we’ll set the ConnectionString for the SqlDbConnection that we created

in the previous exercise. The ConnectionString that your system requires will be different

from the one in my installation. (I have SQL Server installed locally, and my machine

name is BUNNY, for example.)

Fortunately, the DataAdapter Configuration Wizard in Chapter 1 created a design time

Connection for you. If you select that connection in the Server Explorer, you can see the

values in the Properties window. In fact, you can copy and paste the entire

ConnectionString from the Properties window if you want. (If you didn’t do the exercise in

Chapter 1, you can create a design time connection by using the technique described in

the Add a Design Time Connection exercise in this chapter.)

Set a ConnectionString Property

Visual Basic .NET

1. Expand the region labeled “Windows Form Designer generated code”

and navigate to the New Sub.

2. Add the following line to the procedure after the InitializeComponent

call, filling in the ConnectionString values required for your

implementation:

Me.SqlDbConnection1.ConnectionString = "<<add your ConnectionString here>>"

Visual C# .NET

1. Scroll down to the ConnectionProperties Sub.

2. Add the following lines to the procedure after the InitializeComponent

call, filling in the ConnectionString values required for your

implementation:

this.SqlDbConnection1 = new System.Data.SqlClient.SqlConnection();
this.SqlDbConnection1.ConnectionString = "<<add your ConnectionString here>>";

Using Other Connection Properties

With the Connection objects in place, we can now add the code to display the

Connection properties on the sample form. But first, we need to use a little bit of
object-oriented sleight of hand in order to accommodate the two different types of objects.

One method would be to write conditional code. In Visual Basic, this would look like:

If Me.rbOleDB.Checked Then
    Me.txtConnectionString.Text = Me.OleDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.OleDbConnection1.Database
    Me.txtTimeOut.Text = Me.OleDbConnection1.ConnectionTimeout
Else
    Me.txtConnectionString.Text = Me.SqlDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.SqlDbConnection1.Database
    Me.txtTimeOut.Text = Me.SqlDbConnection1.ConnectionTimeout
End If

Another option would be to use compiler constants to conditionally compile code. Again,
in Visual Basic:

#Const SqlVersion = True

#If SqlVersion Then
    Me.txtConnectionString.Text = Me.SqlDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.SqlDbConnection1.Database
    Me.txtTimeOut.Text = Me.SqlDbConnection1.ConnectionTimeout
#Else
    Me.txtConnectionString.Text = Me.OleDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.OleDbConnection1.Database
    Me.txtTimeOut.Text = Me.OleDbConnection1.ConnectionTimeout
#End If

But either option requires a lot of typing, in a lot of places, and can become a

maintenance nightmare. If you only need to access the ConnectionString, Database, and

TimeOut properties (and these are the most common), there’s an easier way.

Connection objects, no matter the Data Provider to which they belong, must implement

the IDbConnection interface, so by declaring a variable as an IDbConnection, we can

use it as an intermediary to access a few of the shared properties.

Create an Intermediary Variable

Visual Basic .NET

1. Declare the variable by adding the following line of code at the

beginning of the class module, under the Connection declarations

we added previously:

Dim myConnection As System.Data.IDbConnection

2. Add procedures to set the value of the myConnection variable when

the user changes their choice in the Connection Type group box. Do

that by using the CheckedChanged event of the two Radio Buttons.

Select the rbOleDB control in the Class Name box of the editor and the

CheckedChanged event in the Method Name box.

Visual Studio adds the CheckedChanged event handler template to the class.

3. Add the following assignment statement to the procedure:

myConnection = Me.OleDbConnection1

4. Repeat steps 2 and 3 for the rbSql radio button, substituting the

SqlDbConnection object:

myConnection = Me.SqlDbConnection1


Visual C# .NET

1. Declare the variable by adding the following line of code at the

beginning of the class module, under the Connection declaration we

added previously:

private System.Data.IDbConnection myConnection;

2. Add procedures to set the value of the myConnection variable when

the user changes their choice in the Connection Type group box. Do

that by using the CheckedChanged event of the two radio buttons.

Add the following event handlers to the code window below the Dispose

procedure:

private void rbOleDB_CheckChanged(object sender, EventArgs e)

{

myConnection = this.oleDbConnection1;

}

private void rbSQL_CheckChanged (object sender, EventArgs e)

{

myConnection = this.SqlDbConnection1;

}

3. Connect the event handlers to the actual radio button events. Add the

following code to the end of the ConnectionProperties sub:

this.rbOleDB.CheckedChanged += new EventHandler(this.rbOleDB_CheckChanged);
this.rbSQL.CheckedChanged += new EventHandler(this.rbSQL_CheckChanged);

Binding Connection Properties to Form Controls

Now that we have the intermediary variable in place, we can add the code to display the

Connection (or rather, the IDbConnection properties) in the control:

Bind Connection Properties to Form Controls

Visual Basic .NET

1. Add the following procedure to the class module:

Private Sub RefreshValues()
    Me.txtConnectionString.Text = Me.myConnection.ConnectionString
    Me.txtDatabase.Text = Me.myConnection.Database
    Me.txtTimeOut.Text = Me.myConnection.ConnectionTimeout
End Sub

2. Add a call to the RefreshValues procedure to the end of each of the CheckedChanged event handlers.

3. Save and run the program by pressing F5. Choose each of the Connections in turn to confirm that their properties are displayed in the text boxes.

4. Close the application.

Visual C# .NET

1. Add the following procedure to the class module below the CheckChanged event handlers:

private void RefreshValues()
{
    this.txtConnectionString.Text = this.myConnection.ConnectionString;
    this.txtDatabase.Text = this.myConnection.Database;
    this.txtTimeOut.Text = this.myConnection.ConnectionTimeout.ToString();
}

2. Add a call to the RefreshValues procedure to the end of each of the CheckedChanged event handlers.

3. Save and run the program by pressing F5. Choose each of the Connections in turn to confirm that their properties are displayed in the text boxes.

4. Close the application.

Using Dynamic Properties

Another way to handle ConnectionString configurations is to use .NET Framework

dynamic properties. When an application is deployed, dynamic properties are stored in

an external configuration file, allowing them to be easily changed.
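
As a sketch of the idea, assuming the ConnectionString has been stored under an appSettings key in the application's configuration file (the key name below is purely illustrative), the value can be read back at run time like this:

// Reads a connection string from the application configuration file.
// The key "cnNorthwind.ConnectionString" is only an example of the kind of
// entry that dynamic properties generate; check your own .config file.
using System.Configuration;
using System.Data.SqlClient;

class ConfigSketch
{
    static SqlConnection CreateConnection()
    {
        string cnStr =
            ConfigurationSettings.AppSettings["cnNorthwind.ConnectionString"];
        return new SqlConnection(cnStr);
    }
}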


Connection Methods

Both the SqlConnection and OleDbConnection objects expose the same set of methods,

as shown in Table 2-4.

Table 2-4: Connection Methods

BeginTransaction: Begins a database transaction
ChangeDatabase: Changes the current database on an open Connection
Close: Closes the connection to the data source
CreateCommand: Creates and returns a Data Command associated with the Connection
Open: Establishes a connection to the data source

Roadmap We’ll examine transaction processing in Chapter 5.

The Connection methods that you will use most often are Open and Close, which do

exactly what you would expect them to—they open and close the connection. The

BeginTransaction method begins transaction processing for a Connection, as we’ll see in

Chapter 5.

Roadmap We’ll examine Data Commands in Chapter 3.

The CreateCommand method can be used to create an ADO.NET Data Command

object. We’ll examine this method in Chapter 3.

Opening and Closing Connections

The Open and Close methods are invoked automatically by the two objects that use a

Connection, the DataAdapter and Data Command. You can also invoke them explicitly in

code, if required.

Roadmap We’ll examine the DataAdapter in Chapter 4.

If the Open method is invoked on a Connection by the DataAdapter or a Data Command,

these objects will leave the Connection in the state in which they found it. If the

Connection was open when a DataAdapter.Fill method is invoked, for example, it will

remain open when the Fill operation is complete. On the other hand, if the Connection is

closed when the Fill method is invoked, the DataAdapter will close it upon completion.

If you invoke the Open method explicitly, the data source connection will remain open

until it is explicitly closed. It will not be closed automatically, even if the Connection

object goes out of scope.


Important You must always explicitly invoke a Close method when you

have finished using a Connection object, and for scalability

and performance purposes, you should call Close as soon as

possible after you’ve completed the operations on the

Connection.
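
One common way to honor this rule, shown here as a C# sketch rather than as part of the chapter's exercises, is to wrap the work in a try/finally block so that Close runs even if an exception is thrown:

// Ensures the Connection is closed even if the command fails.
// myConnection is assumed to be an already-configured IDbConnection.
using System.Data;

class CloseSketch
{
    static void RunCommand(IDbConnection myConnection, string sql)
    {
        myConnection.Open();
        try
        {
            IDbCommand cmd = myConnection.CreateCommand();
            cmd.CommandText = sql;
            cmd.ExecuteNonQuery();
        }
        finally
        {
            myConnection.Close();   // always runs, returning the connection to the pool
        }
    }
}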

Connection Pooling

Although it’s easiest to think of Open and Close methods as discrete operations, in fact

the .NET Framework pools connections to improve performance. The specifics of the

connection pooling are determined by the Data Provider.

The OLE DB Data Provider automatically uses OLE DB connection pooling, and you have
no programmatic control over the process from ADO.NET. The SQL Server Data Provider
uses implicit pooling by default, based on an exact match in the connection string, and it
supports additional keywords in the ConnectionString (such as Pooling, Min Pool Size,
and Max Pool Size) to control pooling. See online help for more details.

Open and Close a Connection

Visual Basic .NET

1. Select the btnTest control in the Class Name combo box of the editor

and the Click event in the Method Name combo box.

Visual Studio adds the click event handler template.

2. Add the following lines to the procedure to open the connection,

display its status in a message box, and then close the connection:

myConnection.Open()
MessageBox.Show(Me.myConnection.State.ToString)
myConnection.Close()

3. Press F5 to save and run the application.

4. Change the Connection Type, and then click the Test button.

The application displays the Connection state.

5. Close the application.

Visual C# .NET

1. Add the following procedure to the code window to open the

connection, display its status in a message box, and then close the

connection:


private void btnTest_Click(object sender, System.EventArgs e)
{
    this.myConnection.Open();
    MessageBox.Show(this.myConnection.State.ToString());
    this.myConnection.Close();
}

2. Add the following code, which connects the event handler to the btnTest.Click event, to the end of the ConnectionProperties sub:

this.btnTest.Click += new EventHandler(this.btnTest_Click);

3. Press F5 to save and run the application.

4. Change the Connection Type and then click the Test button.

The application displays the Connection state.

5. Close the application.

Handling Connection Events

Both the OLE DB and the SQL Server Connection objects provide two events:

StateChange and InfoMessage.

StateChange Events

Not surprisingly, the StateChange event fires whenever the state of the Connection

object changes. The event passes a StateChangeEventArgs to its handler, which, in

turn, has two properties: OriginalState and CurrentState. The possible values for

OriginalState and CurrentState are shown in Table 2-5.

Table 2-5: Connection States

Broken: The Connection is open, but not functional. It may be closed and reopened
Closed: The Connection is closed
Connecting: The Connection is in the process of connecting, but has not yet been opened
Executing: The Connection is executing a command
Fetching: The Connection is retrieving data
Open: The Connection is open

Respond to a StateChange Event

Visual Basic .NET

1. Select OleDbConnection1 in the Class Name combobox of the editor

and the StateChange event in the Method Name combobox.

Visual Studio adds the event declaration to the class.

2. Add the following code to display the previous and current Connection

states:

Dim theMessage As String
theMessage = "The Connection is changing from " & _
    e.OriginalState.ToString & _
    " to " & e.CurrentState.ToString
MessageBox.Show(theMessage)

3. Repeat steps 1 and 2 for SqlDbConnection1.

4. Save and run the program.

5. Click the Test button.

The application displays MessageBoxes as the Connection is opened and

closed.


Visual C# .NET

1. Add the following procedure code to display the previous and current

Connection states for each of the two Connection objects:

private void oleDbConnection1_StateChange(object sender,
    StateChangeEventArgs e)
{
    string theMessage;
    theMessage = "The Connection State is changing from " +
        e.OriginalState.ToString() +
        " to " + e.CurrentState.ToString();
    MessageBox.Show(theMessage);
}

private void SqlDbConnection1_StateChange(object sender,
    StateChangeEventArgs e)
{
    string theMessage;
    theMessage = "The Connection State is changing from " +
        e.OriginalState.ToString() +
        " to " + e.CurrentState.ToString();
    MessageBox.Show(theMessage);
}

2. Add the code to connect the event handlers to the ConnectionProperties sub:

this.oleDbConnection1.StateChange += new
    System.Data.StateChangeEventHandler(this.oleDbConnection1_StateChange);
this.SqlDbConnection1.StateChange += new
    System.Data.StateChangeEventHandler(this.SqlDbConnection1_StateChange);

3. Save and run the program.

4. Change the Connection Type and then click the Test button.

The application displays two MessageBoxes as the Connection is opened and

closed.

InfoMessage Events

The InfoMessage event is triggered when the data source returns warnings. The

information passed to the event handler depends on the Data Provider.

Chapter 2 Quick Reference

To: Create a Server Explorer Connection
Do this: Click the Connect to Database button in the Server Explorer, or choose Connect to Database on the Tools menu

To: Add an instance of a Server Explorer Connection to a form
Do this: Drag the Connection from the Server Explorer to the form

To: Create a Connection using code
Do this: Use the New constructor. For example: Dim myConn As New OleDbConnection()

To: Use an intermediary variable to reference multiple types of Connections
Do this: Declare the variable as an IDbConnection. For example: Dim myConn As System.Data.IDbConnection

To: Open a Connection
Do this: Use the Open method. For example: myConn.Open

To: Close a Connection
Do this: Use the Close method. For example: myConn.Close

Chapter 3: Data Commands and the DataReader

Overview

In this chapter, you’ll learn how to:

§ Add a Data Command to a form

§ Create a Data Command at run time

§ Set Command properties at run time

§ Configure the Parameters collection in Microsoft Visual Studio .NET

§ Add and configure Parameters at run time

§ Set Parameter values

§ Execute a Command

§ Create a DataReader to return Command results

The Connection object that we examined in Chapter 2 represents the physical

connection to a data source; the conduit for exchanging information between an

application and the data source. The mechanism for this exchange is the Data

Command.

Understanding Data Commands and DataReaders

Essentially, an ADO.NET data command is simply a SQL command or a reference to a

stored procedure that is executed against a Connection object. In addition to retrieving

and updating data, the Data Command can be used to execute certain types of queries

on the data source that do not return a result set and to execute data definition (DDL)

commands that change the structure of the data source.

When a Data Command does return a result set, a DataReader is used to retrieve the

data. The DataReader object returns a read-only, forward-only stream of data from a

Data Command. Because only a single row of data is in memory at a time (unlike a

DataSet, which, as we’ll see in Chapter 6, stores the entire result set), a DataReader

requires very little overhead. The Read method of the DataReader is used to retrieve a

row, and the GetType methods (where Type is a system data type, such as GetString to

return a data string) return each column within the current row.

As part of the Data Provider, Data Commands and DataReaders are specific to a data

source. Each of the .NET Framework Data Providers implements a Command and a

DataReader object: OleDbCommand and OleDbDataReader in the System.Data.OleDb

namespace; and SqlCommand and SqlDataReader in the System.Data.SqlClient

namespace.
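
As a brief preview of the pattern (the query and column types here are illustrative, not part of the practice files), a Data Command returns its rows through a DataReader like this:

// Executes a SqlCommand and walks the resulting SqlDataReader.
// Assumes an open SqlConnection named cn and a Products table; adjust as needed.
using System;
using System.Data.SqlClient;

class ReaderSketch
{
    static void ListProducts(SqlConnection cn)
    {
        SqlCommand cmd = new SqlCommand("SELECT ProductID, ProductName FROM Products", cn);
        SqlDataReader dr = cmd.ExecuteReader();
        while (dr.Read())                       // one row in memory at a time
        {
            Console.WriteLine("{0}: {1}", dr.GetInt32(0), dr.GetString(1));
        }
        dr.Close();                             // free the connection for other commands
    }
}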

Creating Data Commands

Like most of the objects that can exist at the form level, Data Commands can either be

created and configured at design time in Visual Studio or at run time in code.

DataReaders can be created only at run time, using the ExecuteReader method of the

Data Command, as we’ll see later in this chapter.


Creating Data Commands in Visual Studio

A Command object is created in Visual Studio just like any other control—simply drag

the control off of the Data tab of the Toolbox and drop it on the form. Since the Data

Command has no user interface, like most of the objects we’ve covered, Visual Studio

will add the control to the Component Designer.

Add a Data Command to a Form at Design Time

In this exercise we’ll create and name a Data Command. We’ll configure its properties in

later lessons.

1. Open the DataCommands project from the Visual Studio start page or

from the Project menu.

2. Double-click DataCommands.vb (or DataCommands.cs, if you’re using

C#) in the Solution Explorer to open the form.

Visual Studio displays the form in the form designer.

3. Drag a SqlCommand control from the Data tab of the Toolbox to the

form.

Visual Studio adds the command to the form.

4. In the Properties window, change the name of the Command to

cmdGetEmployees.

Creating Data Commands at Run Time

Roadmap We’ll discuss the version of the Command constructor that

supports transactions in Chapter 5.


The Data Command supports four versions of its constructor, as shown in Table 3-1. The

New() version sets all the properties to their default values, while the other versions allow

you to set properties of the Command object during creation. Whichever version you

choose, of course, you can set or change property values after the Command is created.

Table 3-1: Command Constructors

New(): Creates a new, default instance of the Data Command
New(Command): Creates a new Data Command with the CommandText set to the string specified in Command
New(Command, Connection): Creates a new Data Command with the CommandText set to the string specified in Command and the Connection property set to the SqlConnection specified in Connection
New(Command, Connection, Transaction): Creates a new Data Command with the CommandText set to the string specified in Command, the Connection property set to the Connection specified in Connection, and the Transaction property set to the Transaction specified in Transaction
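
For example (a sketch, not part of the exercise that follows), the two-argument constructor sets the CommandText and the Connection in a single statement; cnNorthwind stands in for whatever Connection your form defines:

// Creates a Data Command with its CommandText and Connection set at construction.
// cnNorthwind is assumed to be an existing SqlConnection.
using System.Data.SqlClient;

class CommandConstructorSketch
{
    static SqlCommand Create(SqlConnection cnNorthwind)
    {
        return new SqlCommand("SELECT * FROM CustomerList", cnNorthwind);
    }
}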

Create a Command Object at Run Time

Once again, we will create the Command object in this exercise and set its properties

later in the chapter.

Visual Basic .NET

1. Press F7 to display the code editor window.

2. Add the following line after the Inherits statement:

Friend WithEvents cmdGetCustomers As System.Data.SqlClient.SqlCommand

This line declares the command variable. (One variable, cmdGetOrders, has

already been declared in the exercise project.)

3. Expand the region labeled ‘Windows Form Designer generated code’.

4. Add the following line to the end of the New Sub:

Me.cmdGetCustomers = New System.Data.SqlClient.SqlCommand()

This command instantiates the Command object using the default constructor.

(cmdGetOrders has already been instantiated.)

Visual C# .NET

1. Press F7 to display the code editor window.

2. Add the following line after the opening bracket of the class

declaration:

internal System.Data.SqlClient.SqlCommand cmdGetCustomers;

This line declares the command variable.

3. Scroll down to the frmDataCmds Sub.

4. Add the following line to the procedure after the InitializeComponent

call:

this.cmdGetCustomers = new System.Data.SqlClient.SqlCommand();

This command instantiates the Command object using the default constructor.

(cmdGetOrders has already been declared and instantiated.)

Command Properties

The properties exposed by the Data Command object are shown in Table 3-2. These

properties will only be checked for syntax errors when they are set. Final validation

occurs only when the Command is executed by a data source.

Table 3-2: Data Command Properties

CommandText: The SQL statement or stored procedure to execute
CommandTimeout: The time (in seconds) to wait for a response from the data source
CommandType: Indicates how the CommandText property is to be interpreted; defaults to Text
Connection: The Connection object on which the Data Command is to be executed
Parameters: The Parameters collection
Transaction: The Transaction in which the command will execute
UpdatedRowSource: Determines how results are applied to a DataRow when the Command is used by the Update method of a DataAdapter

The CommandText property, which is a string, contains either the actual text of the

command to be executed against the connection or the name of a stored procedure in

the data source.

The CommandTimeout property determines the time that the Command will wait for a

response from the server before it generates an error. Note that this is the wait time

before the Command begins receiving results, not the time it takes the command to

execute. The data source might take ten or fifteen minutes to return all the rows of a

huge table, but provided the first row is received within the specified CommandTimeout

period, no error will be generated.

The CommandType property tells the command object how to interpret the contents of

the CommandText property. The possible values are shown in Table 3-3. TableDirect is

only supported by the OleDbCommand, not the SqlCommand, and is equivalent to

SELECT * FROM <tablename>, where the <tablename> is specified in the

CommandText property.


Table 3-3: CommandType Values

StoredProcedure: The name of a stored procedure
TableDirect: A table name
Text: A SQL text command

The Connection property contains a reference to the Connection object on which the

Command will be executed. The Connection object must belong to the same namespace

as the Command object, that is, a SqlCommand must contain a reference to a

SqlConnection and an OleDbCommand must contain a reference to an

OleDbConnection.

The Command object’s Parameters property contains a collection of Parameters for the

SQL command or stored procedure specified in CommandText. We’ll examine this

collection in detail later in this exercise.

Roadmap We’ll examine the Transaction property in Chapter 5.

The Transaction property contains a reference to a Transaction object and serves to

enroll the Command in that transaction. We’ll examine this property in detail in Chapter

5.

Roadmap We’ll examine the DataAdapter in Chapter 4 and the

DataRow in Chapter 7.

The UpdatedRowSource property determines how results are applied to a DataRow

when the Command is executed by the Update method of the DataAdapter. The possible

values for the UpdatedRowSource property are shown in Table 3-4.

Table 3-4: UpdatedRowSource Values

Value                  Description
Both                   Both the output parameters and the first row returned by the Command will be mapped to the changed row
FirstReturnedRecord    The first row returned by the Command will be mapped to the changed row
None                   Any returned parameters or rows are discarded
OutputParameters       Output parameters of the Command will be mapped to the changed row

If the Data Command is generated automatically by Visual Studio, the default value of

the UpdatedRowSource property is None. If the Command is generated at run time or

created by the user at design time, the default value is Both.

Setting Command Properties at Design Time

As might be expected, the properties of a Command control created in Visual Studio are

set using the Properties window. In specifying the CommandText property, you can

either type the value directly or use the Query Builder to generate the required SQL

statement. You must specify the Connection property before you can set the

CommandText property.

Set Command Properties in Visual Studio

1. In the form designer, select cmdGetEmployees in the Component

Designer.

2. In the Properties window, select the Connection property, expand the

Existing node in the drop-down list, and then click cnNorthwind.

3. Select the CommandText property, and then click the ellipsis button.

Visual Studio displays the Query Builder’s Add Table dialog box.


4. Click the Views tab in the Add Table dialog box, and then click

EmployeeList.

5. Click Add, and then click Close.

Visual Studio adds EmployeeList to the Query Builder.

6. Select the check box next to (All Columns) in the Diagram pane of the

Query Builder to select all columns.

Visual Studio updates the SQL text in the SQL pane.


7. Click OK.

Visual Studio generates the SQL command and sets the CommandText

property in the Properties window.

Setting Command Properties at Run Time

The majority of the properties of the Command object are set by using simple

assignment statements. The exception is the Parameters collection, which, because it is
a collection, uses the Add method.

Set Command Properties at Run Time

Visual Basic .NET

1. In the Code window, add the following lines below the variable

instantiations of the New Sub:

2. Me.cmdGetCustomers.CommandText = "SELECT * FROM CustomerList"
3. Me.cmdGetCustomers.CommandType = CommandType.Text
Me.cmdGetCustomers.Connection = Me.cnNorthwind

4. The first line specifies the command to be executed on the

Connection—it simply returns all rows from the CustomerList view.

The second line specifies that the CommandText property is to be

treated as a SQL command, and the third line sets the Connection

on which the command is to be executed.

Visual C# .NET

1. In the Code window, add the following lines below the variable

instantiation:

2. this.cmdGetCustomers.CommandText = "SELECT * FROM CustomerList";
3. this.cmdGetCustomers.CommandType = CommandType.Text;
this.cmdGetCustomers.Connection = this.cnNorthwind;

4. The first line specifies the command to be executed on the

Connection—it simply returns all rows from the CustomerList view.

The second line specifies that the CommandText property is to be

treated as a SQL command, and the third line sets the Connection

on which the command is to be executed.


Using the Parameters Collection

There are three steps to using parameters in queries and stored procedures—you must

specify the parameters in the query or stored procedure, you must specify the

parameters in the Parameters collection, and finally you must set the parameter values.

If you’re using a stored procedure, the syntax for specifying parameters will be

determined by the data source when the stored procedure is created. If you are using

parameters in a SQL command specified in the CommandText property of the Command

object, the syntax requirement is determined by the .NET Data Provider.

Unfortunately, the two Data Providers supplied in the .NET Framework use different

syntax. OleDbCommand objects use a question mark (?) as a placeholder for a

parameter:

SELECT * FROM Customers WHERE CustomerID = ?

SqlCommand objects use named parameters, prefixed with the @ character:

SELECT * FROM Customers WHERE CustomerID = @custID
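As a rough C# sketch of the SqlClient syntax (the cnNorthwind connection variable and the "ALFKI" value are illustrative assumptions, not part of the sample project), a parameterized command might be put together like this:

    // Sketch: cnNorthwind is assumed to be a SqlConnection defined elsewhere.
    System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand(
        "SELECT * FROM Customers WHERE CustomerID = @custID", cnNorthwind);
    // Declare the named parameter, then assign its value before executing.
    cmd.Parameters.Add("@custID", System.Data.SqlDbType.NChar, 5);
    cmd.Parameters["@custID"].Value = "ALFKI";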

Having created the stored procedure or SQL command, you must then add each of the

parameters to the Parameters collection of the Command object. Again, if you are using

Visual Studio, it will configure the collection for you, but if you are creating or reconfiguring

the Command object at run time, you must use the Add method of the

Parameters collection to create a Parameter object for each parameter in the query or

stored procedure.

The Parameters collection provides a number of methods for configuring the collection at

run time. The most useful of these are shown in Table 3-5. Note that because the

OleDbCommand doesn’t support named parameters, the parameters will be substituted

in the order they are found in the Parameters collection. Because of this, it is important

that you configure the items in the collection correctly. (This can be a very difficult bug to

track, and yes, that is the voice of experience.)

Table 3-5: Parameters Collection Methods

Method                                 Description
Add(Value)                             Adds a new parameter with the specified Value at the end of the collection
Add(Parameter)                         Adds a Parameter to the end of the collection
Add(Name, Value)                       Adds a Parameter with the name specified in the Name string and the specified Value to the end of the collection
Add(Name, Type)                        Adds a Parameter of the specified Type with the name specified in the Name string to the end of the collection
Add(Name, Type, Size)                  Adds a Parameter of the specified Type and Size with the name specified in the Name string to the end of the collection
Add(Name, Type, Size, SourceColumn)    Adds a Parameter of the specified Type and Size with the name specified in the Name string to the end of the collection, and maps it to the DataTable column specified in the SourceColumn string
Clear                                  Removes all Parameters from the collection
Insert(Index, Value)                   Inserts a new Parameter with the specified Value at the position specified by the zero-based Index into the collection
Remove(Value)                          Removes the parameter with the specified Value from the collection
RemoveAt(Index)                        Removes the parameter at the position specified by the zero-based Index into the collection
RemoveAt(Name)                         Removes the parameter with the name specified by the Name string from the collection

Configure the Parameters Collection in Visual Studio

1. In the form designer, drag a SqlCommand object onto the form.

Visual Studio adds a new command to the Component Designer.

2. In the Properties window, change the new Command’s name to

cmdOrderCount.

3. In the Properties window, expand the Existing node in the Connection

property’s drop-down list, and then click cnNorthwind.

4. Select the CommandText property, and then click the ellipsis button.

Visual Studio opens the Query Builder and the Add Table dialog box.

5. Click the Views tab in the Add Table dialog box, and then click

OrderTotals.

6. Click Add, and then click Close.

Visual Studio adds OrderTotals to the Query Builder.

7. Change the SQL statement in the SQL pane to read as follows:

8. SELECT Count(*) AS OrderCount

9. FROM OrderTotals

WHERE (EmployeeID = @empID) AND (CustomerID = @custID)


10. Verify that the Regenerate parameters collection for this command

check box is selected, and then click OK.

Visual Studio displays a warning message.

11. Click Yes.

Visual Studio generates the CommandText property and the Parameters

collection.

12. In the Properties window, select the Parameters property, and then

click the ellipsis button.

Visual Studio displays the SqlParameter Collection Editor. Because the Query

Builder generated the parameters for us, there is nothing to do here.

However, you could add, change, or remove parameters as necessary.

13. Click OK.


Add and Configure Parameters at Run Time

Visual Basic .NET

1. Press F7 to display the code editor.

2. Add the following lines to the end of the New Sub:

3. Me.cmdGetOrders.Parameters.Add("@custID", SqlDbType.VarChar)
Me.cmdGetOrders.Parameters.Add("@empID", SqlDbType.Int)

Visual C# .NET

1. Press F7 to display the code editor.

2. Add the following lines after the property instantiations:

3. this.cmdGetOrders.Parameters.Add("@custID", SqlDbType.VarChar);
4. this.cmdGetOrders.Parameters.Add("@empID", SqlDbType.Int);

Set Parameter Values

After you have established the Parameters collection and before you execute the

command, you must set the values for each of the Parameters. This can be done only at

run time with a simple assignment statement.

Visual Basic .NET

1. In the Code Editor window, select btnOrderCount in the Object Name

list, and Click in the Method Name box.

Visual Studio adds the click event handler for the button.

2. Add the following code to the event handler:

3. Dim cnt As Integer

4. Dim strMsg As String

5.

6. Me.cmdOrderCount.Parameters("@empID").Value = _
7. Me.lbEmployees.SelectedItem("EmployeeID")
8. Me.cmdOrderCount.Parameters("@custID").Value = _
Me.lbClients.SelectedItem("CustomerID")

The code first declares a couple of variables that will be used in the next

exercise, and then sets the value of each of the parameters in the

cmdOrderCount.Parameters collection to the value of the Employees and

Clients list boxes, respectively.

Visual C# .NET

1. Add the following event handler to the code below the existing

btnGetOrders_Click procedure:

2. private void btnOrderCount_Click(object sender,

3. System.EventArgs e)

4. {

5. int cnt;

6. string strMsg;

7. System.Data.DataRowView drv;

8.

9. drv = (System.Data.DataRowView)

10. this.lbEmployees.SelectedItem;

11. this.cmdOrderCount.Parameters["@empID"].Value =
12. drv["EmployeeID"];
13. drv = (System.Data.DataRowView)
14. this.lbClients.SelectedItem;
15. this.cmdOrderCount.Parameters["@custID"].Value =
16. drv["CustomerID"];

17. }

The code first declares a couple of variables that will be used in the next

exercise, and then sets the value of each of the parameters in the

cmdOrderCount.Parameters collection to the value of the Employees and

Clients list boxes, respectively.

18. Connect the event handler to the click event by adding the following

line to the end of the frmDataCmds sub:

19. this.btnOrderCount.Click += new

EventHandler(this.btnOrderCount_Click);

Command Methods

The methods exposed by the Command object are shown in Table 3-6. Of these, the

most important are the four Execute methods: ExecuteNonQuery, ExecuteReader,

ExecuteScalar, and ExecuteXmlReader.

ExecuteNonQuery is used when the SQL command or stored procedure to be executed

returns no rows. An Update query, for example, would use the ExecuteNonQuery

method.

ExecuteScalar is used for SQL commands and stored procedures that return a single

value. The most common example of this sort of command is one that returns a count of

rows:

SELECT Count(*) from OrderTotals
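A minimal C# sketch of that pattern, assuming cnNorthwind is a closed SqlConnection to the sample database, might read:

    // Sketch: ExecuteScalar returns the first column of the first row as an object.
    System.Data.SqlClient.SqlCommand cmdCount = new System.Data.SqlClient.SqlCommand(
        "SELECT Count(*) FROM OrderTotals", cnNorthwind);
    cnNorthwind.Open();
    int total = (int) cmdCount.ExecuteScalar();   // cast the boxed count to an int
    cnNorthwind.Close();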

Table 3-6: Command Methods

Method                 Description
Cancel                 Cancels execution of a Data Command
CreateParameter        Creates a new parameter
ExecuteNonQuery        Executes a command against the Connection and returns the number of rows affected
ExecuteReader          Sends the CommandText to the Connection and builds a DataReader
ExecuteScalar          Executes the query and returns the first column of the first row of the result set
ExecuteXmlReader       Sends the CommandText to the Connection and builds an XmlReader
Prepare                Creates a prepared (compiled) version of the command on the data source
ResetCommandTimeout    Resets the CommandTimeout property to its default value

The ExecuteReader method is used for SQL Commands and stored procedures that

return multiple rows. The method creates a DataReader object. We’ll discuss

DataReaders in detail in the next section.

The ExecuteReader method may be executed with no parameters, or you can supply a

CommandBehavior value that allows you to control precisely how the Command will

perform. The values for CommandBehavior are shown in Table 3-7.

Table 3-7: CommandBehavior Values

Value               Description
CloseConnection     Closes the associated Connection when the DataReader is closed
KeyInfo             Indicates that the query returns column and primary key information
SchemaOnly          Returns the database schema only, without affecting any rows in the data source
SequentialAccess    The results of each column of each row will be accessed sequentially
SingleResult        Returns only a single value
SingleRow           Returns only a single row

Most of the CommandBehavior values are self-explanatory. Both KeyInfo and

SchemaOnly are useful if you cannot determine the structure of the command’s result

set prior to run time.

The SequentialAccess behavior allows the application to read large binary column

values using the GetBytes or GetChars methods of the DataReader, while the

SingleResult and SingleRow behaviors can be optimized by the Data Provider.
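As an illustration only (the command and connection names are assumptions carried over from the exercises), the CloseConnection behavior is typically used like this in C#:

    // Sketch: the Connection is closed automatically when the DataReader is closed.
    System.Data.SqlClient.SqlDataReader rdr =
        cmdGetCustomers.ExecuteReader(System.Data.CommandBehavior.CloseConnection);
    while (rdr.Read())
    {
        // process the current row here
    }
    rdr.Close();   // also closes the associated Connection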

Execute a Command

Visual Basic .NET

§ Add the following code to the btnOrderCount_Click event handler that

we began in the last exercise:

§ Me.cnNorthwind.Open()

§ cnt = Me.cmdOrderCount.ExecuteScalar

§ Me.cnNorthwind.Close()

§

§ strMsg = "There are " & cnt.ToString & " Orders for this "
§ strMsg &= "Employee/Customer combination."

§ MessageBox.Show(strMsg)

The first three lines of code open the cnNorthwind Connection, call the ExecuteScalar

method to return a single value from the cmdOrderCount Command, and then close

the Connection. The last three lines simply display the results in a message box.

Visual C# .NET

§ Add the following code to the btnOrderCount_Click event handler that

we began in the last exercise:

§ this.cnNorthwind.Open();

§ cnt = (int) this.cmdOrderCount.ExecuteScalar();
§ this.cnNorthwind.Close();
§
§ strMsg = "There are " + cnt.ToString() + " Orders for this ";
§ strMsg += "Employee/Customer combination.";

MessageBox.Show(strMsg);

The first three lines of code open the cnNorthwind Connection, call the ExecuteScalar

method to return a single value from the cmdOrderCount Command, and then close

the Connection. The last three lines simply display the results in a message box.

DataReaders

The DataReader’s properties are shown in Table 3-8. The Item property supports two

versions: Item(Name), which takes a string specifying the name of the column as a

parameter, and Item(Index), which takes an Int32 as an index into the columns

collection. (As with all collections in the .NET Framework, the collection index is zero-based.)

Table 3-8: DataReader Properties

Property           Description
Depth              The depth of nesting for the current row in hierarchical result sets. SQL Server always returns zero.
FieldCount         The number of columns in the current row.
IsClosed           Indicates whether the DataReader is closed.
Item               The value of a column.
RecordsAffected    The number of rows changed, inserted, or deleted.

The methods exposed by the DataReader are shown in Table 3-9. The Close method, as

we’ve seen, closes the DataReader and, if the CloseConnection behavior has been

specified, closes the Connection as well. The GetDataTypeName, GetFieldType,

GetName, GetOrdinal and IsDbNull methods allow you to determine, at run time, the

properties of a specified column.

Note that IsDbNull is the only way to check for a null value, since the .NET Framework

doesn’t have an intrinsic Null data type.
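GetOrdinal and IsDbNull, for example, are often used together. The following hypothetical C# sketch (the rdr variable and the CompanyName column are assumptions, not part of the exercises) looks a column up by name once, then tests it for null on each row:

    // Sketch: resolve the column name to an ordinal once, outside the read loop.
    int idx = rdr.GetOrdinal("CompanyName");
    while (rdr.Read())
    {
        // Substitute a placeholder string when the column contains a null value.
        string name = rdr.IsDBNull(idx) ? "(none)" : rdr.GetString(idx);
    }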

Table 3-9: DataReader Methods

Method             Description
Close              Closes the DataReader
GetType            Gets the value of the specified column as the specified type
GetDataTypeName    Gets the name of the data source type
GetFieldType       Returns the system type of the specified column
GetName            Gets the name of the specified column
GetOrdinal         Gets the ordinal position of the column specified
GetSchemaTable     Returns a DataTable that describes the structure of the DataReader
GetValue           Gets the value of the specified column as its native type
GetValues          Gets all the columns in the current row
IsDbNull           Indicates whether the column contains a nonexistent value
NextResult         Advances the DataReader to the next result
Read               Advances the DataReader to the next row

The Read method retrieves the next row of the result set. When the DataReader is first

opened, it is positioned before the first row of the result set, not at the first row.

You must call Read before the first row of the result set will be returned.

The NextResult method is used when a SQL command or stored procedure returns

multiple result sets. It positions the DataReader at the beginning of the next result set.

Again, the DataReader will be positioned before the first row, and you must call Read

before accessing any results.
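The usual C# pattern for a multiple-result command, sketched here with an assumed rdr DataReader, nests Read inside NextResult:

    // Sketch: process every result set returned by the batch or stored procedure.
    do
    {
        while (rdr.Read())
        {
            // process the current row of the current result set
        }
    } while (rdr.NextResult());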


The GetValues method returns all of the columns in the current row as an object array,

while the GetValue method returns a single value as one of the .NET Framework types.

However, if you know the data type of the value to be returned in advance, it is more

efficient to use one of the GetType methods shown in Table 3-10.

Note The SqlDataReader object supports additional GetType methods
for values in the System.Data.SqlTypes namespace. They are detailed
in online help.

Table 3-10: GetType Methods

Method Name    Method Name    Method Name
GetBoolean     GetFloat       GetInt16
GetByte        GetGuid        GetInt32
GetBytes       GetDateTime    GetInt64
GetChar        GetDecimal     GetString
GetChars       GetDouble      GetTimeSpan

Create a DataReader to Return Command Results

Visual Basic .NET

1. In the code editor window, select btnFillLists in the Object Name list,

and Click in the Method Name box.

Visual Studio adds the click event handler to the code.

2. Add the following variable declarations to the event handler:

3. Dim dr As System.Data.DataRow

4. Dim rdrEmployees As System.Data.SqlClient.SqlDataReader

Dim rdrCustomers As System.Data.SqlClient.SqlDataReader

5. Add the following code to fill the EmployeeList table:

6. Me.cnNorthwind.Open()

7. rdrEmployees = Me.cmdGetEmployees.ExecuteReader()

8.

9. With rdrEmployees

10. While .Read

11. dr = Me.dsMaster1.EmployeeList.NewRow

12. dr(0) = .GetInt32(0)

13. dr(1) = .GetString(1)

14. dr(2) = .GetString(2)

15. Me.dsMaster1.EmployeeList.Rows.Add(dr)

16. End While

17. End With

18. rdrEmployees.Close()

19. Me.cnNorthwind.Close()

Roadmap We’ll examine the DataSet in Chapter 6.

20. The code first opens the Connection, and then creates the

DataReader with the ExecuteReader method. The While .Read loop

first creates a new DataRow, retrieves each column from the

DataRow and assigns its value to a column, and then adds the new

row to the EmployeeList table. Finally, the DataReader and the

Connection are closed.

21. Add the final code to the procedure:


22. Me.cnNorthwind.Open()

23. rdrCustomers = Me.cmdGetCustomers.ExecuteReader()

24. With rdrCustomers

25. While .Read

26. dr = Me.dsMaster1.CustomerList.NewRow

27. dr(0) = .GetString(0)

28. dr(1) = .GetString(1)

29. Me.dsMaster1.CustomerList.Rows.Add(dr)

30. End While

31. End With

32. rdrCustomers.Close()

Me.cnNorthwind.Close()

This code is almost identical to the previous section, except that it uses the

cmdGetCustomers command to fill the CustomerList table. Note that the

Connection is closed and re-opened between calls to the ExecuteReader

method. This is necessary because the Connection will return a status of

Busy until either it or the DataReader is explicitly closed.

33. Press F5 to run the application.

34. Click Fill Lists.

35. Select different combinations of Employee and Customer, and then

click Order Count, and, if you like, click Get Orders.


The Get Orders button click event handler, which is provided for you, also

calls the ExecuteReader method, but this time against the cmdGetOrders

object.

Visual C# .NET

1. Create the following event handler in the code editor window:

2. private void btnFillLists_Click(object sender, System.EventArgs e)

3. {

4. System.Data.DataRow dr;

5. System.Data.SqlClient.SqlDataReader rdrEmployees;

6. System.Data.SqlClient.SqlDataReader rdrCustomers;

}

7. Add the following code to fill the EmployeeList table:

8. this.cnNorthwind.Open();

9. rdrEmployees = this.cmdGetEmployees.ExecuteReader();

10.

11. while (rdrEmployees.Read())

12. {

13. dr = this.dsMaster1.EmployeeList.NewRow();

14. dr[0] = rdrEmployees.GetInt32(0);

15. dr[1] = rdrEmployees.GetString(1);

16. dr[2] = rdrEmployees.GetString(2);

17. this.dsMaster1.EmployeeList.Rows.Add(dr);

18. }

19.

20. rdrEmployees.Close();

this.cnNorthwind.Close();

Roadmap We’ll examine the DataSet in Chapter 6.

The code first opens the Connection, and then creates the DataReader with

the ExecuteReader method. The while (rdrEmployees.Read()) loop first

creates a new DataRow, retrieves each column from the DataRow and

assigns its value to a column, and then adds the new row to the EmployeeList

table. Finally, the DataReader and the Connection are closed.

21. Add the final code to the procedure:

22. this.cnNorthwind.Open();


23. rdrCustomers = this.cmdGetCustomers.ExecuteReader();

24.

25. while (rdrCustomers.Read())

26. {

27. dr = this.dsMaster1.CustomerList.NewRow();

28. dr[0] = rdrCustomers.GetString(0);

29. dr[1] = rdrCustomers.GetString(1);

30. this.dsMaster1.CustomerList.Rows.Add(dr);

31. }

32.

33. rdrCustomers.Close();

34. this.cnNorthwind.Close();

This code is almost identical to the previous section, except that it uses the

cmdGetCustomers command to fill the CustomerList table. Note that the

Connection is closed and re-opened between calls to the ExecuteReader

method. This is necessary because the Connection will return a status of

Busy until either it or the DataReader is explicitly closed.

35. Link the event handler to the event by adding the following line to

the frmDataCmds sub:

36. this.btnFillLists.Click += new EventHandler(this.btnFillLists_Click);

37. Press F5 to run the application.

38. Click Fill Lists.


39. Select different combinations of Employee and Customer, and then

click Order Count, and, if you like, click Get Orders.

The Get Orders button click event handler, which is provided for you, also

calls the ExecuteReader method, but this time against the cmdGetOrders

object.

Chapter 3 Quick Reference

To                                                      Do this
Add a Data Command to a form                            Drag an OleDbCommand or SqlCommand from the Data tab of the Toolbox to the form.
Create a Data Command at run time                       Use one of the New constructors. For example: Dim myCmd As New System.Data.SqlClient.SqlCommand()
Configure the Parameters collection in Visual Studio    Click the ellipsis button in the Parameters property of the Properties window.
Add and configure Parameters at run time                Use one of the Add methods of the Parameters collection. For example: mySqlCmd.Parameters.Add("@myParam", SqlDbType.Type)
Execute a Command that doesn't return a result          Use the ExecuteNonQuery method. For example: intResults = myCmd.ExecuteNonQuery()
Execute a Command that returns a single value           Use the ExecuteScalar method. For example: myResult = myCmd.ExecuteScalar()
Create a DataReader to return Command results           Use the ExecuteReader method. For example: myReader = myCmd.ExecuteReader()

Chapter 4: The DataAdapter

Overview

In this chapter, you’ll learn how to:

§ Create a DataAdapter

§ Preview the results of a DataAdapter

§ Set a DataAdapter’s properties

§ Use the Table Mappings dialog box

§ Use the DataAdapter’s methods

§ Respond to DataAdapter events

In this chapter, we'll examine the DataAdapter, which sits between the Connection and
Command objects we looked at in the previous chapters and the DataSet, which we'll examine in Chapter 6.

Understanding the DataAdapter

Like the Connection and Command objects, the DataAdapter is part of the Data

Provider, and there is a version of the DataAdapter specific to each Data Provider. In the

release version of the .NET Framework, this means the OleDbDataAdapter in the

System.Data.OleDb namespace and the SqlDataAdapter in the System.Data.SqlClient

namespace. Both of these objects inherit from System.Data.Common.DbDataAdapter, which in
turn inherits from System.Data.Common.DataAdapter.

DataAdapters act as the ‘glue’ between a data source and the DataSet object. In very

abstract terms, the DataAdapter receives the data from the Connection object and

passes it to the DataSet. It then passes changes back from the DataSet to the

Connection to update the data in the data source. (Remember that the data source can

be any kind of data, not just a database.)

Tip Typically, there is a one-to-one relationship between a DataAdapter

and a DataTable within a DataSet, but a SelectCommand that

returns multiple result sets may link to multiple tables in the

DataSet.

To perform updates on the data source, DataAdapters contain references to four Data

Commands, one for each possible action: SelectCommand, UpdateCommand,

InsertCommand, and DeleteCommand.

Note With the exception of some minor differences in the Fill method,

which we’ll look at later, the SqlDataAdapter and

OleDbDataAdapter have identical properties, methods, and

events. For the sake of simplicity, we’ll only use the

SqlDataAdapter in this chapter, but all of the code samples will


work equally well with OleDb if you change the class names of the

objects.

Creating DataAdapters

Microsoft Visual Studio .NET provides several different methods for creating

DataAdapters interactively. We saw one in Chapter 1, when we used the Data Adapter

Configuration Wizard, and we’ll explore a couple more in this section. Of course, if you

need to, you can create a DataAdapter manually in code, and we’ll look at that in this

section, as well.

Using the Server Explorer

If you have created a design time connection to a data source in the Server Explorer,

you can automatically create a DataAdapter by dragging the appropriate table, query, or

stored procedure onto your form. If you don’t already have a connection on the form,

Visual Studio will create a preconfigured connection as well.

Create a DataAdapter from the Server Explorer

1. Open the DataAdapters project from the Visual Studio start page or by

using the Open menu.

2. In the Solution Explorer, double-click DataAdapters.vb (or

DataAdapters.cs, if you’re using C#) to open the form.

Visual Studio displays the form in the form designer.

3. In the Server Explorer, expand the SQL Northwind connection (the

name of the Connection will depend on your system configuration),

and then expand its Tables collection.


4. Drag the Categories table onto the form.

Visual Studio adds an instance of the SqlDataAdapter and, because one didn't
already exist, an instance of the SqlConnection to the component designer.

5. Select the SqlDataAdapter1 on the form, and then in the Properties

window, change its name to daCategories.

Using the Toolbox

As we saw in Chapter 1, if you drag a DataAdapter from the Toolbox (either an

SqlDataAdapter or an OleDbDataAdapter), Visual Studio will start the Data Adapter

Configuration Wizard. If you want to configure the DataAdapter manually, you can simply


cancel the wizard and set the DataAdapter’s properties using code or the Properties

window.

Create a DataAdapter Using the Toolbox

In this exercise, we’ll only create the DataAdapter. We’ll set its properties later in the

chapter.

1. In the Toolbox, drag a SqlDataAdapter from the Data tab onto the

form.

Visual Studio starts the Data Adapter Configuration Wizard.

2. Click Cancel.

Visual Studio creates an instance of the SqlDataAdapter in the component

designer.

3. In the Properties window, change the name of the DataAdapter to

daProducts.

Creating DataAdapters at Run Time

When we created ADO.NET objects in code in previous chapters, we first declared them

and then initialized them. The process is essentially the same to create a DataAdapter,

but it has a little twist—because a DataAdapter references four command objects, you

must also declare and instantiate each of the commands, and then set the DataAdapter

to reference them.

Create a DataAdapter in Code

Visual Basic .NET

1. Press F7 to display the code for the DataAdapters form.

2. Type the following statements after the Inherits statement:

3. Friend WithEvents cmdSelectSuppliers As New _

4. System.Data.SqlClient.SqlCommand()

5. Friend WithEvents cmdInsertSuppliers As New _

6. System.Data.SqlClient.SqlCommand()

7. Friend WithEvents cmdUpdateSuppliers As New _

8. System.Data.SqlClient.SqlCommand()

9. Friend WithEvents cmdDeleteSuppliers As New _

10. System.Data.SqlClient.SqlCommand()

11. Friend WithEvents daSuppliers As New _

System.Data.SqlClient.SqlDataAdapter()

These lines declare the four command objects and the DataAdapter, and

initialize each object with its default constructor.

12. Open the region labeled “Windows Form Designer generated code”

and add the following lines to the New Sub after the call to

InitializeComponent:

13. Me.daSuppliers.DeleteCommand = Me.cmdDeleteSuppliers

14. Me.daSuppliers.InsertCommand = Me.cmdInsertSuppliers

15. Me.daSuppliers.SelectCommand = Me.cmdSelectSuppliers

Me.daSuppliers.UpdateCommand = Me.cmdUpdateSuppliers

These lines assign the four Command objects to the daSuppliers

DataAdapter.


Visual C# .NET

1. Press F7 to display the code for the DataAdapters form.

2. Type the following statements at the beginning of the class definition:

3. private System.Data.SqlClient.SqlCommand cmdSelectSuppliers;

4. private System.Data.SqlClient.SqlCommand cmdInsertSuppliers;

5. private System.Data.SqlClient.SqlCommand

cmdUpdateSuppliers;

6. private System.Data.SqlClient.SqlCommand cmdDeleteSuppliers;

7. private System.Data.SqlClient.SqlDataAdapter daSuppliers;

These lines declare the four Command objects and the DataAdapter.

8. Scroll down to the DataAdapters function and add the following lines

after the call to InitializeComponent:

9. this.cmdDeleteSuppliers = new

10. System.Data.SqlClient.SqlCommand();

11. this.cmdInsertSuppliers = new

12. System.Data.SqlClient.SqlCommand();

13. this.cmdSelectSuppliers = new

14. System.Data.SqlClient.SqlCommand();

15. this.cmdUpdateSuppliers = new

16. System.Data.SqlClient.SqlCommand();

17. this.daSuppliers = new

System.Data.SqlClient.SqlDataAdapter();

These lines instantiate each object using the default constructor.

18. Add the following lines to assign the four command objects to the

daSuppliers DataAdapter:

19. this.daSuppliers.DeleteCommand = this.cmdDeleteSuppliers;

20. this.daSuppliers.InsertCommand = this.cmdInsertSuppliers;

21. this.daSuppliers.SelectCommand = this.cmdSelectSuppliers;

this.daSuppliers.UpdateCommand = this.cmdUpdateSuppliers;

Previewing Results

Visual Studio provides a quick and easy method to check the configuration of a form-level
DataAdapter: the DataAdapter Preview dialog box.

Preview the Results of a DataAdapter

1. Make sure that daCategories is selected in the component designer.

2. Select Preview Data in the bottom portion of the Properties window.

Visual Studio opens the DataAdapter Preview window.


3. Click Fill Dataset.

Visual Studio displays the rows returned by the DataAdapter.

4. Click Close.

Visual Studio closes the DataAdapter Preview window.

DataAdapter Properties

The properties exposed by the DataAdapter are shown in Table 4-1. The

SqlDataAdapter and OleDbDataAdapter objects expose the same set of properties.

Table 4-1: DataAdapter Properties

Property                   Description
AcceptChangesDuringFill    Determines whether AcceptChanges is called on a DataRow after it is added to the DataTable
DeleteCommand              The Data Command used to delete rows in the data source
InsertCommand              The Data Command used to insert rows in the data source
MissingMappingAction       Determines the action that will be taken when incoming data cannot be matched to an existing table or column
MissingSchemaAction        Determines the action that will be taken when incoming data does not match the schema of an existing DataSet
SelectCommand              The Data Command used to retrieve rows from the data source
TableMappings              A collection of DataTableMapping objects that determine the relationship between the columns in a DataSet and the data source
UpdateCommand              The Data Command used to update rows in the data source

Roadmap We'll examine AcceptChanges in Chapter 9.

The AcceptChangesDuringFill property determines whether the AcceptChanges method

is called for each row that is added to a DataSet. The default value is true.


The MissingMappingAction property determines how the system reacts when a

SelectCommand returns columns or tables that are not found in the DataSet. The

possible values are shown in Table 4-2. The default value is Passthrough.

Table 4-2: MissingMappingAction Values

Value          Description
Error          Throws a SystemException
Ignore         Ignores any columns or tables not found in the DataSet
Passthrough    The column or table that is not found is added to the DataSet, using its name in the data source

Similarly, the MissingSchemaAction property determines how the system will respond if

a column is missing in the DataSet. The MissingSchemaAction property takes effect
only if the MissingMappingAction is set to Passthrough. The possible values are shown

in Table 4-3. The default value is Add.

Table 4-3: MissingSchemaAction Values

Value         Description
Add           Adds the necessary columns to the DataSet
AddWithKey    Adds both the necessary columns and tables and PrimaryKey constraints
Error         Throws a SystemException
Ignore        Ignores the extra columns
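A brief C# sketch, assuming a DataAdapter named daCategories like the one used in this chapter, shows how these two properties might be set in code:

    // Sketch: unmapped tables and columns are passed through, and any missing
    // schema is added to the DataSet along with primary key constraints.
    daCategories.MissingMappingAction = System.Data.MissingMappingAction.Passthrough;
    daCategories.MissingSchemaAction = System.Data.MissingSchemaAction.AddWithKey;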

In addition, the DataAdapter has two sets of properties that we’ll examine in detail: the

set of Command objects that tell it how to update the data source to reflect changes

made to the DataSet and a TableMappings property that maintains the relationship

between columns in a DataSet and columns in the data source.

DataAdapter Commands

As we’ve seen, each DataAdapter contains references to four Command objects, each of

which has a CommandText property that contains the actual SQL command to be

executed.


If you create a DataAdapter by using the Data Adapter Configuration Wizard or by

dragging a table, view, or stored procedure from the Server Explorer, Visual Studio will

attempt to automatically generate the CommandText property for each command. You

can also edit the SQL command in the Properties window, although you must first

associate the command with a Connection object.

Note Every DataAdapter command must be associated with a

Connection. In most cases, you will use a single Connection for all

of the commands, but this isn’t a requirement. You can associate a

different Connection with each command, if necessary.

You must specify the CommandText property for the SelectCommand object, but the

.NET Framework can generate the commands for update, insert, and delete if they are

not specified.

Internally, Visual Studio uses the CommandBuilder object to generate commands. You

can instantiate a CommandBuilder object in code and use it to generate commands as

required. However, you must be aware of the CommandBuilder’s limitations. It cannot

handle parameterized stored procedures, for example.
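As a hedged C# sketch (using the daSuppliers adapter declared earlier in this chapter), a CommandBuilder can be attached in a single line:

    // Sketch: the builder derives Insert, Update, and Delete commands from the
    // adapter's SelectCommand, which must return at least one primary key or
    // unique column for the generated commands to work.
    System.Data.SqlClient.SqlCommandBuilder cb =
        new System.Data.SqlClient.SqlCommandBuilder(daSuppliers);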

Set CommandText in the Properties Window

1. Select the daProducts object in the form designer, and then in the

Properties window, expand the Select Command properties.

2. Select the SelectCommand’s Connection property, expand the

Existing node in the list, and then choose SqlConnection1.


3. Select the CommandText property, and then click the ellipsis

button.Visual Studio opens the Query Builder and opens the Add

Table dialog box.

4. Select the Products table, click Add, and then click Close.

Visual Studio closes the Add Table dialog box and adds the table to the

Query Builder.

5. Add the CategoryID, ProductID, and ProductName columns to the

query by selecting each column’s check box.

6. Click OK.


Visual Studio generates the CommandText property.

Set CommandText in Code

Visual Basic .NET

§ In the code editor, add the following lines of code to the bottom of the

New Sub:

§ Me.cmdSelectSuppliers.CommandText = "SELECT * FROM Suppliers"
Me.cmdSelectSuppliers.Connection = Me.SqlConnection1

Visual C# .NET

§ In the code editor, add the following lines to the bottom of the

DataAdapters Sub:

§ this.cmdSelectSuppliers.CommandText = "SELECT * FROM Suppliers";
this.cmdSelectSuppliers.Connection = this.sqlConnection1;

The TableMappings Collection

A DataSet has no knowledge of where the data it contains comes from, and a

Connection has no knowledge of what happens to the data it retrieves. The DataAdapter

maintains the connection between the two. It does this by using the TableMappings

collection.

The structure of the TableMappings collection is shown in the following figure. At the

highest level, the TableMappings collection contains one or more DataTableMapping

objects. Typically, there is only one DataTableMapping object because most

DataAdapters return only a single record set. However, if a DataAdapter manages

multiple record sets, as might be the case with a stored procedure that returns multiple

result sets, there will be a DataTableMapping object for each record set.

The DataTableMapping object is another collection, which contains one or more

DataColumnMapping objects. The DataColumnMapping object consists of two

properties: the SourceColumn, which is the case-sensitive name of the column within the

data source, and the DataSetColumn, which is the case-insensitive name of the column

within the DataSet. There is a DataColumnMapping object for each column managed by

the DataAdapter.


By default, the .NET Framework will create a TableMappings collection (and all of the

objects it contains) with the DataSetColumn name set to the SourceColumn name. There

are times, however, when this isn’t what you want. For example, you might want to

change the mappings for reasons of convenience or because you're working with a pre-existing
DataSet with different column names.
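The same kind of mapping can also be created in code. The following C# sketch (the column names are taken from the Categories table used in these exercises) maps the data source's CategoryName column to a DataSet column called Name:

    // Sketch: map the default "Table" record set to Categories, then remap a column.
    System.Data.Common.DataTableMapping map =
        daCategories.TableMappings.Add("Table", "Categories");
    map.ColumnMappings.Add("CategoryName", "Name");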

Change a DataSet Column Name Using the Table Mappings Dialog Box

1. Select the daCategories DataAdapter in the form designer.

2. In the Properties window, expand the Mapping properties.

3. Select the TableMappings property and click the ellipsis button.

Visual Studio displays the Table Mappings dialog box.


4. Change the name of the Dataset column from CategoryName to

Name.

5. Click OK.

Visual Studio updates the collection.

DataAdapter Methods

The DataAdapter supports two important methods: Fill, which loads data from the data

source into the DataSet, and Update, which transfers data the other direction—loading it

from the DataSet into the data source. We’ll examine both in this set of exercises.

Generating DataSets and Binding Data

Roadmap We’ll examine DataSets in Chapter 6.

Before we can examine the Fill and Update methods, we must create and link the

DataSets to be used to store the data. We haven’t examined DataSets yet (we’ll do that

in Chapter 6), so just follow the steps outlined and try not to worry about them.

Generate and Bind DataSets

1. Select the daCategories DataAdapter in the form designer.

2. On the Data menu, choose Generate Dataset.


Visual Studio displays the Generate Dataset dialog box.

3. In the New text box, change the name of the new DataSet to

dsCategories.

4. Click OK.

Visual Studio creates the dsCategories DataSet and adds an instance of it to

the form designer.

5. Repeat steps 1 through 4 for the daProducts DataAdapter. Name the

new DataSet dsProducts.


6. Select the dgCategories object in the drop-down list box of the

Properties window.

7. In the Properties window, expand the DataBindings section.

8. Select dsCategories1 in the DataSource list.


9. Select Categories in the DataMember list.


10. Repeat steps 6 through 9 for the dgProducts control, binding it to the

dsProducts1 DataSource and Table DataMember.

The Fill Method

The Fill method loads data from a data source into one or more tables of a DataSet by

using the command specified in the DataAdapter’s SelectCommand. The

DbDataAdapter object, from which both the OleDbDataAdapter and the SqlDataAdapter

are inherited, supports several variations of the Fill method, as shown in Table 4-4.

Table 4-4: DbDataAdapter Fill Methods

Method                                                                         Description
Fill(DataSet)                                                                  Creates a DataTable named Table and populates it with the rows returned from the data source
Fill(DataTable)                                                                Fills the specified DataTable with the rows returned from the data source
Fill(DataSet, tableName)                                                       Fills the DataTable named in the tableName string, within the DataSet specified, with the rows returned from the data source
Fill(DataTable, DataReader)                                                    Fills the DataTable using the specified DataReader (because DataReader is declared as an IDataReader, either an OleDbDataReader or a SqlDataReader can be used)
Fill(DataTable, command, CommandBehavior)                                      Fills the DataTable using the SQL string passed in command and the specified CommandBehavior
Fill(DataSet, startRecord, maxRecords, tableName)                              Fills the DataTable specified in the tableName string, beginning at the zero-based startRecord and continuing for maxRecords or until the end of the result set
Fill(DataSet, tableName, DataReader, startRecord, maxRecords)                  Fills the DataTable specified in the tableName string, beginning at the zero-based startRecord and continuing for maxRecords or until the end of the result set, using the specified DataReader (because DataReader is declared as an IDataReader, either an OleDbDataReader or a SqlDataReader can be used)
Fill(DataSet, startRecord, maxRecords, tableName, command, CommandBehavior)    Fills the DataTable specified in the tableName string, beginning at the zero-based startRecord and continuing for maxRecords or until the end of the result set, using the SQL text contained in command and the specified CommandBehavior

In addition, the OleDbDataAdapter supports the two additional versions of the Fill

method shown in Table 4-5, which are used to load data from Microsoft ActiveX Data

Objects (ADO).

Table 4-5: OleDbDataAdapter Fill Methods

Method                                 Description
Fill(DataTable, adoObject)             Fills the specified DataTable with rows from the ADO Recordset or Record object specified in adoObject
Fill(DataSet, adoObject, tableName)    Fills the specified DataTable with rows from the ADO Recordset or Record object specified in adoObject, using the DataTable specified in the tableName string to determine the TableMappings

The SqlDataAdapter supports only the methods provided by the DbDataAdapter.

DataAdapters included in other Data Providers can, of course, support additional

versions of the Fill method.

Important The Microsoft SQL Server decimal data type allows a

maximum of 38 significant digits, while the .NET Framework

decimal type only allows a maximum of 28. If a row in a SQL

table contains a decimal field with more than 28 significant

digits, the row will not be added to the DataSet and a

FillError will be raised.
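If you need the Fill to continue when such a row is encountered, the DataAdapter exposes a FillError event you can handle. The sketch below is illustrative only; the daProducts adapter and the handler name are assumptions:

    // Sketch: skip rows that cannot be converted instead of aborting the Fill.
    // (The subscription line belongs in the form's constructor, after InitializeComponent.)
    daProducts.FillError += new System.Data.FillErrorEventHandler(daProducts_FillError);

    private void daProducts_FillError(object sender, System.Data.FillErrorEventArgs e)
    {
        // e.Errors describes the problem; setting Continue to true skips the row.
        e.Continue = true;
    }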

Use the Fill Method

Visual Basic .NET

1. Press F7 to display the code editor for the DataAdapters form.

2. Select btnFill in the ClassName list, and then select Click in the

MethodName list.

Visual Studio displays the Click event handler template.

3. Add the following lines of code to the Sub to clear each DataSet:

4. Me.dsCategories1.Clear()

Me.dsProducts1.Clear()

5. Add the following code to fill each DataSet from the DataAdapters:

6. Me.daCategories.Fill(Me.dsCategories1.Categories)

7. Me.daProducts.Fill(Me.dsProducts1.Table)

8. Press F5 to run the program.


9. Click Fill.

10. Verify that each of the data grids has been filled correctly, and then

close the application.

Visual C# .NET

1. Double-click the Fill button.

Visual Studio adds a Click event handler to the code window.

2. Add the following code to the event handler:

3. private void btnFill_Click(object sender, System.EventArgs e)

4. {

5. this.dsCategories1.Clear();

6. this.dsProducts1.Clear();

}

These lines clear the contents of each DataSet.

7. Add the following code to fill each DataSet from the DataAdapters:

8. this.daCategories.Fill(this.dsCategories1.Categories);

this.daProducts.Fill(this.dsProducts1._Table);

9. Press F5 to run the program.


10. Click Fill.

11. Verify that each of the data grids has been filled correctly, and then

close the application.

The Update Method

Remember that the DataSet doesn’t retain any knowledge about the source of the data it

contains, and that the changes you make to DataSet rows aren’t automatically

propagated back to the data source. You must use the DataAdapter’s Update method to

do this. The Update method calls the DataAdapter’s InsertCommand, DeleteCommand,

or UpdateCommand, as appropriate, for each row in a DataSet that has changed.

The System.Data.Common.DbDataAdapter, which you will recall is the DataAdapter

class from which relational database Data Providers inherit their DataAdapters, supports

a number of versions of the Update method, as shown in Table 4-6. Neither the

SqlDataAdapter nor the OleDbDataAdapter add any additional versions.

Table 4-6: DbDataAdapter Update Methods

Method                                Description
Update(DataSet)                       Updates the data source from a DataTable named Table in the specified DataSet
Update(dataRows)                      Updates the data source from the specified array of dataRows
Update(DataTable)                     Updates the data source from the specified DataTable
Update(dataRows, DataTableMapping)    Updates the data source from the specified array of dataRows, using the specified DataTableMapping
Update(DataSet, sourceTable)          Updates the data source from the DataTable specified in sourceTable in the specified DataSet

Update a Data Source Using the Update Method

Visual Basic .NET

1. In the code editor, select the btnUpdate control in the ControlName

list, and then select the Click event in the MethodName list.

Visual Studio displays the Click event handler template.

2. Add the following code to call the Update method:

Me.daCategories.Update(Me.dsCategories1.Categories)

3. Press F5 to run the application.

4. Click Fill.

The application fills the data grids.

Tip You can drag the data grid’s column headings to widen them.

5. Click the CategoryName of the first row, and then change its value

from Beverages to Old Beverages.


6. Click Update.

The application updates the data source.

7. Click Fill to ensure that the change has been propagated to the data

source.

8. Close the application.

Visual C# .NET

1. Add the following event handler in the code editor, below the

btnFill_Click handler we added in the previous exercise:

2. private void btnUpdate_Click (object sender, System.EventArgs

e)

3. {

4. this.daCategories.Update(this.dsCategories1.Categories);

}

5. Add the following code to connect the event handler in the class

definition:

6. this.btnUpdate.Click += new

EventHandler(this.btnUpdate_Click);

7. Press F5 to run the application.

8. Click Fill.

The application fills the data grids.

Tip You can drag the data grid’s column headings to widen them.

9. Click the CategoryName of the first row, and then change its value

from Beverages to Old Beverages.


10. Click Update.

The application updates the data source.

11. Click Fill to ensure that the change has been propagated to the data

source.

12. Close the application.

Handling DataAdapter Events

Other than the events caused by errors, the DataAdapter supports only two events:

OnRowUpdating and OnRowUpdated. These two events occur on either side of the

actual data source update, providing fine-grained control of the process.

OnRowUpdating Event

The OnRowUpdating event is raised after the Update method has set the parameter

values of the command to be executed but before the command is executed. The event

handler for this event receives an argument whose properties provide essential

information about the command that is about to be executed.

The class of the event arguments is defined by the Data Provider, so it will be either

OleDbRowUpdatingEventArgs or SqlRowUpdatingEventArgs if one of the .NET

Framework Data Providers is used. The properties of RowUpdatingEventArgs are shown

in Table 4-7.

Table 4-7: RowUpdatingEventArgs Properties

Property         Description
Command          The Data Command to be executed
Errors           The errors generated by the .NET Data Provider
Row              The DataRow to be updated
StatementType    The type of Command to be executed. The possible values are Select, Insert, Delete, and Update
Status           The UpdateStatus of the Command
TableMapping     The DataTableMapping used by the update

The Command property contains a reference to the actual Command object that will be

used to update the data source. Using this reference, you can, for example, examine the

Command’s CommandText property to determine the SQL that will be executed and

change it if necessary.

The StatementType property of the event argument defines the action that is to be

performed. The property is an enumeration that can evaluate to Select, Insert, Update, or

Delete. The StatementType property is read-only, so you cannot use it to change the

type of action to be performed.

The Row property contains a read-only reference to the DataRow to be propagated to

the data source, while the TableMapping property contains a reference to the

DataTableMapping that is being used for the update.

When the event handler is first called, the Status property, which is an UpdateStatus

enumeration, defines the status of the event. If it is ErrorsOccurred, the Errors property

will contain a collection of Errors.

You can set the Status property within the event handler to determine what action the

system is to take. In addition to ErrorsOccurred, which causes an exception to be thrown,

the possible exit status values are Continue, SkipAllRemainingRows, and

SkipCurrentRow. Continue, which is the default value, does exactly what you would

expect—it instructs the system to continue processing. SkipAllRemainingRows actually

discards the update to the current row, as well as any remaining unprocessed rows,

while SkipCurrentRow only cancels processing for the current row.
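As a hypothetical C# sketch (not part of the exercises that follow; the handler name is an assumption), a RowUpdating handler might skip only the rows that the provider flagged with errors:

    // Sketch: cancel the current row if errors were reported, but keep processing
    // the remaining rows.
    private void SkipRowsWithErrors(object sender,
        System.Data.SqlClient.SqlRowUpdatingEventArgs e)
    {
        if (e.Status == System.Data.UpdateStatus.ErrorsOccurred)
        {
            e.Status = System.Data.UpdateStatus.SkipCurrentRow;
        }
    }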

Respond to an OnRowUpdating Event

Visual Basic .NET

1. In the code editor, select daCategories in the ControlName list and

then select RowUpdating in the MethodName list.

Visual Studio displays the RowUpdating event handler template.

2. Add the following text to the Messages control to indicate that the

event has been triggered:

Me.txtMessages.Text &= vbCrLf & "Beginning Update…"

3. Press F5 to run the application, and then click Fill to fill the data grids.

4. Change the CategoryName for Category 1, which we changed to Old

Beverages in the previous exercise, back to Beverages.


5. Click Update.

The application updates the text in the Messages control.

6. Close the application.

Visual C# .NET

1. Add the following event handler in the code editor:

2. private void daCategories_RowUpdating(object sender,
3. System.Data.SqlClient.SqlRowUpdatingEventArgs e)
4. {
5. string strMsg;
6.
7. strMsg = "Beginning update…";
8. this.txtMessages.Text += strMsg;

}

The code adds text to the Messages control to indicate that the event has

been triggered.

9. Add the following code to connect the event handler in the class

definition:

10. this.daCategories.RowUpdating += new

11. System.Data.SqlClient.SqlRowUpdatingEventHandler

(this.daCategories_RowUpdating);


12. Press F5 to run the application, and then click Fill to fill the data

grids.

13. Change the CategoryName for Category 1, which we changed to

Old Beverages in the previous exercise, back to Beverages.

14. Click Update.

The application updates the text in the Messages control.

15. Close the application.

Examine the RowUpdatingEventArgs Properties

Visual Basic .NET

1. Add the following lines to the daCategories_RowUpdating event

handler that you created in the previous exercise:

2. Me.txtMessages.Text &= vbCrLf & ("Executing a command of type " _
& e.StatementType.ToString)

3. Press F5 to run the application, and then click Fill.

4. Change the CategoryName of Category 1 to New Beverages, and

then click Update.

The application updates the text in the Messages control.


5. Close the application.

Visual C# .NET

1. Change the daCategories_RowUpdating event handler that you
created in the previous exercise to read:
2. string strMsg;
3.
4. strMsg = "\nExecuting a command of type ";
5. strMsg += e.StatementType.ToString();
6.
7. this.txtMessages.Text += strMsg;

8.

9. Press F5 to run the application, and then click Fill.

10. Change the CategoryName of Category 1 to New Beverages, and

then click Update.

The application updates the text in the Messages control.

11. Close the application.


OnRowUpdated Event

The OnRowUpdated event is raised after the Update method executes the appropriate

command against the data source. The event handler for this event is either passed an

SqlRowUpdatedEventArgs or an OleDbRowUpdatedEventArgs argument, depending on

the Data Provider.

Either way, the event argument contains all of the same properties as the

RowUpdatingEventArgs argument, plus an additional read-only property, RecordsAffected,
which indicates the number of rows that were changed, inserted, or deleted by

the SQL command that was executed.

Respond to an OnRowUpdated Event

Visual Basic .NET

1. Select daCategories in the ControlName list and then select

RowUpdated in the MethodName list.

Visual Studio displays the RowUpdated event handler template.

2. Add the following text to the Messages control to indicate that the

event has been triggered:

Me.txtMessages.Text &= vbCrLf & "Update completed"

3. Press F5 to run the application, and then click Fill to fill the data grids.

4. Change the CategoryName for Category 1, which we changed to New

Beverages in the previous exercise, back to Beverages.

5. Click Update.

The application updates the text in the Messages control.


6. Close the application.

Visual C# .NET

1. Add the following code to add the RowUpdated event template to the

code editor:

2. private void daCategories_RowUpdated(object sender,

3. System.Data.SqlClient.SqlRowUpdatedEventArgs e)

4. {

5. string strMsg;

6.

7. strMsg = "\nUpdate Completed.";

8. this.txtMessages.Text += strMsg;

}

9. Add the following code to connect the event handler in the class

description:

10. this.daCategories.RowUpdated +=

11. new System.Data.SqlClient.SqlRowUpdatedEventHandler

(this.daCategories_RowUpdated);

12. Press F5 to run the application, and then click Fill to fill the data

grids.

13. Change the CategoryName for Category 1, which we changed to

New Beverages in the previous exercise, back to Beverages.


14. Click Update.

The application updates the text in the Messages control.

15. Close the application.

Examine the RowUpdatedEventArgs Properties

Visual Basic .NET

1. Add the following lines to the daCategories_RowUpdated event

handler that you created in the previous exercise:

2. Me.txtMessages.Text &= ", " & e.RecordsAffected.ToString & _

" record(s) updated."

3. Press F5 to run the application, and then click Fill.

4. Change the CategoryName of Category 1 to Beverages 2, and then

click Update.

The application updates the text in the Messages control.


5. Close the application.

Visual C# .NET

1. Change the daCategories_RowUpdated event handler that you

created in the previous exercise to read:

2. string strMsg;

3.

4. strMsg = "\nUpdate Completed.";

5. strMsg += ", " + e.RecordsAffected.ToString();

6. strMsg += " record(s) updated.";

7. this.txtMessages.Text += strMsg;

8. Press F5 to run the application, and then click Fill.

9. Change the CategoryName of Category 1 to Beverages 2, and then

click Update.

The application updates the text in the Messages control.

10. Close the application.


Chapter 4 Quick Reference

To create a DataAdapter in the Server Explorer: Drag a table into the form designer.

To create a DataAdapter using the Toolbox: Drag an OleDbDataAdapter or an SqlDataAdapter onto the form designer. Cancel the Data Adapter Configuration Wizard if you wish to configure the DataAdapter manually.

To create a DataAdapter in code: Declare the DataAdapter variable and the four Command object variables, and then instantiate them and assign the Command objects to the DataAdapter.

To preview the results of a DataAdapter: Select the DataAdapter in the form designer, and then click Preview Dataset in the Properties window.

Chapter 5: Transaction Processing in ADO.NET

Overview

In this chapter, you’ll learn how to:

§ Create a transaction

§ Create a nested transaction

§ Commit a transaction

§ Rollback a transaction


In the last few chapters, we’ve seen how ADO.NET data provider objects interact in the

process of editing and updating. In this chapter, we’ll complete our examination of data

providers in ADO.NET with an exploration of transaction processing.

Understanding Transactions

A transaction is a series of actions that must be treated as a single unit of work—either

they must all succeed, or they must all fail. The classic example of a transaction is the

transfer of funds from one bank account to another. To transfer the funds, an amount,

say $100, is withdrawn from one account and deposited in the other. If the withdrawal

were to succeed while the deposit failed, money would be lost into cyberspace. If the

withdrawal were to fail and the deposit succeed, money would be invented. Clearly, if

either action fails, they must both fail.

ADO.NET supports transactions through the Transaction object, which is created against

an open connection. Commands that are executed against the connection while the

transaction is pending must be enrolled in the transaction by assigning a reference to the

Transaction object to their Transaction property. Commands cannot be executed against

the Connection outside the transaction while it is pending.

If the transaction is committed, all of the commands that form a part of that transaction

will be permanently written to the data source. If the transaction is rolled back, all of the

commands will be discarded at the data source.

Creating Transactions

The Transaction object is implemented as part of the data provider. There is a version for

each of the intrinsic data providers: OleDbTransaction in the System.Data.OleDb

namespace and SqlTransaction in the System.Data.SqlClient namespace.

The SqlTransaction object is implemented using Microsoft SQL Server transactions—

creating a SqlTransaction maps directly to the BeginTransaction statement. The

OleDbTransaction is implemented within OLE DB. No matter which data provider you

use, you shouldn’t explicitly issue BeginTransaction commands on the database.

Creating New Transactions

Transactions are created by calling the BeginTransaction method of the Connection

object, which returns a reference to a Transaction object. BeginTransaction is

overloaded, allowing an IsolationLevel to optionally be specified, as shown in Table 5-1.

The Connection must be valid and open when BeginTransaction is called.

Table 5-1: Connection BeginTransaction Methods

BeginTransaction(): Begins a transaction.

BeginTransaction(IsolationLevel): Begins a transaction at the specified IsolationLevel.

Because SQL Server supports named transactions, the SqlClient data provider exposes

two additional versions of BeginTransaction, as shown in Table 5-2.

Table 5-2: Additional SQL BeginTransaction Methods

BeginTransaction(TransactionName): Begins a transaction with the name specified in the TransactionName string.

BeginTransaction(IsolationLevel, TransactionName): Begins a transaction at the specified IsolationLevel with the name specified in the TransactionName string.

ADO Unlike ADO, the ADO.NET Commit and Rollback methods are

exposed on the Transaction object, not the Connection object.

The optional IsolationLevel parameter to the BeginTransaction method specifies the

connection’s locking behavior. The possible values for IsolationLevel are shown in Table

5-3.

Table 5-3: Isolation Levels

Chaos: Pending changes from more highly ranked transactions cannot be overwritten.

ReadCommitted: Shared locks are held while the data is being read, but data can be changed before the end of the transaction.

ReadUncommitted: No shared locks are issued and no exclusive locks are honored.

RepeatableRead: Exclusive locks are placed on all data used in the query.

Serializable: A range lock is placed on the DataSet.

Unspecified: An existing isolation level cannot be determined.
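For example, the following C# fragment is a minimal sketch of the SqlClient-only overload from Table 5-2 combined with an explicit isolation level; the connection name cnNorthwind is illustrative rather than part of the sample project:

System.Data.SqlClient.SqlTransaction trnSerializable;

cnNorthwind.Open();

// Begin a named transaction at the Serializable isolation level.
trnSerializable = cnNorthwind.BeginTransaction(
    System.Data.IsolationLevel.Serializable, "UpdateCategories");

MessageBox.Show("Isolation Level: " + trnSerializable.IsolationLevel.ToString());

// Nothing was executed, so simply release the transaction and the connection.
trnSerializable.Rollback();
cnNorthwind.Close();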

Create a New Transaction

Visual Basic .NET

1. Open the Transactions project from the Microsoft Visual Studio .NET

Start Page or by using the File menu.

2. Double-click Transactions.vb to display the form in the form designer.

3. Double-click Create.

Visual Studio opens the code editor window and adds the Click event handler.

4. Add the following code to the procedure:

5. Dim strMsg As String

6. Dim trnNew As System.Data.OleDb.OleDbTransaction

7.

8. Me.cnAccessNwind.Open()

9. trnNew = Me.cnAccessNwind.BeginTransaction()

10. strMsg = "Isolation Level: "

11. strMsg &= trnNew.IsolationLevel.ToString

12. MessageBox.Show(strMsg)

Me.cnAccessNwind.Close()

The code creates a new Transaction using the default method, and then

displays its IsolationLevel in a message box.

13. Press F5 to run the application.

14. Click Load Data.

The application fills the DataSet and displays the Customers and Orders lists.

15. Click Create.

The application displays the transaction’s IsolationLevel in a message box.


16. Click OK in the message box, and then close the application.

Visual C# .NET

1. Open the Transactions project from the Visual Studio Start Page or by

using the File menu.

2. Double-click Transactions.cs to display the form in the form designer.

3. Double-click Create.

Visual Studio opens the code editor window and adds the Click event handler.

4. Add the following code to the procedure:

5. string strMsg;

6. System.Data.OleDb.OleDbTransaction trnNew;

7.

8. this.cnAccessNwind.Open();

9. trnNew = this.cnAccessNwind.BeginTransaction();

10. strMsg = "Isolation Level: ";

11. strMsg += trnNew.IsolationLevel.ToString();

12. MessageBox.Show(strMsg);

this.cnAccessNwind.Close();

The code creates a new Transaction using the default method, and then

displays its IsolationLevel in a message box.

13. Press F5 to run the application.


14. Click Load Data.

The application fills the DataSet and displays the Customers and Orders lists.

15. Click Create.

The application displays the transaction’s IsolationLevel in a message box.

16. Click OK in the message box, and then close the application.


Creating Nested Transactions

Although it isn’t possible to have two transactions on a single Connection, the

OleDbTransaction object supports nested transactions. (They aren’t supported on SQL

Server.)

ADO Multiple transactions on a single Connection, which were

supported in ADO, are no longer supported in ADO.NET.

The syntax for creating a nested transaction is similar to that for creating a first-level

transaction, as shown in Table 5-4. The difference is that nested transactions are

created by calling the Begin method on the Transaction object itself, not the

BeginTransaction method on the Connection.

All nested transactions must be committed or rolled back before the transaction

containing them is committed; however, if the parent (containing) transaction is rolled

back, the nested transactions will also be rolled back, even if they have previously been

committed.

Table 5-4: Transaction Begin Methods

Begin(): Begins a nested transaction.

Begin(IsolationLevel): Begins a nested transaction at the specified IsolationLevel.
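As a compact illustration (a sketch rather than part of the exercise that follows), the C# fragment below nests one OleDbTransaction inside another on the sample project's cnAccessNwind connection; rolling back the parent discards the child's work even though the child was committed first:

System.Data.OleDb.OleDbTransaction trnParent;
System.Data.OleDb.OleDbTransaction trnChild;

cnAccessNwind.Open();

trnParent = cnAccessNwind.BeginTransaction();
trnChild = trnParent.Begin();       // nested transaction (Table 5-4)

// ... commands enrolled in trnChild would execute here ...

trnChild.Commit();                  // commits only relative to the parent
trnParent.Rollback();               // discards the child's changes as well

cnAccessNwind.Close();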

Create a Nested Transaction

Visual Basic .NET

1. Select btnNested in the code editor’s ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

3. Dim strMsg As String

4. Dim trnMaster As System.Data.OleDb.OleDbTransaction

5. Dim trnChild As System.Data.OleDb.OleDbTransaction

6.

7. Me.cnAccessNwind.Open()

8.

9. trnMaster = Me.cnAccessNwind.BeginTransaction

10.

11. trnChild = trnMaster.Begin

12. strMsg = "Child Isolation Level: "

13. strMsg &= trnChild.IsolationLevel.ToString

14. MessageBox.Show(strMsg)

15.

Me.cnAccessNwind.Close()

The code first creates a transaction, trnMaster, on the Connection object. It

then creates a second, nested transaction, trnChild, on the trnMaster

transaction, and displays its IsolationLevel in a message box.


16. Press F5 to run the application.

17. Click Load Data.

18. Click Create Nested.

The application displays the child transaction’s IsolationLevel in a message

box.

19. Click OK in the message box, and then close the application.

Visual C# .NET

1. Add the following procedure to the code:

2. private void btnNested_Click(object sender, System.EventArgs e)

3. {

4. string strMsg;

5. System.Data.OleDb.OleDbTransaction trnMaster;

6. System.Data.OleDb.OleDbTransaction trnChild;

7.

8. this.cnAccessNwind.Open();

9.

10. trnMaster =

this.cnAccessNwind.BeginTransaction();

11.

12. trnChild = trnMaster.Begin();

13. strMsg = "Child Isolation Level: ";

14. strMsg += trnChild.IsolationLevel.ToString();

15. MessageBox.Show(strMsg);

16.

17. this.cnAccessNwind.Close();

}

The code first creates a transaction, trnMaster, on the Connection object. It

then creates a second, nested transaction, trnChild, on the trnMaster

transaction, and displays its IsolationLevel in a message box.

18. Add the code to bind the click handler to the top of the

frmTransactions() sub:

19. this.btnNested.Click += new

EventHandler(this.btnNested_Click);

20. Press F5 to run the application.

21. Click Load Data.

22. Click Create Nested.

The application displays the child transaction’s IsolationLevel in a message

box.


23. Click OK in the message box, and then close the application.

Using Transactions

There are three steps to using transactions after they are created. First they are

assigned to the commands that will participate in them, then the commands are

executed, and finally the transaction is closed by either committing it or rolling it back.

Assigning Transactions to a Command

Once a transaction has been begun on a connection, all commands executed against

that connection must participate in that transaction. Unfortunately, this doesn’t happen

automatically—you must set the Transaction property of the command to reference the

transaction.

However, once the transaction is committed or rolled back, the transaction reference in

any commands that participated in the transaction will be reset to Nothing, so it isn’t

necessary to do this step manually.

Committing and Rolling Back Transactions

The final step in transaction processing is to commit or roll back the changes that were

made by the commands participating in the transaction. If the transaction is committed,

all of the changes will be accepted in the data source. If it is rolled back, all of the

changes will be discarded, and the data source will be returned to the state it was in

before the transaction began.

Transactions are committed using the transaction’s Commit method and rolled back

using the transaction’s Rollback method, neither of which takes any parameters. The

actions are typically wrapped in a Try…Catch block.
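Putting the steps together, the following C# fragment is a minimal sketch with illustrative names (cn, da, and ds are stand-ins, not the sample project's controls):

System.Data.SqlClient.SqlTransaction trn;

cn.Open();
trn = cn.BeginTransaction();
da.InsertCommand.Transaction = trn;      // enroll the command in the transaction

try
{
    da.Update(ds.Tables["Orders"]);
    trn.Commit();                        // make the changes permanent
}
catch (System.Data.SqlClient.SqlException)
{
    trn.Rollback();                      // discard everything since BeginTransaction
}
finally
{
    cn.Close();
}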

Commit a Transaction

Visual Basic .NET

1. Select btnCommit in the ControlName list, and then select Click in the

MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following lines to the procedure:

3. Dim trnNew As System.Data.OleDb.OleDbTransaction

4.

5. AddRows("AAAA1")

6.

7. Me.cnAccessNwind.Open()

8. trnNew = Me.cnAccessNwind.BeginTransaction()

9. Me.daCustomers.InsertCommand.Transaction = trnNew

10. Me.daOrders.InsertCommand.Transaction = trnNew

11. Try

12.

Me.daCustomers.Update(Me.dsCustomerOrders1.CustomerList)

13. Me.daOrders.Update(Me.dsCustomerOrders1.Orders)

14. trnNew.Commit()

15. MessageBox.Show("Transaction Committed")

16. Catch err As System.Data.OleDb.OleDbException

17. trnNew.Rollback()

18. MessageBox.Show(err.Message.ToString)

19. Finally

20. Me.cnAccessNwind.Close()

End Try

The AddRows procedure, which is provided in Chapter 1, adds a Customer

row and an Order for that Customer.

Within a Try…Catch block, the code commits the two Update commands if

they succeed, and then displays a message confirming that the transaction

has completed without errors.

21. Press F5 to run the application.

22. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

23. Click Commit.

The application displays a message box confirming the updates.

24. Click OK in the message box, and then click Load Data to confirm

that the rows have been added.


25. Close the application.

Visual C# .NET

1. Add the following procedure to the code:

2. private void btnCommit_Click(object sender, System.EventArgs

e)

3. {

4. System.Data.OleDb.OleDbTransaction trnNew;

5.

6. AddRows("AAAA1");

7.

8. this.cnAccessNwind.Open();

9. trnNew = this.cnAccessNwind.BeginTransaction();

10. this.daCustomers.InsertCommand.Transaction =

trnNew;

11. this.daOrders.InsertCommand.Transaction =

trnNew;

12. try

13. {

this.daCustomers.Update(this.dsCustomerOrders1.CustomerList);

14.

this.daOrders.Update(this.dsCustomerOrders1.Orders);

15. trnNew.Commit();

16. MessageBox.Show("Transaction Committed");

17. }

18. catch (System.Data.OleDb.OleDbException err)

19. {

20. trnNew.Rollback();

21. MessageBox.Show(err.Message.ToString());

22. }

23. finally

24. {

25. this.cnAccessNwind.Close();

26. }

}

The AddRows procedure, which is provided in Chapter 1, adds a Customer

row and an Order for that Customer.

Within a Try…Catch block, the code commits the two Update commands if

they succeed, and then displays a message confirming that the transaction

has completed without errors.

27. Add the code to bind the click handler to the top of the

frmTransactions() sub:


this.btnCommit.Click += new EventHandler(this.btnCommit_Click);

28. Press F5 to run the application.

29. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

30. Click Commit.

The application displays a message box confirming the updates.

31. Click OK in the message box, and then click Load Data to confirm

that the rows have been added.

32. Close the application.

Rollback a Transaction

Visual Basic .NET

1. Select btnRollback in the ControlName list, and then select Click in

the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following lines to the procedure:

3. Dim trnNew As System.Data.OleDb.OleDbTransaction

4.

5. AddRows("AAAA2")

6.

7. Me.cnAccessNwind.Open()


8. trnNew = Me.cnAccessNwind.BeginTransaction()

9. Me.daCustomers.InsertCommand.Transaction = trnNew

10. Me.daOrders.InsertCommand.Transaction = trnNew

11. Try

12. Me.daOrders.Update(Me.dsCustomerOrders1.Orders)

13.

Me.daCustomers.Update(Me.dsCustomerOrders1.CustomerList)

14. trnNew.Commit()

15. MessageBox.Show("Transaction Committed")

16. Catch err As System.Data.OleDb.OleDbException

17. trnNew.Rollback()

18. MessageBox.Show(err.Message.ToString)

19. Finally

20. Me.cnAccessNwind.Close()

End Try

This procedure is almost identical to the Commit procedure in the previous

exercise. However, because the order of the Updates is reversed so that the

Order is added before the Customer, the first Update will fail and a message

box will display the error.

21. Press F5 to run the application.

22. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

23. Click Rollback.The application displays a message box explaining

the error.


24. Click OK to close the message box, and then click Load Data to

confirm that the rows have not been added.

25. Close the application.

Visual C# .NET

1. Add the following procedure to the code:

2. private void btnRollback_Click(object sender, System.EventArgs

e)

3. {

4. System.Data.OleDb.OleDbTransaction trnNew;

5.

6. AddRows("AAAA2");

7.

8. this.cnAccessNwind.Open();

9. trnNew = this.cnAccessNwind.BeginTransaction();

10. this.daCustomers.InsertCommand.Transaction =

trnNew;

11. this.daOrders.InsertCommand.Transaction =

trnNew;

12. try

13. {

14.

this.daOrders.Update(this.dsCustomerOrders1.Orders);

15.

this.daCustomers.Update(this.dsCustomerOrders1.CustomerList);

16. trnNew.Commit();

17. MessageBox.Show("Transaction Committed");

18. }

19. catch (System.Data.OleDb.OleDbException err)

20. {

21. trnNew.Rollback();

22. MessageBox.Show(err.Message.ToString());

23. }

24. finally

25. {

26. this.cnAccessNwind.Close();

27. }

}

This procedure is almost identical to the Commit procedure in the previous

exercise. However, because the order of the Updates is reversed so that the

Order is added before the Customer, the first Update will fail and a message

box will display the error.


28. Add the code to bind the click handler to the top of the

frmTransactions() sub:

this.btnRollback.Click += new EventHandler(this.btnRollback_Click);

29. Press F5 to run the application.

30. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

31. Click Rollback.

The application displays a message box explaining the error.

32. Click OK to close the message box, and then click Load Data to

confirm that the rows have not been added.

33. Close the application.


Chapter 5 Quick Reference

To create a transaction: Call the BeginTransaction method of the Connection object: myTrans = myConn.BeginTransaction()

To create a nested transaction: Call the Begin method of the Transaction object: nestedTrans = myTrans.Begin()

To commit a transaction: Call the Commit method of the Transaction: myTrans.Commit()

To roll back a transaction: Call the Rollback method of the Transaction: myTrans.Rollback()

Part III: Manipulating Data

Chapter 6: The DataSet

Chapter 7: The DataTable

Chapter 8: The DataView

Chapter 6: The DataSet

Overview

In this chapter, you’ll learn how to:

§ Create Typed and Untyped DataSets

§ Add DataTables to DataSets

§ Add DataRelations to DataSets

§ Clone and copy DataSets

Beginning with this chapter, we’ll move away from the ADO.NET Data Providers to

examine the objects that support the manipulation of data in your applications. We’ll start

with the DataSet, the memory-resident structure that represents relational data.

Note In this chapter, we’ll begin an application that we’ll continue to

work with in subsequent chapters.

Understanding DataSets

The structure of the DataSet is shown in the following figure.


ADO.NET supports two distinct kinds of DataSets: Typed and Untyped. Architecturally,

an Untyped DataSet is a direct instantiation of the System.Data.DataSet object, while a

Typed DataSet is a distinct class that inherits from System.Data.DataSet.

In functional terms, a Typed DataSet exposes its tables, and the columns within them, as

object properties. This makes manipulating the DataSet far simpler syntactically because

you can reference tables and columns directly by their names.

For example, given a Typed DataSet called dsOrders that contains a DataTable called

OrderHeaders, you can reference the value of the OrderID column in the first row as:

Me.dsOrders.OrderHeaders(0).OrderID

If you were working with an Untyped DataSet with the same structure, however, you

would need to reference the OrderHeaders DataTable and OrderID Column through the

Tables and Item collections, respectively:

Me.dsOrders.Tables("OrderHeaders").Rows(0).Item("OrderID")

If you’re working in Microsoft Visual Studio, the Visual Studio code editor supports a

Typed DataSet’s tables and columns through IntelliSense, which makes the reference

even easier.

The Typed DataSet provides another important benefit: it allows compile-time type

checking of data values, which is referred to as strong typing. For example, assuming

that OrderTotal is numeric, the compiler would generate an error in the following line:

Me.dsOrders.OrderHeaders(0).OrderTotal = "Hello, world"

But if you were working with an Untyped DataSet, the following line would compile

without error:

Me.dsOrders.Tables("OrderHeaders").Rows(0).Item("OrderTotal") = "Hello, world"

Despite the advantages of the Typed DataSet, there are times when you’ll need an

Untyped DataSet. For example, your application may receive a DataSet from a middletier

component or a Web service, and you won’t know the structure of the DataSet until

run time. Or you may need to reconfigure a DataSet’s schema at run time, in which case

regenerating a Typed DataSet would be an unnecessary overhead.
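In that situation the schema can simply be walked at run time. The following C# fragment is a minimal sketch; dsUnknown stands for whatever DataSet the component or Web service returned:

foreach (System.Data.DataTable dt in dsUnknown.Tables)
{
    string strColumns = "";

    foreach (System.Data.DataColumn col in dt.Columns)
    {
        strColumns += col.ColumnName + " (" + col.DataType.Name + ")  ";
    }

    MessageBox.Show("Table " + dt.TableName + ": " + strColumns);
}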

Creating DataSets

As always, Visual Studio provides several different methods for creating DataSets, both

interactively and programmatically.


Creating Typed DataSets

Roadmap We’ll explore the XML Schema Designer in Chapter 13.

In previous chapters, we created Typed DataSets from DataAdapters by using the

Generate Dataset command. In this chapter, we’ll use the Component Designer. You

can also create them programmatically and by using the XML Schema Designer. We’ll

examine both of those techniques in detail in Part V. We will, however, use the Schema

Designer in this chapter to confirm our changes.

Create a Typed DataSet Using the Component Designer

1. Open the DataSets project from the Start page or the Project menu.

2. Double-click DataSets.vb (or DataSets.cs, if you’re using C#) in the

Solution Explorer.

Visual Studio opens the form in the form designer.

3. Select the daCustomers DataAdapter in the Component Designer.

4. Choose Generate Dataset from the Data menu.

The Generate Dataset dialog box opens.


5. In the New text box, change the name of the new DataSet to

dsMaster.

6. Click OK.

Visual Studio creates a Typed DataSet and adds an instance of it to the

Component Designer.

The DataSet object’s Tables collection, being a collection, can contain multiple

DataTables, and the Visual Studio Generate Dataset dialog box allows you to add the

result sets returned by a DataAdapter to an existing DataSet.

Because all of the result sets returned by the defined DataAdapters are displayed in the

Generate Dataset dialog box, you can add them all in a single operation by selecting the

check boxes next to their names.

Add a DataTable to an Existing Typed DataSet

1. Select daOrders in the Component Designer.


2. Choose Generate dataset from the Data Menu.

Visual Studio displays the Generate dataset dialog.

3. Verify that the default option to add the DataTable to the existing

dsMaster DataSet is selected, and then click OK.

Visual Studio adds the DataTable to dsMaster.

4. Select dsMaster in the Component Designer, and then click View

Schema at the bottom of the Properties window.

Visual Studio opens the XML Schema Designer.

5. Verify that the DataSet contains both DataTables, and then close the

XML Schema Designer.

Creating Untyped DataSets

You can create Untyped DataSets both interactively in Visual Studio and

programmatically at run time. Within Visual Studio, you can create both Typed and

Untyped DataSets by dragging the DataSet control from the Toolbox.

Create an Untyped DataSet Using Visual Studio

1. Drag a DataSet control from the Data tab of the Toolbox onto the form.

Visual Studio displays the Add Dataset dialog.


2. Select the Untyped dataset option, and then click OK.

Visual Studio adds the DataSet to the Component Designer.

3. In the Properties window, change both the DataSetName property and

the Name property to dsUntyped.

The DataSet object supports three versions of the usual New constructor to create an

Untyped DataSet in code, as shown in Table 6-1. Only the first two are typically used in

application programs.

Table 6-1: DataSet Constructors

New(): Creates an Untyped DataSet with the default name NewDataSet.

New(dsName): Creates an Untyped DataSet with the name passed in the dsName string.

New(SerializationInfo, StreamingContext): Used internally by the .NET Framework.

Create an Untyped DataSet at Run Time

Visual Basic .NET

1. Press F7 to open the code editor.

2. Expand the region labeled Windows Form Designer generated code,

and then scroll to the bottom of the class-level declarations.

3. Add the following declaration to the end of the section:

Dim dsEmployees As New System.Data.DataSet("dsEmployees")

Visual C# .NET

1. Press F7 to open the code editor.

2. Add the following declaration to the beginning of the class declaration:

private System.Data.DataSet dsEmployees;

3. Add the following instantiation to the frmDataSets sub, after the call to

InitializeComponent:

dsEmployees = new System.Data.DataSet("dsEmployees");

DataSet Properties

The properties exposed by the DataSet object are shown in Table 6-2.

Table 6-2: DataSet Properties

CaseSensitive: Determines whether comparisons are case-sensitive.

DataSetName: The name used to reference the DataSet in code.

DefaultViewManager: Defines the default filtering and sorting order of the DataSet.

EnforceConstraints: Determines whether constraint rules are followed during changes.

ExtendedProperties: Custom user information.

HasErrors: Indicates whether any of the DataRows in the DataSet contain errors.

Locale: The locale information to be used when comparing strings.

Namespace: The namespace used when reading or writing an XML document.

Prefix: An XML prefix used as an alias for the namespace.

Relations: A collection of DataRelation objects that define the relationships of the DataTables within the DataSet.

Tables: The collection of DataTables contained in the DataSet.

Roadmap We’ll examine the DataSet’s XML-related methods in Chapter

14.


The majority of properties supported by the DataSet are related to its interaction with

XML. We’ll examine these properties in Chapter 14. Of the non-XML properties, the two

most important are the Tables and Relations collections, which contain and define the

data maintained within the DataSet.

The DataSet Tables Collection

Roadmap We’ll examine the properties and methods of DataTables in

detail in Chapter 7.

For Typed DataSets, the contents of the DataSet’s Tables collection are defined by the

DataSet schema. For Untyped DataSets, you can create the tables and their columns

either programmatically or through the Visual Studio designers.

Add a DataTable to an Untyped DataSet Using Visual Studio

1. Select the dsUntyped DataSet in the form designer.

2. In the Properties window, select the Tables property, and then click

the ellipsis button.

The Tables Collection Editor opens.

3. Click Add.

Visual Studio adds a new table called Table1 to the DataSet.

4. Change both the Name and TableName properties to dtMaster.


5. Select the Columns property, and then click the ellipsis button.

The Columns Collection Editor opens.

6. Click Add.

Visual Studio adds a column named Column1 to the DataTable.

7. Set the column’s properties to the values shown in the following table.

Property Value

AllowDbNull False

AutoIncrement True

Caption MasterID

ColumnName MasterID

DataType System.Int32

Name MasterID


8.

9. Click Add again, and then set the new column’s properties to the

values shown in the following table.

Property Value

Caption MasterValue

ColumnName MasterValue

Name MasterValue

10.

11. Click Close.

The Columns Collection Editor closes.

12. In the Tables Collection Editor, click Add to add a second table to the

DataSet.

13. Change both the Name and TableName properties to dtChild.


14. Click the Columns property, and then click the ellipsis button.

The Columns Collection Editor opens.

15. Click Add.

Visual Studio adds a column named Column1 to the DataTable.

16. Set the column’s properties to the values shown in the following table.

Property Value

AllowDbNull False

AutoIncrement True

Caption ChildID

ColumnName ChildID

DataType System.Int32

Name ChildID

17.


18. Click Add again, and then set the column’s properties to the values

shown in the following table.

Property Value

AllowDbNull False

Caption MasterLink

ColumnName MasterLink

DataType System.Int32

Name MasterLink

19.

20. Click Add again, and then set the new column’s properties to the

values shown in the following table.

Property Value

Caption ChildValue

ColumnName ChildValue

Name ChildValue


21.

22. Click Close.

The Columns Collection Editor closes.

23. Click Close on the Tables Collection Editor.

Add a DataTable to an Untyped DataSet at Run Time

Visual Basic .NET

1. In the code editor window, select btnTable in the ControlName list,

and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create the Employees table and its columns:

3. Dim strMessage As String

4.

5. 'Create the table

6. Dim dtEmployees As System.Data.DataTable

7. dtEmployees = Me.dsEmployees.Tables.Add("Employees")

8.

9. 'Add the columns

10. dtEmployees.Columns.Add("EmployeeID", _

11. Type.GetType("System.Int32"))

12. dtEmployees.Columns.Add("FirstName", _

13. Type.GetType("System.String"))

14. dtEmployees.Columns.Add("LastName", _

15. Type.GetType("System.String"))

16.

17. 'Fill the DataSet

18. Me.daEmployees.Fill(Me.dsEmployees.Tables("Employees"))

19. strMessage = "The first employee is "

20. strMessage &= _

21. Me.dsEmployees.Tables("Employees").Rows(0).Item("LastName")

MessageBox.Show(strMessage)

22. Press F5 to run the application.

23. Click CreateTable.

The application displays a message box containing the last name of the first

employee.

24. Click OK to close the message box, and then close the application.

Visual C# .NET

1. In the form designer, double-click the Create Table button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to create the Employees table and its columns:

3. string strMessage;

4.

5. // Create the table

6. System.Data.DataTable dtEmployees;

7. dtEmployees = this.dsEmployees.Tables.Add("Employees");

8.

9. //Add the columns

10. dtEmployees.Columns.Add("EmployeeID",

Type.GetType("System.Int32"));

11. dtEmployees.Columns.Add("FirstName",

Type.GetType("System.String"));

12. dtEmployees.Columns.Add("LastName",

Type.GetType("System.String"));

13.

14. //Fill the dataset

15. this.daEmployees.Fill(this.dsEmployees.Tables["Employees"]);

16.

17. strMessage = "The first employee is ";

18. strMessage +=

this.dsEmployees.Tables["Employees"].Rows[0]["LastName"];

MessageBox.Show(strMessage);

19. Press F5 to run the application.

20. Click CreateTable.

The application displays a message box containing the last name of the first

employee.

21. Click OK to close the message box, and then close the application.


DataSet Relations

While the DataSet’s Tables collection defines the structure of the data stored in a

DataSet, the Relations collection defines the relationships between the DataTables. The

Relations collection contains zero or more DataRelation objects, each one representing

the relationship between two tables.

As we’ll see in the next chapter, the DataRelation object allows you to easily move

between parent and child rows—given a parent, you can find all the related children, or

given a child, you can find its parent row. DataRelation objects also provide a

mechanism for enforcing relational integrity through their ChildKeyConstraint and

ParentKeyConstraint properties.

Important Even if constraints are established in the DataRelation

object, they will be enforced only if the DataSet’s

EnforceConstraints property is True.
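Once a DataRelation is in place, navigation is a matter of calling GetChildRows or GetParentRow on a DataRow. The following C# fragment is a minimal sketch that assumes the CustomerOrders relation created at run time later in this chapter, with the dsMaster1 DataSet already filled:

// From a parent row to its children...
System.Data.DataRow drCustomer = dsMaster1.Tables["CustomerList"].Rows[0];
MessageBox.Show(drCustomer.GetChildRows("CustomerOrders").Length.ToString() +
    " related order rows");

// ...and from a child row back to its parent.
System.Data.DataRow drOrder = dsMaster1.Tables["OrderTotals"].Rows[0];
MessageBox.Show("Parent customer: " +
    drOrder.GetParentRow("CustomerOrders")["CustomerID"].ToString());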

Add a DataRelation to an Untyped DataSet Using Visual Studio

1. Select the dsUntyped DataSet in the Component Designer.

2. In the Properties window, select the Relations property, and then click

the ellipsis button.

The Relations Collection Editor opens.

3. Click Add.

The Relation dialog box opens.


4. Change the name of the relation to MasterChild, the Key Column to

MasterID, and the Foreign Key Column to MasterLink.

5. Click OK.

Visual Studio adds the DataRelation to the DataSet.

6. Click Close.

Roadmap We’ll discuss the XML Schema Designer in Chapter 13.

The Visual Studio Relations Collection Editor is available only for Untyped DataSets. For

Typed DataSets, you can use the XML Schema Designer, which we’ll examine in

Chapter 13, or you can add DataRelations programmatically. You can, of course, also

add DataRelations to Untyped DataSets at run time.

Add a DataRelation to a Dataset at Run Time

Visual Basic .NET

1. In the code editor, select btnRelation in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.


2. Add the following code to create the DataRelation:

3. Dim strMessage As String

4.

5. 'Add a new relation

6. Me.dsMaster1.Relations.Add("CustomerOrders", _

7. Me.dsMaster1.CustomerList.CustomerIDColumn, _

8. Me.dsMaster1.OrderTotals.CustomerIDColumn)

9.

10. strMessage = "The name of the DataRelation is "

11. strMessage &=

Me.dsMaster1.Relations(0).RelationName.ToString

12. MessageBox.Show(strMessage)

13. Press F5 to run the application.

14. Click Create Relation.

The application adds the DataRelation, and then displays a message box

containing the name of the DataRelation.

15. Click OK to close the message box, and then close the application.

Visual C# .NET

1. In the form designer, double-click the Create Relation button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to create the DataRelation:


3. string strMessage;

4.

5. //Add a new relation

6. this.dsMaster1.Relations.Add("CustomerOrders",

7. this.dsMaster1.CustomerList.CustomerIDColumn,

8. this.dsMaster1.OrderTotals.CustomerIDColumn);

9.

10. strMessage = "The name of the DataRelation is ";

11. strMessage +=

this.dsMaster1.Relations[0].RelationName.ToString();

MessageBox.Show(strMessage);

12. Press F5 to run the application.

13. Click Create Relation.

The application adds the DataRelation, and then displays a message box

containing the name of the DataRelation.

14. Click OK to close the message box, and then close the application.

DataSet Methods

The primary methods supported by the DataSet object are listed in Table 6-3. Like the

DataSet’s properties, the majority of its methods are related to its interaction with XML

and will be examined in Part V.

Roadmap We’ll examine the relationship between ADO.NET and XML

in Part V.

Table 6-3: Primary DataSet Methods

AcceptChanges: Commits all pending changes to the DataSet.

Clear: Empties all the tables in the DataSet.

Clone: Copies the structure of a DataSet.

Copy: Copies the structure and contents of a DataSet.

GetChanges: Returns a DataSet containing only the changed rows in each of its tables.

GetXml: Returns an XML representation of the DataSet.

GetXmlSchema: Returns an XSD representation of the DataSet’s schema.

HasChanges: Returns a Boolean value indicating whether the DataSet has pending changes.

InferXmlSchema: Infers a schema from an XML TextReader or file.

Merge: Combines two DataSets.

ReadXml: Reads an XML schema and data into the DataSet.

ReadXmlSchema: Reads an XML schema into the DataSet.

RejectChanges: Rolls back all changes pending in the DataSet.

Reset: Returns the DataSet to its original state.

WriteXml: Writes an XML schema and data from the DataSet.

WriteXmlSchema: Writes the DataSet structure as an XML schema.

Roadmap We’ll examine the HasChanges, GetChanges,

AcceptChanges and RejectChanges methods in Chapter 9.

HasChanges, GetChanges, AcceptChanges, RejectChanges, and Merge are used when

updating the DataSet’s Tables collection, and we’ll examine those in Chapter 9.

That leaves only three methods: Clear, which we’ve used extensively already; Clone,

which creates an empty copy of the DataSet; and Copy, which creates a complete copy

of the DataSet and its data.

Cloning a DataSet

The Clone method creates an exact duplicate of a DataSet’s structure, including its Tables,

Relations, and constraints, but not its data.

Clone a DataSet

Visual Basic .NET

1. In the code editor, select btnClone in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to clone the record set:

3. Dim strMessage As String

4. Dim dsClone As System.Data.DataSet

5.

6. dsClone = Me.dsMaster1.Clone()

7. strMessage = "The cloned dataset has "

8. strMessage &= dsClone.Tables.Count.ToString

9. strMessage &= " Tables."

10. MessageBox.Show(strMessage)

11. Press F5 to run the application.

12. Click Clone DataSet.

The application displays a message box containing the number of tables in

the new DataSet.


13. Close the application.

Visual C# .NET

1. In the form designer, double-click the Clone DataSet button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to clone the record set:

3. string strMessage;

4. System.Data.DataSet dsClone;

5.

6. dsClone = this.dsMaster1.Clone();

7.

8. strMessage = "The cloned dataset has ";

9. strMessage += dsClone.Tables.Count.ToString();

10. strMessage += " tables.";

MessageBox.Show(strMessage);

11. Press F5 to run the application.

12. Click Clone DataSet.

The application displays a message box containing the number of tables in

the new DataSet.

13. Close the application.


Copying a DataSet

Unlike the Clone method, which duplicates only the structure of a DataSet, the Copy

method copies both its structure and its data.

Copy a DataSet

Visual Basic .NET

1. In the code editor, select btnCopy in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to copy the DataSet:

3. Dim strMessage As String

4. Dim dsCopy As System.Data.DataSet

5.

6. 'Fill the original dataset

7. Me.daCustomers.Fill(Me.dsMaster1.CustomerList)

8.

9. dsCopy = Me.dsMaster1.Copy

10. strMessage = "The copied dataset has "

11. strMessage &= _

dsCopy.Tables("CustomerList").Rows.Count.ToString

strMessage &= " rows in the CustomerList."

MessageBox.Show(strMessage)

12. Press F5 to run the application.

13. Click Copy DataSet.

Visual Studio displays a message box containing the number of rows in the

CustomerList table.

14. Click OK to close the message box, and then close the application.

Visual C# .NET

1. In the form designer, double-click the Copy DataSet button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to copy the DataSet:

3. string strMessage;

4. System.Data.DataSet dsCopy;

5.

6. //Fill the original dataset

7. this.daCustomers.Fill(this.dsMaster1.CustomerList);

8.

9. dsCopy = this.dsMaster1.Copy();

10. strMessage = "The copied dataset has ";

11. strMessage +=

dsCopy.Tables["CustomerList"].Rows.Count.ToString();

12. strMessage += " rows in the CustomerList.";

MessageBox.Show(strMessage);

13. Press F5 to run the application.

14. Click Copy DataSet.

Visual Studio displays a message box containing the number of rows in the

CustomerList table.

15. Click OK to close the message box, and then close the application.

Chapter 6 Quick Reference

To create a Typed DataSet using the Component Designer: Select a DataAdapter, and then choose Generate Dataset from the Data menu.

To create an Untyped DataSet using Visual Studio: Drag a DataSet control from the Data tab of the Toolbox onto the form.

To create an Untyped DataSet at run time: Use the New method of the DataSet object: myDs = New System.Data.DataSet()

To add a DataTable to an Untyped DataSet using Visual Studio: In the Properties window for the DataSet, click the Tables property, and then click the ellipsis button.

To add a DataTable to an Untyped DataSet at run time: Use the Add method of the DataSet’s Tables collection, and then add columns with the DataTable’s Columns collection: myTable.Columns.Add("Name", Type.GetType("type"))

To add a DataRelation to an Untyped DataSet using Visual Studio: In the Properties window, click the Relations property, and then click the ellipsis button.

To add a DataRelation to a DataSet at run time: Use the Add method of the DataSet’s Relations collection: myDS.Relations.Add("Name", ParentCol, ChildCol)

To clone a DataSet: Use the Clone method: newDS = myDS.Clone()

To copy a DataSet: Use the Copy method: newDS = myDS.Copy()

Chapter 7: The DataTable

Overview

In this chapter, you’ll learn how to:

§ Create an independent DataTable at run time

§ Add a DataTable to an existing DataSet

§ Add a PrimaryKey constraint by using the FillSchema method

§ Create a calculated column in a DataTable

§ Add a new row to the Rows collection

§ Display the RowState of a DataRow

§ Add a ForeignKey constraint to a DataTable

§ Add a UniqueConstraint to a DataTable

§ Display a subset of rows within a DataTable

§ Retrieve data related to the current DataRow

We’ve been working with DataTables in the previous several chapters, but in this

chapter, we’ll take a detailed look at their structure, properties, and methods.

Understanding DataTables

Remember that we defined DataSets as an in-memory representation of relational data.

DataTables contain the actual data. They can exist as part of the DataSet’s Tables

collection or can be created independently.

As we’ll see, although the DataTable has properties of its own, it functions primarily as a

container for three collections: the Columns collection, which defines the structure of the


table; the Rows collection, which contains the data itself; and the Constraints collection,

which works in conjunction with the DataTable’s PrimaryKey property to enforce integrity

rules on the data.
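The following C# fragment is a minimal sketch of the three collections working together; all of the names are illustrative:

System.Data.DataTable dtDemo = new System.Data.DataTable("Demo");

// Columns define the structure...
System.Data.DataColumn colID = dtDemo.Columns.Add("ID", typeof(int));
dtDemo.Columns.Add("Name", typeof(string));

// ...setting PrimaryKey adds a UniqueConstraint to the Constraints collection...
dtDemo.PrimaryKey = new System.Data.DataColumn[] { colID };

// ...and Rows holds the data itself.
dtDemo.Rows.Add(new object[] { 1, "First row" });

MessageBox.Show(dtDemo.Constraints.Count.ToString() + " constraint(s), " +
    dtDemo.Rows.Count.ToString() + " row(s).");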

Creating DataTables

In previous chapters, we used a number of techniques to create DataTables as part of a

DataSet—we used the Fill method of the DataAdapter, the Add method of the DataSet,

and the Table Collection Editor that’s part of Microsoft Visual Studio .NET. Tables can

also be created for Typed DataSets by using the XML Schema Designer in Visual

Studio, as we’ll see in Part V.

In this chapter, we’ll concentrate on creating DataTables at run time, using the DataSet’s

Add method and the DataTable’s New constructor.

Roadmap Run-time DataTables can also be created by using the

DataSet’s ReadXML, ReadXMLSchema, and

InferXmlSchema methods. We’ll examine those in Chapter

14.

Creating Independent DataTables

Although DataTables are most often used as part of a DataSet, they can be created

independently. You might want to create an independent DataTable to provide data for a

bound control, for example, or simply so that it can be configured before being added to

the DataSet.

The three forms of the DataTable’s New constructor are shown in Table 7-1. Of these,

only the first two are typically used in application programs.

Table 7-1: DataTable Constructors

New(): Creates a new DataTable.

New(TableName): Creates a new DataTable with the name specified in the TableName string.

New(SerializationInfo, StreamingContext): Used internally by the .NET Framework.

Create an Independent DataTable Object at Run Time

Visual Basic .NET

1. Open the DataTables project on the Start Page or from the Project

menu.

2. Double-click DataTables.vb in the Solution Explorer.

Visual Studio opens the form designer.


3. On the form, double-click Add Table.

Visual Studio adds the Click event handler template to the code.

4. Add the following code to create a DataTable, and then set its name

to Employees:

5. Dim strMessage As String

6.

7. 'Create the table

8. Dim dtEmployees As New System.Data.DataTable("Employees")

9.

10. strMessage = "The table name is "

11. strMessage &= dtEmployees.TableName.ToString

MessageBox.Show(strMessage)

This code uses the New(tableName) version of the constructor to create a

DataTable named dtEmployees, and then displays the table name in a

message box.

12. Press F5 to run the application.

13. Click Add Table.

The application displays a message box containing the name of the table.


14. Close the application.

Visual C# .NET

1. Open the DataTables project on the Start Page or from the Project

menu.

2. Double-click DataTables.cs in the Solution Explorer.

Visual Studio opens the form designer.

3. In the form designer, double-click Add Table.

Visual Studio adds the Click event handler template to the code.

4. Add the following code to create a DataTable, and then set its name

to Employees:

5. string strMessage;

6.

7. //Create the table

8. System.Data.DataTable dtEmployees;

9. dtEmployees = new System.Data.DataTable("Employees");

10.

11. strMessage = "The table name is ";

12. strMessage += dtEmployees.TableName.ToString();

13.

MessageBox.Show(strMessage);

This code uses the New(tableName) version of the constructor to create a

DataTable named dtEmployees, and then displays the table name in a

message box.

14. Press F5 to run the application.

15. Click Add Table.

The application displays a message box containing the name of the table.

16. Close the application.

Creating DataSet Tables

Table 7-2 shows the four methods that can be used to add a table to the DataSet’s

Tables collection. These methods are called on the Tables collection, not the DataSet

itself, for example, myDataSet.Tables.Add(), not myDataSet.Add().

Table 7-2: DataSet Add Table Methods

Tables.Add(): Creates a new DataTable within the DataSet with the name TableN, where N is a sequential number.

Tables.Add(TableName): Creates a new DataTable with the name specified in the TableName string.

Tables.Add(DataTable): Adds the specified DataTable to the DataSet.

Tables.AddRange(TableArray): Adds the DataTables included in the TableArray array to the DataSet.

The first version of the Add method creates a DataTable with the name TableN, where N

is a sequential number. Note that this behavior is different from creating an independent

DataTable without passing a table name to the constructor. In the latter case, the

TableName property will be an empty string.

We used the second version of the Add method, Add(TableName), in the previous

chapter. This version creates the new table and sets its TableName property to the string

supplied as a parameter.

You can add an independent DataTable that you’ve created at run time, or add a

DataTable that exists in another DataSet, by using the Add(DataTable) version, while the

AddRange method allows you to add an array of DataTables (again, either DataTables

that you’ve created at run time or DataTables belonging to another DataSet).
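The following C# fragment is a minimal sketch of those last two variations; the table names are illustrative:

System.Data.DataSet dsDemo = new System.Data.DataSet("dsDemo");

// Add an independent DataTable that already exists...
System.Data.DataTable dtRegions = new System.Data.DataTable("Regions");
dsDemo.Tables.Add(dtRegions);

// ...or add several tables in one call with AddRange.
dsDemo.Tables.AddRange(new System.Data.DataTable[] {
    new System.Data.DataTable("Shippers"),
    new System.Data.DataTable("Suppliers") });

MessageBox.Show(dsDemo.Tables.Count.ToString() + " tables in the DataSet.");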

Create a DataTable Using the Tables.Add Method

Visual Basic .NET

1. In the code editor, select btnDataSet in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to add a DataTable with a default name to the

DataSet:

3. Dim strMessage As String

4.

5. 'Create the table

6. Me.dsEmployees.Tables.Add()

7.

8. strMessage = "The table name is "

9. strMessage &= Me.dsEmployees.Tables(0).TableName.ToString

MessageBox.Show(strMessage)

The code uses the version of the Add method that creates a new table with

the default name of TableN.

10. Press F5 to run the application.

11. Click DataSet Table.

The application displays a message box containing the name of the table.

12. Close the application.

Visual C# .NET

1. In the form designer, double-click the Dataset Table button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to add a DataTable with a default name to the

DataSet:

3. string strMessage;

4.

5. //Create the table

6. this.dsEmployees.Tables.Add();

7.

8. strMessage = "The table name is ";

9. strMessage +=

this.dsEmployees.Tables[0].TableName.ToString();

MessageBox.Show(strMessage);

The code uses the version of the Add method that creates a new table with

the default name of TableN.

10. Press F5 to run the application.

11. Click DataSet Table.

The application displays a message box containing the name of the table.


12. Close the application.

DataTable Properties

The primary properties of the DataTable are shown in Table 7-3. The most important of

these are the three collections that control the data—Columns, Rows, and Constraints.

We’ll look at each of these in detail later in this chapter.

Table 7-3: DataTable Properties

CaseSensitive: Determines how string comparisons will be performed.

ChildRelations: A collection of DataRelation objects that have this DataTable as the Parent table.

Columns: The collection of DataColumn objects within the DataTable.

Constraints: The collection of constraints maintained by the DataTable.

DataSet: The DataSet of which this DataTable is a member.

DisplayExpression: An expression used to represent the table name in the user interface (UI).

HasErrors: Indicates whether there are errors in any of the rows belonging to the DataTable.

ParentRelations: A collection of DataRelation objects that have this DataTable as the Child table.

PrimaryKey: An array of columns that function as the primary key of the table.

Rows: The collection of rows belonging to the table.

TableName: The name of the DataTable in the DataSet. This is the name by which the DataTable is referenced in code.

If the DataTable belongs to a DataSet, the CaseSensitive property will default to the

value of the corresponding DataSet.CaseSensitive property. Otherwise, the default value

will be False.


The ChildRelations and ParentRelations collections contain references to the

DataRelations that reference the table as a child or parent, respectively. For most

independent DataTables, these collections will be Null, but it is theoretically possible to

add a relation to the ChildRelations and ParentRelations collections if, for example, the

DataTable is related to itself.

The DisplayExpression property is similar to the Caption property of a column in that it

determines how the name of the table will be displayed to the user at run time, but unlike

the Caption property, DisplayExpression uses an expression to determine the value at

run time. One of the uses of the DisplayExpression property is to calculate the way the

table is displayed based on the contents of the table.

Using DataTable Properties

Most DataTable properties are set just like the properties of any other object—by a

simple assignment, or if the property is a collection, by calling the collection’s Add

method. Additionally, the structure of a DataTable based on a table in a data source can

be established using the FillSchema method of the DataAdapter. In Chapter 6, we used

FillSchema to load the entire structure of a DataTable. It can also be used to load

DataTable constraints such as the primary key.
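For example, here is a minimal C# sketch (not part of the exercise that follows) of building a table entirely in code by direct property assignment; the Contacts table and its columns are hypothetical:

// Build a small table in code and set its properties by simple assignment
System.Data.DataTable dtContacts = new System.Data.DataTable("Contacts");
dtContacts.CaseSensitive = false;

// Collection properties are populated through their Add methods
System.Data.DataColumn dcID = dtContacts.Columns.Add("ContactID", typeof(int));
dtContacts.Columns.Add("ContactName", typeof(string));

// PrimaryKey takes an array of columns that already belong to the table
dtContacts.PrimaryKey = new System.Data.DataColumn[] { dcID };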

Add a PrimaryKey Constraint Using the DataAdapter’s FillSchema Method

Visual Basic .NET

1. In the code editor, select btnSchema in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to create the table and its PrimaryKey

constraint by using FillSchema:

3. Dim strMessage As String
4.
5. Me.dsEmployees.Tables.Add("Employees")
6. Me.daEmployees.FillSchema(Me.dsEmployees.Tables("Employees"), _
7. SchemaType.Source)
8.
9. With Me.dsEmployees.Tables("Employees")
10. strMessage = "Primary Key: "
11. strMessage &= .PrimaryKey(0).ColumnName.ToString
12. strMessage &= vbCrLf & "Constraint Count: "
13. strMessage &= .Constraints(0).ConstraintName.ToString
14. MessageBox.Show(strMessage)
End With

15. Press F5 to run the application.

16. Click FillSchema.

The application displays a message box showing the column of the primary

key and the number of constraints.


17. Close the application.

Visual C# .NET

1. In the form designer, double-click the FillSchema button.

2. Visual Studio adds the Click event handler to the code window.

3. Add the following code to create the table and its PrimaryKey

constraint by using FillSchema:

4. string strMessage;
5. System.Data.DataTable dt;
6.
7. dt = this.dsEmployees.Tables.Add("Employees");
8.
9. this.daEmployees.FillSchema(dt,
10. SchemaType.Source);
11.
12. strMessage = "Primary Key: ";
13. strMessage += dt.PrimaryKey[0].ColumnName.ToString();
14. strMessage += "\nConstraint Count: ";
15. strMessage += dt.Constraints[0].ConstraintName.ToString();
16. MessageBox.Show(strMessage);

17. Press F5 to run the application.

18. Click FillSchema.

The application displays a message box showing the column of the primary

key and the number of constraints.


19. Close the application.

The Columns Collection

The DataTable’s Columns collection contains zero or more DataColumn objects that

define the structure of the table. If the DataTable is created by a DataAdapter’s Fill or

FillSchema method, the Columns collection will be generated automatically.

If you’re creating a DataColumn in code, you can use one of the New constructors

shown in Table 7-4.

Table 7-4: DataColumn Constructors

New(): Creates a new DataColumn with no ColumnName or Caption.
New(columnName): Creates a new DataColumn with the name specified in the columnName string.
New(columnName, dataType): Creates a new DataColumn with the name specified in the columnName string and the data type specified by the dataType parameter.
New(columnName, dataType, expression): Creates a new DataColumn with the name specified in the columnName string and the specified DataType and Expression.
New(columnName, dataType, expression, columnMapping): Creates a new DataColumn with the name specified in the columnName string and the specified DataType, Expression, and ColumnMapping.

The primary properties of the DataColumn are shown in Table 7-5. They correspond

closely to the properties of data columns in most relational databases.

Table 7-5: DataColumn Properties

AllowDBNull: Determines whether the column can be left empty.
AutoIncrement: Determines whether the system will automatically increment the value of the column.
AutoIncrementSeed: The starting value for an AutoIncrement column.
AutoIncrementStep: The increment by which an AutoIncrement column will be increased. For example, if the AutoIncrementSeed is 1 and the AutoIncrementStep is 3, the first value will be 1, the second 4, the third 7, and so on.
Caption: The name of the column displayed in some controls, such as the DataGrid. The default value is the ColumnName.
ColumnName: The name of the column in the DataTable's Columns collection. This is the name by which the column can be referenced in code.
DataType: The .NET Framework data type of the column.
DefaultValue: The value of the column provided by the system if no other value is provided.
Expression: The expression used to calculate the value of the column.
MaxLength: The maximum length of a text column.
ReadOnly: Determines whether the value of the column can be changed after the row containing it has been added to the table.
Unique: Determines whether each row in the table must have a unique value for this column.

Important There is an incompatibility between the .NET Framework

decimal data type and the Microsoft SQL Server decimal

data type. The .NET Framework decimal data type allows a

maximum of 28 significant digits, while the SQL Server

decimal data type allows 38 significant digits. If a

DataColumn is defined as System.Decimal and it is filled

from a SQL Server table, any rows containing more than 28

significant digits will cause an exception.

Create a Calculated Column

Visual Basic .NET

1. Select btnCalculate in the ControlName list, and then select Click in

the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code, which first adds an Employees table to the

dsEmployees DataSet and then uses the daEmployees

DataAdapter to create the pre-existing columns and fill them with

data:

3. Dim dcName As System.Data.DataColumn
4.
5. 'Create the table
6. Me.dsEmployees.Tables.Add("Employees")
7.
8. 'Fill the table from daEmployees
Me.daEmployees.Fill(Me.dsEmployees.Tables(0))

9. Add the following code to create the column and then add it to the

table:

10. 'Create the column
11. dcName = New System.Data.DataColumn("Name")
12. dcName.DataType = System.Type.GetType("System.String")
13. dcName.Expression = "FirstName + ' ' + LastName"
14.
15. 'Add the calculated column
16. Me.dsEmployees.Tables("Employees").Columns.Add(dcName)

17. Add the following code to bind the lbEmployees list box to the

calculated column so that we can see the results:

Important Make sure that you choose the lbEmployees list box, not the

lblEmployees label.

18. 'Bind to the listbox
19. Me.lbEmployees.DataSource = Me.dsEmployees.Tables("Employees")
20. Me.lbEmployees.DisplayMember = "Name"


21. Press F5 to run the application.

22. Click Calculate.

The application displays the full name of the employees in the list box.

23. Close the application.

Visual C# .NET

1. In the form designer, double-click the Calculate button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure, which first adds an Employees table to

the dsEmployees DataSet, and then uses the daEmployees

DataAdapter to create the pre-existing columns and fill them with

data:

3. System.Data.DataColumn dcName;
4.
5. //Create the table
6. this.dsEmployees.Tables.Add("Employees");
7.
8. //Fill the table from daEmployees
this.daEmployees.Fill(this.dsEmployees.Tables[0]);

9. Add the following code to create the column and then add it to the

table:

10. //Create the column
11. dcName = new System.Data.DataColumn("Name");
12. dcName.DataType = System.Type.GetType("System.String");
13. dcName.Expression = "FirstName + ' ' + LastName";
14.
15. //Add the calculated column
this.dsEmployees.Tables["Employees"].Columns.Add(dcName);

16. Add the following code to bind the lbEmployees list box to the

calculated column so that we can see the results:

Important Make sure that you choose the lbEmployees list box, not the

lblEmployees label.

17. //Bind to the listbox
18. this.lbEmployees.DataSource = this.dsEmployees.Tables["Employees"];
19. this.lbEmployees.DisplayMember = "Name";

20. Press F5 to run the application.

21. Click Calculate.

The application displays the full name of the employees in the list box.

22. Close the application.

Rows

As we’ve seen previously, the DataTable’s Rows collection contains the actual data that

is contained in the DataTable, in the form of zero or more DataRow objects. The

structure of the DataRow is shown in Table 7-6.

Table 7-6: DataRow Properties

HasErrors: Indicates whether there are any errors in the row.
Item: The value of a column in the DataRow.
ItemArray: The values of all columns in the DataRow, represented as an array.
RowError: The custom error description for a row.
RowState: The DataRowState of a row.
Table: The DataTable to which the DataRow belongs.

Because the Rows property is a collection, you can add new data to the DataTable by

using the Add method, which is available in two forms, as shown in Table 7-7.

Table 7-7: Rows.Add Methods

Add(DataRow): Adds the specified DataRow to the table.
Add(dataValues()): Creates a new DataRow in the table and sets its Item values as specified in the dataValues object array.
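The following C# sketch shows both forms of Add; the Contacts table and its two columns are hypothetical, but any table with an integer column followed by a string column would behave the same way:

System.Data.DataTable dt = new System.Data.DataTable("Contacts");
dt.Columns.Add("ContactID", typeof(int));
dt.Columns.Add("ContactName", typeof(string));

// Add(DataRow): create the row with NewRow, set its values, and then add it
System.Data.DataRow drNew = dt.NewRow();
drNew["ContactID"] = 1;
drNew["ContactName"] = "First Contact";
dt.Rows.Add(drNew);

// Add(dataValues()): supply the column values positionally as an object array
dt.Rows.Add(new object[] { 2, "Second Contact" });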

Add a New Row to the Rows Collection

Visual Basic .NET

1. Select btnAddRow in the ControlName list, and then select Click in

the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create a new DataRow, and add it to the

Customers table:

3. Dim drNew As System.Data.DataRow
4.
5. 'Create the new row
6. drNew = Me.dsMaster1.CustomerList.NewRow
7. drNew.Item("CustomerID") = "ANEWR"
8. drNew.Item("CompanyName") = "A New Row"
9.
10. 'Add row to table
11. Me.dsMaster1.CustomerList.Rows.Add(drNew)
12.
13. 'Refresh the display
Me.lbClients.Refresh()

14. Press F5 to run the application.

15. Click Add DataRow.

The application adds the new row to the table.

16. Scroll to the bottom of the Clients list box to confirm the addition.


17. Close the application.

Visual C# .NET

1. In the form designer, double-click the Add DataRow button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to create a new DataRow, and add it to

the Customers table:

3. System.Data.DataRow drNew;
4.
5. //Create the new row
6. drNew = this.dsMaster1.CustomerList.NewRow();
7. drNew["CustomerID"] = "ANEWR";
8. drNew["CompanyName"] = "A New Row";
9.
10. //Add row to table
11. this.dsMaster1.CustomerList.Rows.Add(drNew);
12.
13. //Refresh the display
14. this.lbClients.Refresh();

15. Press F5 to run the application.

16. Click Add DataRow.

The application adds the new row to the table.

17. Scroll to the bottom of the Clients list box to confirm the addition.


18. Close the application.

The RowState property of the DataRow reflects the actions that have been taken since

the DataTable was created or since the last time the AcceptChanges method was called.

The possible values for the RowState property are shown in Table 7-8.

Table 7-8: DataRowState Values

Added: The DataRow is new.
Deleted: The DataRow has been deleted from the table.
Detached: The DataRow has not yet been added to a table.
Modified: The contents of the DataRow have been changed.
Unchanged: The DataRow has not been modified.

Display the Row State

Visual Basic .NET

1. Select btnVersion in the ControlName list, and then select Click in the

MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to edit a row and display its properties:

3. Dim strMessage As String
4.
5. With Me.dsMaster1.CustomerList.Rows(0)
6. .Item("CustomerID") = "NEWVAL"
7. strMessage = "The RowState is " & .RowState.ToString
8. strMessage &= vbCrLf & "The original value was "
9. strMessage &= .Item("CustomerID", DataRowVersion.Original)
10. strMessage &= vbCrLf & "The new value is "
11. strMessage &= .Item("CustomerID", DataRowVersion.Current)
End With
MessageBox.Show(strMessage)

12. Press F5 to run the application.

13. Click Row Version.

The application displays a message box indicating the changes.

14. Close the application.

Visual C# .NET

1. In the form designer, double-click the Row Version button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to edit a row and display its properties:

3. string strMessage;
4. System.Data.DataRow dr;
5.
6. dr = this.dsMaster1.CustomerList.Rows[0];
7. dr["CustomerID"] = "NEWVAL";
8.
9. strMessage = "The RowState is " + dr.RowState.ToString();
10. strMessage += "\nThe original value was ";
11. strMessage += dr["CustomerID", DataRowVersion.Original];
12. strMessage += "\nThe new value is ";
13. strMessage += dr["CustomerID", DataRowVersion.Current];
14.
MessageBox.Show(strMessage);

15. Press F5 to run the application.

16. Click Row Version.

The application displays a message box indicating the changes.

17. Close the application.

Constraints

Along with the DataTable’s PrimaryKey property, the Constraints collection is used to

maintain the integrity of the data within a DataTable. The System.Data.Constraint object

has only the two properties, which are shown in Table 7-9.

Table 7-9: Constraint Properties

ConstraintName: The name of the constraint. This property is used to reference the Constraint in code.
Table: The DataTable to which the constraint belongs.

Obviously, an object that has only a name and a container is of little use when it comes
to enforcing integrity. In real applications, you will instantiate one of the two objects that
inherit from Constraint: ForeignKeyConstraint or UniqueConstraint.

The properties of the ForeignKeyConstraint object are shown in Table 7-10. This

constraint represents the rules that are enforced when a parent-child relationship exists

between tables (or between rows within a single table).

Table 7-10: ForeignKeyConstraint Properties

AcceptRejectRule: Determines the action that should take place when the AcceptChanges method is called.
Columns: The collection of child columns for the constraint.
DeleteRule: The action that will take place when the row is deleted.
RelatedColumns: The collection of parent columns for the constraint.
RelatedTable: The parent DataTable for the constraint.
Table: Overrides the Constraint.Table property to return the child DataTable for the constraint.
UpdateRule: The action that will take place when the row is updated.

The actions taken to enforce integrity are controlled by three properties of the
ForeignKeyConstraint: AcceptRejectRule, DeleteRule, and UpdateRule.

The possible values of the AcceptRejectRule property are Cascade or None. The

DeleteRule and UpdateRule properties can be set to any of the values shown in Table 7-

11. Both properties have a default value of Cascade.

Table 7-11: Action Rules

Cascade: Delete or update the related rows.
None: Take no action on the related rows.
SetDefault: Set values in the related rows to their default values.
SetNull: Set values in the related rows to Null.
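As a sketch of how these rules are applied, the following C# fragment mirrors the exercise below but also sets the three rule properties; the constraint name MasterChildFK is hypothetical:

System.Data.DataSet ds = this.dsUntyped;
System.Data.ForeignKeyConstraint fk = new System.Data.ForeignKeyConstraint("MasterChildFK",
    ds.Tables["dtMaster"].Columns["MasterID"],
    ds.Tables["dtChild"].Columns["MasterLink"]);

// Deleting a master row deletes its child rows
fk.DeleteRule = System.Data.Rule.Cascade;
// Changing a MasterID sets the related MasterLink values to Null
fk.UpdateRule = System.Data.Rule.SetNull;
// AcceptChanges and RejectChanges do not cascade to the child rows
fk.AcceptRejectRule = System.Data.AcceptRejectRule.None;

ds.Tables["dtChild"].Constraints.Add(fk);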

Add a ForeignKeyConstraint

Visual Basic .NET

1. In the code editor, select btnForeign in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create the ForeignKeyConstraint:

3. Dim strMessage As String
4. Dim fkNew As System.Data.ForeignKeyConstraint
5.
6. With Me.dsUntyped
7. fkNew = New System.Data.ForeignKeyConstraint("NewFK", _
8. .Tables("dtMaster").Columns("MasterID"), _
9. .Tables("dtChild").Columns("MasterLink"))
10. .Tables("dtChild").Constraints.Add(fkNew)
11.
12. strMessage = "The new constraint is called "
13. strMessage &= .Tables("dtChild").Constraints(0).ConstraintName.ToString
14. End With
15.
MessageBox.Show(strMessage)

16. Press F5 to run the application.

17. Click Foreign Key.

The application adds the ForeignKeyConstraint and displays its name in a

message box.


18. Close the application.

Visual C# .NET

1. In the form designer, double-click the Foreign Key button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to create the ForeignKeyConstraint:

3. string strMessage;
4. System.Data.ForeignKeyConstraint fkNew;
5. System.Data.DataSet ds = this.dsUntyped;
6.
7. fkNew = new System.Data.ForeignKeyConstraint("NewFK",
8. ds.Tables["dtMaster"].Columns["MasterID"],
9. ds.Tables["dtChild"].Columns["MasterLink"]);
10. ds.Tables["dtChild"].Constraints.Add(fkNew);
11.
12. strMessage = "The new constraint is called ";
13. strMessage +=
14. ds.Tables["dtChild"].Constraints[0].ConstraintName.ToString();
MessageBox.Show(strMessage);

15. Press F5 to run the application.

16. Click Foreign Key.

The application adds the ForeignKeyConstraint and displays its name in a

message box.


17. Close the application.

The UniqueConstraint ensures that the column or columns specified in its Columns

property are unique within the table. Its structure is much simpler than a

ForeignKeyConstraint, as shown in Table 7-12.

Table 7-12: UniqueConstraint Properties

Columns: The array of columns affected by the constraint.
IsPrimaryKey: Indicates whether the constraint is on the primary key.

Add a UniqueConstraint

Visual Basic .NET

1. In the code editor, select btnUnique in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create the UniqueConstraint:

3. Dim strMessage As String
4. Dim ucNew As System.Data.UniqueConstraint
5.
6. With Me.dsUntyped.Tables("dtMaster")
7. ucNew = New System.Data.UniqueConstraint("NewUnique", _
8. .Columns("MasterValue"))
9. .Constraints.Add(ucNew)
10.
11. strMessage = "The new constraint is called "
12. strMessage &= .Constraints("NewUnique").ConstraintName.ToString
13. End With
14.
MessageBox.Show(strMessage)

15. Press F5 to run the application.

16. Click Unique.

The application adds the UniqueConstraint and displays its name in a

message box.

17. Close the application.

Visual C# .NET

1. In the form designer, double-click the Unique button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to create the UniqueConstraint:

3. string strMessage;
4. System.Data.UniqueConstraint ucNew;
5. System.Data.DataTable dt = this.dsUntyped.Tables["dtMaster"];
6.
7. ucNew = new System.Data.UniqueConstraint("NewUnique",
8. dt.Columns["MasterValue"]);
9. dt.Constraints.Add(ucNew);
10.
11. strMessage = "The new constraint is called ";
12. strMessage += dt.Constraints["NewUnique"].ConstraintName.ToString();
MessageBox.Show(strMessage);

13. Press F5 to run the application.

14. Click Unique.

The application adds the UniqueConstraint and displays its name in a

message box.


15. Close the application.

DataTable Methods

The methods supported by the DataTable are shown in Table 7-13. We’ve already used

some of these, such as the Clear method, in previous exercises. We’ll examine most of

the others in Chapter 9.

Table 7-13: DataTable Methods

AcceptChanges: Commits the pending changes to all DataRows.
BeginLoadData: Turns off notifications, index maintenance, and constraint enforcement while a bulk data load is being performed. Used in conjunction with the LoadDataRow and EndLoadData methods.
Clear: Removes all DataRows from the DataTable.
Clone: Copies the structure of a DataTable.
Compute: Performs an aggregate operation on the DataTable.
Copy: Copies the structure and data of a DataTable.
EndLoadData: Reinstates notifications, index maintenance, and constraint enforcement after a bulk data load has been performed.
ImportRow: Copies a DataRow, including all row values and the row state, into a DataTable.
LoadDataRow: Used during bulk updating of a DataTable to update or add a new DataRow.
NewRow: Creates a new DataRow that matches the DataTable schema.
RejectChanges: Rolls back all pending changes on the DataTable.
Select: Gets an array of DataRow objects.


The Select Method

The Select method is used to filter and sort the rows of a DataTable at run time. The

Select method doesn’t affect the contents of the table. Instead, the method returns an

array of DataRows that match the criteria you specify.

Note The DataView, which we’ll examine in the following chapter, also

allows you to filter and sort data rows.
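The Select method also has an overload that accepts a sort expression along with the filter. A minimal C# sketch, using the dsMaster1 DataSet from these exercises:

// Filter and sort in a single call; the second argument is a sort expression
System.Data.DataRow[] drFound =
    this.dsMaster1.CustomerList.Select("CustomerID LIKE 'A*'", "CompanyName DESC");

// drFound holds the matching rows in descending CompanyName order
foreach (System.Data.DataRow dr in drFound)
{
    System.Console.WriteLine(dr["CompanyName"]);
}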

Use the Select Method to Display a Subset of Rows

Visual Basic .NET

1. In the code editor, select btnSelect in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to select only those Customers whose

CustomerID begins with A, and rebind the lbCustomers list box to

the array of selected rows:

3. Dim drFound() As System.Data.DataRow
4. Dim dr As System.Data.DataRow
5.
6. drFound = Me.dsMaster1.CustomerList.Select("CustomerID LIKE" & " 'A*'")
7.
8. Me.lbClients.DataSource = Nothing
9. Me.lbClients.Items.Clear()
10.
11. For Each dr In drFound
12. Me.lbClients.Items.Add(dr("CompanyName"))
13. Next
14.
Me.lbClients.Refresh()

15. Press F5 to run the application.

16. Click Select.

The application displays a subset of rows in the lbCustomers list box.

17. Close the application.


Visual C# .NET

1. In the form designer, double-click the Select button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to select only those Customers whose

CustomerID begins with A, and rebind the lbCustomers list box to

the array of selected rows:

3. System.Data.DataRow[] drFound;
4.
5. drFound = this.dsMaster1.CustomerList.Select("CustomerID LIKE" + " 'A*'");
6.
7. this.lbClients.DataSource = null;
8. this.lbClients.Items.Clear();
9.
10. foreach (System.Data.DataRow dr in drFound)
11. {
12. this.lbClients.Items.Add(dr["CompanyName"]);
13. }
14.
this.lbClients.Refresh();

15. Press F5 to run the application.

16. Click Select.

The application displays a subset of rows in the lbCustomers list box.

17. Close the application.

DataRow Methods

The methods supported by the DataRow object are shown in Table 7-14. The majority of

the methods are used when editing data and we’ll look at them in detail in Chapter 9.

Table 7-14: DataRow Methods

AcceptChanges: Commits all pending changes to a DataRow.
BeginEdit: Begins an edit operation.
CancelEdit: Cancels an edit operation.
Delete: Deletes the row.
EndEdit: Ends an edit operation.
GetChildRows: Gets all the child rows of a DataRow.
GetParentRow: Gets the parent row of a DataRow based on the specified DataRelation.
GetParentRows: Gets the parent rows of a DataRow based on the specified DataRelation.
HasVersion: Indicates whether a specified version of a DataRow exists.
IsNull: Indicates whether the specified column is Null.
RejectChanges: Rolls back all pending changes to the DataRow.
SetParentRow: Sets the parent row of a DataRow.


The GetChildRows and GetParentRows methods of the DataRow are used to navigate
the relationships you set up using the DataSet's Relations collection. Both methods are
overloaded, allowing you to pass either a DataRelation or a string representing the name
of the DataRelation, and, optionally, a DataRowVersion value.

Use the GetChildRows Method to Retrieve Data

Visual Basic .NET

1. In the code editor, select lbClients in the ControlName list, and then

select SelectedIndexChanged in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create a relation in dsMaster1, retrieve the

child rows of the current list box selection, and then display them in

the dgOrders data grid:

3. Dim drCurrent As System.Data.DataRow
4. Dim dsCustOrders As New System.Data.DataSet()
5. Dim drCustOrders() As System.Data.DataRow
6.
7. 'Create the relation if necessary
8. If Me.dsMaster1.Relations.Count = 0 Then
9. Me.dsMaster1.Relations.Add("CustomerOrders", _
10. Me.dsMaster1.CustomerList.CustomerIDColumn, _
11. Me.dsMaster1.OrderTotals.CustomerIDColumn)
12. End If
13.
14. drCurrent = Me.lbClients.SelectedItem.Row
15. dsCustOrders.Merge(drCurrent.GetChildRows("CustomerOrders"))
16.
17. Me.dgOrders.SetDataBinding(dsCustOrders, "OrderTotals")
Me.dgOrders.Refresh()

18. Press F5 to run the application.

19. Select different items in the Clients list.

The application displays the Client’s rows in the Orders data grid.

20. Close the application.


Visual C# .NET

1. In the form designer, double-click the Clients list box.

Visual Studio adds the SelectedIndexChanged event handler to the code window.

2. Add the following code to create a relation in dsMaster1, retrieve the

child rows of the current list box selection, and then display them in

the dgOrders data grid:

3. System.Data.DataRowView drCurrent;
4. System.Data.DataSet dsCustOrders;
5.
6. dsCustOrders = new System.Data.DataSet();
7. //Create the relation if necessary
8. if (this.dsMaster1.Relations.Count == 0)
9. {
10. this.dsMaster1.Relations.Add("CustomerOrders",
11. this.dsMaster1.CustomerList.CustomerIDColumn,
12. this.dsMaster1.OrderTotals.CustomerIDColumn);
13. }
14.
15. drCurrent = (System.Data.DataRowView)this.lbClients.SelectedItem;
16.
17. dsCustOrders.Merge(drCurrent.Row.GetChildRows("CustomerOrders"));
18.
19. this.dgOrders.SetDataBinding(dsCustOrders, "OrderTotals");
this.dgOrders.Refresh();

20. Press F5 to run the application.

21. Select different items in the Clients list.

The application displays the Client’s rows in the Orders data grid.

22. Close the application.


DataTable Events

The events supported by the DataTable are shown in Table 7-15. All of the events are

used as part of data validation, and we’ll examine them in more detail in Chapter 10.

Table 7-15: DataTable Events

Event Description

ColumnChanged Raised after a DataRow item has been changed

ColumnChanging Raised before a DataRow item has been changed

RowChanged Called after a DataRow has been changed

RowChanging Called before a DataRow has been changed

RowDeleted Called after a DataRow has been deleted

RowDeleting Called before a DataRow is deleted

Chapter 7 Quick Reference

Create an independent DataTable at run time: Use the New constructor:
myTable = New System.Data.DataTable()

Add a DataTable to an existing DataSet: Use the Add method of the DataSet's Tables collection:
myDataSet.Tables.Add(TableName)

Add a PrimaryKey constraint based on a table in the data source: Use the DataAdapter's FillSchema method:
myDA.FillSchema(myTable, SchemaType.Source)

Create a calculated column: Set the Expression property of the column:
myColumn.Expression = "FirstName + ' ' + LastName"

Add a new DataRow: Create the DataRow by using the NewRow method, and then add it to the DataTable:
myRow = myTable.NewRow
myTable.Rows.Add(myRow)

Display a subset of rows: Use the DataTable's Select method:
DataRowArray = myTable.Select("Criteria")

Retrieve data related to the current DataRow: Use the GetChildRows method:
myRow.GetChildRows("RelationName")

Chapter 8: The DataView

Overview

In this chapter, you’ll learn how to:

§ Add a DataView to a form

§ Create a DataView at run time

§ Create calculated columns in a DataView

§ Sort DataView rows

§ Filter DataView rows


§ Search a DataView based on a primary key value

In the previous chapter, we looked at the Select method of the DataTable, which

provides a mechanism for filtering and sorting DataRows. The DataView provides

another mechanism for performing the same actions. Unlike the Select method, a

DataView is a separate object that sits on top of a DataTable.

Understanding DataViews

A DataView provides a filtered and sorted view of a single DataTable. Although the

DataView provides the same functionality as the DataTable’s Select method, it has a

number of advantages. Because they are distinct objects, DataViews can be created and

configured at both design time and run time, making them easier to implement in many

situations.

Furthermore, unlike the array of DataRows returned from a Select method, DataViews

can be used as the data source for bound controls. (Remember that in the previous

chapter we had to load the DataRow array returned by the Select method into a DataSet

before we could display its contents in the data grid.)

You can create multiple DataViews for any given DataTable. In fact, every DataTable
contains at least one DataView in its DefaultView property. The properties of the
DefaultView can be set at run time, but not at design time.

The rows of a DataView, although very much like DataRows, are actually DataRowView

objects that reference DataRows. The DataRowView properties are shown in Table 8-1.

Only the Item property is also exposed by the DataRow; the other properties are unique.

Table 8-1: DataRowView Properties

Property Description

DataView The DataView to which this DataRowView belongs

IsEdit Indicates whether the DataRowView is currently being

edited

IsNew Indicates whether the DataRowView is new

Item The value of a column in the DataRowView

Row The DataRow that is being viewed

RowVersion The current version of the DataRowView

DataViewManagers

Functionally, a DataViewManager is similar to a DataSet. Just as a DataSet acts as a

container for DataTables, the DataViewManager acts as a container for DataViews,

one for each DataTable in a DataSet.

The DataViews within the DataViewManager are accessed through the

DataViewSettings collection of the DataViewManager. It’s convenient to think of a

DataViewSetting existing for each DataTable in a DataSet. In reality, the

DataViewSetting isn’t physically created until (and unless) it is referenced in code.

DataViewManagers are most often used when the DataSet contains related tables

because they allow you to persist sorting and filtering criteria across calls to

GetChildRows. If you were to use individual DataViews on the child table, the sorting

and filtering criteria would need to be reset after each call. With a DataViewManager,

after the criteria have been established, the rows returned by GetChildRows will be

sorted and filtered automatically.


In Chapter 7, we saw that the DataSet has a DefaultViewManager property. In reality,

you’re actually binding to the default DataViewManager when you bind a control to a

DataSet. Under most circumstances, you can ignore this technicality, but it can be

useful for setting default sorting and filtering criteria.

Note, however, that the DataSet’s DefaultViewManager property is read-only—you can

set its properties, but you cannot create a new DataViewManager and assign it to the

DataSet as the default DataViewManager.
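As a sketch, the following C# fragment sets default criteria through the DefaultViewManager; it assumes the dsMaster1 DataSet and the OrderTotals table used in this chapter's exercises, and the OrderTotal filter value is arbitrary:

System.Data.DataViewManager dvm = this.dsMaster1.DefaultViewManager;

// One DataViewSetting exists (logically) for each DataTable in the DataSet
dvm.DataViewSettings["OrderTotals"].Sort = "OrderID DESC";
dvm.DataViewSettings["OrderTotals"].RowFilter = "OrderTotal > 1000";

// Controls bound to the DataSet (that is, to its default DataViewManager) and rows
// returned through its relations now reflect these sort and filter settings.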

Creating DataViews

Because DataViews are independent objects, you can create and configure them at

design time using Microsoft Visual Studio. You can, of course, also create and configure

DataViews at run time in code.

Using Visual Studio

Visual Studio supports the design-time creation of DataViews through the DataView

control on the Data tab of the Toolbox. Like any other control with design-time support,

you simply drag the control onto a form and set its properties in the Properties window.

Create and Bind a DataView Using Visual Studio

1. Open the DataViews project from the Start menu or the Project menu.

2. Double-click DataViews.vb (or DataViews.cs if you’re using C#) in the

Solution Explorer.

Visual Studio .NET opens the form designer.

3. Drag a DataView control from the Data tab of the Toolbox to the form.

Visual Studio adds the control to the component designer.

4. In the Properties window, change the DataView’s name to dvOrders.

5. Change the Table property to dsMaster1.OrderTotals, and then

change the Sort property to OrderID.


6. Select the dgOrders data grid, and then change the DataSource

property to dvOrders.

7. Press F5 to run the application.

Visual Studio displays the information in the Orders data grid arranged

according to the values in the OrderID column.


8. Close the application.

Creating DataViews at Run Time

Like most of the objects in the .NET Framework Class Library, the DataView supports a

New constructor, which allows the DataView to be created in code at run time. The

DataView supports the two versions of the New constructor, which are shown in Table 8-

2.

Table 8-2: DataView Constructors

New(): Creates a new DataView.
New(DataTable): Creates a new DataView and sets its Table property to the specified DataTable.

Create a DataView at Run Time

Visual Basic .NET

1. Double-click Create.

Visual Studio opens the code editor and adds the Click event handler

template.

2. Add the following code to the method:

3. Dim drCurrent As System.Data.DataRow
4. Dim dvNew As New System.Data.DataView()
5.
6. 'retrieve the selected row in lbClients
7. drCurrent = Me.lbClients.SelectedItem.Row
8.
9. 'configure the dataview
10. dvNew.Table = Me.dsMaster1.OrderTotals
11. dvNew.RowFilter = "CustomerID = '" & drCurrent(0) & "'"
12.
13. 'rebind the datagrid
Me.dgOrders.DataSource = dvNew

The code first declares a DataRow variable that will contain the item selected

in the lbClients list box, and then creates a new DataView using the default

constructor. Next drCurrent is assigned to the current selection in the list box.

The Table property of the dvNew DataView is set to the OrderTotals table,

and the RowFilter property is set to show only the orders for the selected

client. Finally the dgOrders data grid is bound to the new DataView.

14. Press F5 to run the application, click in the Clients list box, and then

click Create.

The data grid displays the orders for only the selected client.

15. Close the application.

Visual C# .NET

1. Double-click Create.

Visual Studio opens the code editor and adds the Click event handler

template and the Click event delegate.

2. Add the following code to the method:

3. System.Data.DataRowView drCurrent;
4. System.Data.DataView dvNew;
5. dvNew = new System.Data.DataView();
6.
7. //retrieve the selected row in lbClients
8. drCurrent = (System.Data.DataRowView)this.lbClients.SelectedItem;
9.
10. //configure the dataview
11. dvNew.Table = this.dsMaster1.OrderTotals;
12. dvNew.RowFilter = "CustomerID = '" + drCurrent[0] + "'";
13.
14. //rebind the datagrid
this.dgOrders.DataSource = dvNew;


The code first declares a DataRowView variable that will contain the item

selected in the lbClients list, and then creates a new DataView using the

default constructor. Next drCurrent is assigned to the current selection in the

list.

The Table property of the dvNew DataView is set to the OrderTotals table,

and the RowFilter property is set to show only the orders for the selected

client. Finally the dgOrders data grid is bound to the new DataView.

15. Press F5 to run the application, click in the Clients list, and then

click Create.

The data grid displays the orders for only the selected client.

16. Close the application.

DataView Properties

The properties exposed by the DataView object are shown in Table 8-3. The

AllowDelete, AllowEdit, and AllowNew properties determine whether the data reflected

by the DataView can be changed through the DataView. (Data can always be changed

by referencing the row in the underlying DataTable.)

Table 8-3: DataView Properties

AllowDelete: Determines whether rows in the DataView can be deleted.
AllowEdit: Determines whether rows in the DataView can be changed.
AllowNew: Determines whether rows can be added to the DataView.
ApplyDefaultSort: Determines whether the default sort order, determined by the underlying data source, will be used.
Count: The number of DataRowViews in the DataView.
DataViewManager: The DataViewManager to which this DataView belongs.
Item(Index): The DataRowView at the specified Index in the DataView.
RowFilter: The expression used to filter the rows contained in the DataView.
RowStateFilter: The DataViewRowState used to filter the rows contained in the DataView.
Sort: The expression used to sort the rows contained in the DataView.
Table: The DataTable that is the source of rows for the DataView.

The Count property does exactly what one might expect—it returns the number of

DataRows reflected in the DataView, while the DataViewManager and Table properties

serve to connect the DataView to other objects within an application.

Finally the RowFilter, RowStateFilter, and Sort properties control the DataRows that are

reflected in the DataView and how those rows are ordered. We’ll examine each of these

properties later in this chapter.

DataColumn Expressions

Expressions, technically DataColumn Expressions, are used by the RowFilter and Sort

properties of the DataView. We’ve used DataColumn Expressions in previous chapters

when we created a calculated column in a DataTable and when we set the sort and filter

expressions for the DataTable Select method. Now it’s time to examine them more

closely.


A DataColumn Expression is a string, and you can use all the normal string handling
functions to build one. For example, you can use the & concatenation operator to join
two strings into a single Expression:
myExpression = "CustomerID = '" & strCustID & "'"

Note that the value of strCustID will be surrounded by single quotation marks in the

resulting text. In building DataColumn Expressions, columns may be referred to directly

by using the ColumnName property, but any actual text values must be quoted.

In addition, certain special characters must be “escaped,” that is, wrapped in square

brackets. For example, if you had a column named Miles/Gallon, you would have to

surround the column name with brackets:

MyExpression = "[Miles/Gallon] > 10"

Tip You can find the complete list of special characters in the online

Help for the DataColumn.Expression property.

Numeric values in DataColumn Expressions require no special handling, as shown in the

previous example, but date values must be surrounded by hash marks:

MyExpression = "OrderDate > #01/01/2001#"

Important Dates in code must conform to US usage, that is,

month/day/year.

As we've seen, DataRow columns are referred to by the ColumnName property. You

can reference a column in a Child DataRow by adding “Child” before the ColumnName in

the Child row:

MyExpression = "Child.OrderTotal > 3000"

The syntax for referencing a Parent row is identical:

MyExpression = "Parent.CustomerID = 'AFLKI'"

Parent and Child references are frequently used along with one of the aggregate

functions shown in Table 8-4. The aggregate functions can also be used directly, without

reference to Child or Parent rows.

Table 8-4: Aggregate Functions

Sum: Sum
Avg: Average
Min: Minimum
Max: Maximum
Count: Count
StDev: Statistical standard deviation
Var: Statistical variance
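For example, the following C# sketch adds a calculated column to the parent table that aggregates a child column; it assumes the CustomerOrders relation between CustomerList and OrderTotals created in the previous chapter's GetChildRows exercise, and the decimal data type is an assumption:

// Sum the related child rows' OrderTotal values into a parent column
System.Data.DataColumn dcTotal = new System.Data.DataColumn("TotalPurchases",
    typeof(decimal), "Sum(Child.OrderTotal)");
this.dsMaster1.CustomerList.Columns.Add(dcTotal);

// Each CustomerList row now exposes the sum of its related OrderTotal values
System.Console.WriteLine(this.dsMaster1.CustomerList.Rows[0]["TotalPurchases"]);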

When setting the expressions for DataViews, you will frequently be comparing values.

The .NET Framework handles the usual range of operators, as shown in Table 8-5.

Table 8-5: Comparison Operators

AND: Logical AND
OR: Logical OR
NOT: Logical NOT
<: Less than
>: Greater than
<=: Less than or equal to
>=: Greater than or equal to
<>: Not equal
IN: Determines whether the value specified is contained in a set
LIKE: Inexact match using a wildcard character

The IN operator requires that the set of values to be searched be separated by commas

and surrounded by parentheses:

MyExpression = "myColumn IN ('A','B','C')"

The LIKE operator treats the characters * or % as interchangeable wildcards—both

replace zero or more characters. The wildcard characters can be used at the beginning

or end of a string, or at both ends, but cannot be contained within a string.

DataColumn Expressions also support the arithmetic operators shown in Table 8-6.

Table 8-6: Arithmetic Operators

+: Addition
-: Subtraction
*: Multiplication
/: Division
%: Modulus (remainder after division)

The arithmetic + operator is also used for string concatenation within a DataColumn

Expression rather than the more usual & operator.

Finally DataColumn Expressions support a number of special functions, as shown in

Table 8-7.

Table 8-7: Special Functions

Convert(Expression, Type): Converts the value returned by Expression to the specified .NET Framework Type.
Len(String): The number of characters in the String.
ISNULL(Expression, ReplacementValue): Determines whether Expression evaluates to Null and, if so, returns ReplacementValue.
IIF(Expression, ValueIfTrue, ValueIfFalse): Returns ValueIfTrue if Expression evaluates to True; otherwise returns ValueIfFalse.
SUBSTRING(Expression, Start, Length): Returns Length characters of the string returned by Expression, beginning at the position specified by Start.
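The following C# sketch combines a few of these functions in calculated columns; the Contacts table and its Phone and Region columns are hypothetical:

System.Data.DataTable dt = new System.Data.DataTable("Contacts");
dt.Columns.Add("Phone", typeof(string));
dt.Columns.Add("Region", typeof(string));

// ISNULL supplies a replacement for missing values; IIF chooses between two results
dt.Columns.Add("HasPhone", typeof(string), "IIF(ISNULL(Phone, '') = '', 'no', 'yes')");

// SUBSTRING and LEN operate on string expressions
dt.Columns.Add("RegionCode", typeof(string), "SUBSTRING(Region, 1, 2)");

dt.Rows.Add(new object[] { "555-0100", "Washington" });
dt.Rows.Add(new object[] { null, "Oregon" });
// The first row's HasPhone value is 'yes'; the second row's is 'no'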

Sort Expressions

Although the DataColumn Expressions used in the Sort property can be arbitrarily

complex, in most cases they will take the form of one or more ColumnNames separated

by commas:

myDataView.Sort = "CustomerID, OrderID"

Optionally, the ColumnNames may be followed by ASC or DESC to cause the values to

be sorted in ascending or descending order, respectively. The default sort is ascending,

so the ASC keyword isn’t strictly necessary, but it can sometimes be useful to include it

for clarity.

Change the Sorting Method

Visual Basic .NET

1. In the code editor, select btnSort in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to the method:

3. 'Change the sort order
4. Me.dvOrders.Sort = "EmployeeID, CustomerID, OrderID DESC"
5.
6. 'Refresh the datagrid
Me.dgOrders.Refresh()

The code sets the sort order of the dvOrders DataView to sort first by

EmployeeID, then by CustomerID, and finally by OrderID in descending order.

7. Press F5 to run the application.

8. Click Sort.

The application displays the sorted contents of the data grid.

9. Close the application.

Visual C# .NET

1. In the form designer, double-click the Create button.

Visual Studio adds the Click event handler to the code window.

2. In the code editor, add a Click event handler for the btnSort button

after the btnCreate_Click event handler that we created in the

previous exercise:

3. private void btnSort_Click (object sender, System.EventArgs e)

4. {

5.

}

6. Add the following code to the method:

7. //Change the sort order
8. this.dvOrders.Sort = "EmployeeID, CustomerID, OrderID DESC";
9.
10. //Refresh the datagrid
this.dgOrders.Refresh();

The code sets the sort order of the dvOrders DataView to sort first by

EmployeeID, then by CustomerID, and finally by OrderID in descending order.

11. Press F5 to run the application.

12. Click Sort.

The application displays the sorted contents of the data grid.


13. Close the application.

RowStateFilter

In the previous chapter, we saw that each DataRow maintains its status in its RowState

property. The DataView’s RowStateFilter property can be used to limit the

DataRowViews within the DataView to those with a certain RowState or to return values

of a given state. The possible values for the RowStateFilter property are shown in Table

8-8.

Table 8-8: DataViewRowState Values

Added: Only those rows that have been added
CurrentRows: All current row values
Deleted: Only those rows that have been deleted
ModifiedCurrent: Current row values for rows that have been modified
ModifiedOriginal: Original values of rows that have been modified
None: No rows
OriginalRows: Original values of all rows
Unchanged: Only those rows that haven't been modified

Display Only New Rows

Visual Basic .NET

1. In the code editor, select btnRowState in the ControlName list, and

then select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to the method:

3. Dim drNew As System.Data.DataRowView
4.
5. 'Add a new order
6. drNew = Me.dvOrders.AddNew()
7. drNew("CustomerID") = "ALFKI"
8. drNew("EmployeeID") = 1
9. drNew("OrderID") = 0
10.
11. 'Set the RowStateFilter
12. Me.dvOrders.RowStateFilter = DataViewRowState.Added
13.
14. 'Refresh the datagrid
Me.dgOrders.Refresh()

The code first creates a new DataRowView (we’ll examine the AddNew

method in the following section), and then sets the RowStateFilter to display

only new (or added) rows. Finally the dgOrders data grid is refreshed to

display the changes.

15. Press F5 to run the application, and then click Row State.

The data grid shows only the new order.

16. Close the application.


Visual C# .NET

1. In the Form Designer, double-click the Row State button.

2. Visual Studio adds the Click event handler to the code window.

3. In the code editor, add a Click event handler for the btnRowState

button after the btnSort event handler that we created in the

previous exercise:

4. private void btnRowState_Click (object sender,

System.EventArgs e)

5. {

6.

}

7. Add the following code to the method:

8. System.Data.DataRowView drNew;
9.
10. //Add a new row
11. drNew = this.dvOrders.AddNew();
12. drNew["CustomerID"] = "AFLKI";
13. drNew["EmployeeID"] = 1;
14. drNew["OrderID"] = 0;
15.
16. //Set the RowStateFilter
17. this.dvOrders.RowStateFilter = DataViewRowState.Added;
18.
19. //Refresh the datagrid
this.dgOrders.Refresh();

The code first creates a new DataRowView (we’ll examine the AddNew

method in the following section), and then sets the RowStateFilter to display

only new (or added) rows. Finally the dgOrders data grid is refreshed to

display the changes.

20. Press F5 to run the application, and then click Row State.

The data grid shows only the new order.

21. Close the application.


DataView Methods

The primary methods supported by the DataView are shown in Table 8-9. The AddNew

method adds a new DataRowView to the DataView, while the Delete method deletes the

row at the specified index.

Table 8-9: DataView Methods

AddNew: Adds a new DataRowView to the DataView.
Delete: Removes a DataRowView from a DataView.
Find: Finds one or more DataRowViews containing the primary key value(s) that are specified.

The Find Method

The DataView's Find method locates a row based on the value of the DataView's sort
key, which in these exercises is the primary key column specified in the Sort property. If
you want to find a row based on some other column value, you must use the RowFilter
property of the DataView.

There are two versions of the Find method, allowing you to pass either a single value or
an array of values (used when the sort key consists of more than one column). The Find
method returns the index of the row that was found, or -1 if the value is not found in
the DataView.
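A minimal C# sketch of checking the return value, assuming the dvOrders DataView sorted on OrderID as in this chapter's exercises (the 99999 key is arbitrary):

// Find searches on the DataView's sort key and returns -1 when no row matches
int idxFound = this.dvOrders.Find(99999);
if (idxFound == -1)
{
    MessageBox.Show("No matching order is present in the view.");
}
else
{
    MessageBox.Show("Found at index " + idxFound.ToString());
}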

Find a Row Based on Its Primary Key Value

Visual Basic .NET

1. In the code editor, select btnFind in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the method:

3. Dim idxFound As Integer
4. Dim strMessage As String
5.
6. idxFound = Me.dvOrders.Find(10255)
7.
8. strMessage = "The OrderID is " & _
9. Me.dvOrders(idxFound).Item("OrderID")
10. strMessage &= vbCrLf & "The CustomerID is " & _
11. Me.dvOrders(idxFound).Item("CustomerID")
12. strMessage &= vbCrLf & "The EmployeeID is " & _
13. Me.dvOrders(idxFound).Item("EmployeeID")
MessageBox.Show(strMessage)

The code uses the Find method to find Order 10255 and then displays the

results in a message box.

14. Press F5 to run the application, and then click Find.

The application displays the results.

15. Close the application.

Visual C# .NET

1. In the form designer, double-click the Find button.

Visual Studio adds the Click event handler to the code window.

2. In the code editor, add a Click event handler for the btnFind button

after the btnRowState event handler that we created in the previous

exercise:

3. private void btnFind_Click (object sender, System.EventArgs e)

4. {

5.

}

6. Add the following code to the method:

7. int idxFound;
8. string strMessage;
9.
10. idxFound = this.dvOrders.Find(10255);
11.
12. strMessage = "The OrderID is " +
13. this.dvOrders[idxFound]["OrderID"];
14. strMessage += "\nThe CustomerID is " +
15. this.dvOrders[idxFound]["CustomerID"];
16. strMessage += "\nThe EmployeeID is " +
17. this.dvOrders[idxFound]["EmployeeID"];
MessageBox.Show(strMessage);

The code uses the Find method to find Order 10255 and then displays the

results in a message box.

18. Press F5 to run the application, and then click Find.

The application displays the results.


19. Close the application.

Chapter 8 Quick Reference

Add a DataView to a form: Drag a DataView control from the Data tab of the Toolbox onto the form.

Create a DataView at run time: Use one of the New constructors. For example:
Dim myDataView As New System.Data.DataView()

Sort DataView rows: Set the Sort property of the DataView. For example:
myDataView.Sort = "CustomerID"

Filter DataView rows: Set the RowFilter or RowStateFilter property. For example:
myDataView.RowStateFilter = DataViewRowState.Added

Find a row in a DataView: Pass the primary key value to the DataView's Find method. For example:
idxFound = myDataView.Find(1011)

Part IV: Using the ADO.NET Objects

Chapter 9: Editing and Updating Data

Chapter 10: ADO.NET Data-Binding in Windows Forms

Chapter 11: Using ADO.NET in Windows Forms

Chapter 12: Data-Binding in Web Forms

Chapter 13: Using ADO.NET in Web Forms

Chapter 9: Editing and Updating Data

Overview

In this chapter, you’ll learn how to:

§ Use the RowState property of a DataRow

§ Retrieve a specific version of a DataRow


§ Add a row to a DataTable

§ Delete a row from a DataTable

§ Edit a DataRow

§ Temporarily suspend enforcement of constraints during updates

§ Accept changes to data

§ Reject changes to data

In the previous few chapters, we’ve examined each of the Microsoft ADO.NET objects in

turn. Starting with this chapter, we’ll look at how these objects work together to perform

specific tasks. Specifically, in this chapter, we’ll examine the process of editing and

updating data.

Understanding Editing and Updating Data

Given the disconnected architecture of ADO.NET, there are four distinct phases to the

process of editing and updating data from a data source: data retrieval, editing, updating

the data source, and finally, updating the DataSet.

First, the data is retrieved from the data source, stored in memory, and possibly

displayed to the user. This is typically done using the Fill method of a DataAdapter to fill

the tables of a DataSet, but as we’ve seen, data may also be retrieved using a

Command and a DataReader.

Next, the data is modified as required. Values can be changed, new rows can be added,

and existing rows can be deleted. Data modification can be done under programmatic

control or by the data binding mechanisms of Windows Forms and Web Forms.

We’ll be exploring how to make changes to data under programmatic control in this

chapter. In Windows Forms, the data binding architecture handles transmitting changes

from data-bound controls to the dataset. No other action is required. In Web Forms, any

data changes must of course be submitted to the server.

Roadmap We’ll examine the data binding mechanisms of Windows

Forms and Web Forms in Chapters 10 and 11.

If the changes made to the in-memory copy of the data are to be persisted, they must be

propagated to the data source. If a DataSet is used for managing the in-memory data,

the data source propagation can be done by using the Update method of the

DataAdapter. Alternatively, Command objects may be used directly to submit the

changes. (Of course, as we saw in Chapter 3, the DataAdapter uses Command objects

to submit the changes, as well.)

Finally the DataSet can be updated to reflect the new state of the data source. This is

done by using the AcceptChanges method of the DataSet or DataTable. Both the Fill

method and the Update method of the DataAdapter call AcceptChanges automatically. If

you execute Data Commands directly, you must call AcceptChanges explicitly to update

the status of the DataSet.
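A compressed C# sketch of the four phases; the daOrders DataAdapter, the dsOrders DataSet, the Orders table, and the ShipCity column are assumptions for illustration:

// Phase 1: retrieve the data into memory (Fill calls AcceptChanges for you)
System.Data.DataSet dsOrders = new System.Data.DataSet();
this.daOrders.Fill(dsOrders, "Orders");

// Phase 2: edit the in-memory copy
System.Data.DataRow dr = dsOrders.Tables["Orders"].Rows[0];
dr["ShipCity"] = "Seattle";

// Phase 3: propagate the changes to the data source
// (Update uses the adapter's InsertCommand, UpdateCommand, and DeleteCommand)
this.daOrders.Update(dsOrders, "Orders");

// Phase 4: Update has already called AcceptChanges; call it explicitly only if
// you submitted the changes yourself with Command objects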

Concurrency

With the disconnected methodology used by ADO.NET, there is always a chance that a

row in the data source may have been changed since the time it was loaded into the

DataSet. This is a concurrency violation.

The Update method can throw a DBConcurrencyException, which one might expect to
be thrown only when a concurrency violation occurs. In fact, the DBConcurrencyException
is thrown whenever the number of rows affected by a Data Command is zero. This is
typically due to a concurrency violation, but it's important to understand that this is not
necessarily the case.
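A sketch of trapping that exception, continuing the assumptions (daOrders, dsOrders, ShipCity) from the previous sketch:

try
{
    this.daOrders.Update(dsOrders, "Orders");
}
catch (System.Data.DBConcurrencyException ex)
{
    // ex.Row is the DataRow whose command affected zero records; this is usually,
    // but not necessarily, a concurrency violation
    MessageBox.Show("No rows were updated for: " + ex.Row["ShipCity"].ToString());
}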


DataRow States and Versions

As we saw in Chapter 7, the DataRow maintains a RowState property that indicates

whether the row has been added, deleted, or modified. In addition, the DataTable

maintains multiple copies of each row, each reflecting a different version of the DataRow.

We’ll explore both the RowState property and row versions in this section.

RowState Properties

The RowState property of the DataRow reflects the actions that have been taken since

the DataTable was created or since the last time the AcceptChanges method was called.

The possible values for RowState, as defined by the DataRowState enumeration, are

shown in Table 9-1.

Table 9-1: DataRowStates

Added: The DataRow is new.
Deleted: The DataRow has been deleted from the table.
Detached: The DataRow has not yet been added to a table.
Modified: The contents of the DataRow have been changed.
Unchanged: The DataRow has not been modified.

The baseline values of the rows in a DataSet are established when the AcceptChanges

method is called, either by the Fill or Update methods of the DataAdapter or explicitly by

program code. At that time, all of the DataRows have their RowState set to Unchanged.

Not surprisingly, if the value of any column of a DataRow is changed after

AcceptChanges is called, its RowState is set to Modified. If new DataRows are added to

a DataTable by using the Add method of its Rows collection, their RowState

will be Added. The new rows will maintain the status of Added even if their contents are

changed before the next call to AcceptChanges.

If a DataRow is deleted by using the Delete method, it isn’t actually removed from the

DataSet until the AcceptChanges method is called. Instead, its RowState is set to

Deleted and, as we’ll see, its Current values are set to Null.

DataRows don’t necessarily belong to a DataTable. These independent rows will have a

RowState of Detached until they are added to the Rows collection of a table.
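As a quick illustration, the following standalone console sketch (C#) walks a single row through each of these states; it is not part of the chapter's Editing project.

using System;
using System.Data;

class RowStateDemo
{
    static void Main()
    {
        DataTable table = new DataTable("EmployeeList");
        table.Columns.Add("FirstName", typeof(string));

        DataRow row = table.NewRow();          // Detached: not yet in a table
        Console.WriteLine(row.RowState);

        table.Rows.Add(row);                   // Added
        Console.WriteLine(row.RowState);

        table.AcceptChanges();                 // Unchanged: new baseline
        Console.WriteLine(row.RowState);

        row["FirstName"] = "New First";        // Modified
        Console.WriteLine(row.RowState);

        row.Delete();                          // Deleted (still in the Rows collection)
        Console.WriteLine(row.RowState);
    }
}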


Row Versions

A DataTable may maintain multiple versions of any given DataRow, depending on the

actions that have been performed on it since the last time AcceptChanges was called.

The possible DataRowVersions are shown in Table 9-2.

Table 9-2: DataRowVersions

Version     Meaning
Current     The current values of each column
Default     The default values used for new rows
Original    The values set when the row was created, either by a Fill operation or by
            adding the row manually
Proposed    The values assigned to the columns in a row after a BeginEdit method has
            been called

There will always be a Current version of every row in the DataSet. The Current version

of the DataRow reflects any changes that have been made to its values since the row

was created.

Rows that existed in the DataSet when AcceptChanges was last called will have an

Original version, which contains the initial data values. Rows that are added to the

DataSet will not contain an Original version until AcceptChanges is called again.

If any of the columns of a DataTable have values assigned to their DefaultValue property,
all the DataRows in the table will have a Default version, with the values determined by
the DefaultValue of each column.

DataRows will have a Proposed version after a call to DataRow.BeginEdit and before

either EndEdit or CancelEdit is called. We’ll examine these methods, which are used to

temporarily suspend data constraints, in the next section.
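The read-only indexer that accepts a DataRowVersion makes these versions easy to inspect. Below is a small standalone console sketch (C#), separate from the chapter's sample project, that shows the Original, Current, and Proposed versions side by side.

using System;
using System.Data;

class RowVersionDemo
{
    static void Main()
    {
        DataTable table = new DataTable("EmployeeList");
        table.Columns.Add("FirstName", typeof(string));
        DataRow row = table.Rows.Add(new object[] { "Nancy" });
        table.AcceptChanges();                 // establishes the Original version

        row["FirstName"] = "Changed";
        Console.WriteLine(row["FirstName", DataRowVersion.Original]);  // Nancy
        Console.WriteLine(row["FirstName", DataRowVersion.Current]);   // Changed

        row.BeginEdit();
        row["FirstName"] = "Proposed Name";    // written to the Proposed version
        Console.WriteLine(row["FirstName", DataRowVersion.Proposed]);  // Proposed Name
        row.CancelEdit();                      // discards the Proposed version
        Console.WriteLine(row["FirstName", DataRowVersion.Current]);   // Changed
    }
}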


Exploring DataRow States and Versions

The example application for this chapter displays the Original and Current values of a

DataSet based on the EmployeeList view in the Northwind sample database. Because

the display is based on the Windows Form BindingContext object, which we won’t be

examining until Part V, the code to display these values is already in place.

1. Open the Editing project from the Start page or from the File menu.

2. Double-click Editing.vb (or Editing.cs, if you’re using C#) in the

Solution Explorer.

Microsoft Visual Studio displays the Editing form in the form designer.

3. Press F5 to run the application.

4. Use the navigation buttons at the bottom of the form to move through

the DataSet.

Note that all the rows have identical Current and Original versions and that

the RowStatus is Unchanged.

5. Change the value of the First Name or Last Name text box of one of

the rows, and then click Save.

The Current version of the row is updated to reflect the name, and the

RowStatus changes to Modified.


6. Close the application.

Editing Data in a DataSet

Editing data after it has been loaded into a DataSet is a straightforward process of calling

methods and setting property values. In this chapter, we’ll concentrate on manipulating

the contents of the DataSet programmatically, leaving the discussion of using Windows

and Web Form controls to Parts V and VI, respectively.

Roadmap We’ll examine editing using data-bound controls in Parts V

and VI.

Adding a DataRow

There is no way to create a new row directly in a DataTable. Instead, a DataRow object

must be created independently and then added to the DataTable’s Rows collection.

The DataTable’s NewRow method returns a detached row with the same schema as the

table on which it is called. The values of the row can then be set, and the new row

appended to the DataTable.

Add a Row to a DataTable

Visual Basic .NET

1. Double-click Add in the form designer.

Visual Studio opens the code editor and adds the Click event handler.

2. Add the following code to the procedure:

Dim drNew As System.Data.DataRow

drNew = Me.dsEmployeeList1.EmployeeList.NewRow()
drNew.Item("FirstName") = "New First"
drNew.Item("LastName") = "New Last"
Me.dsEmployeeList1.EmployeeList.Rows.Add(drNew)

The first line declares the DataRow variable that will contain the new row. Then the
NewRow method is called, instantiating the variable; its fields are set; and the row is
added to the Rows collection of the EmployeeList table.

3. Press F5 to run the application.
4. Click Add.

The application adds a new row.

5. Move to the last row in the DataSet by clicking the >> button.

The application displays the new row.

6. Close the application.

Visual C# .NET

1. Double-click Add in the form designer.

Visual Studio opens the code editor and adds the Click event handler.

2. Add the following code to the procedure:

dsEmployeeList.EmployeesRow drNew;

drNew = (dsEmployeeList.EmployeesRow)
    this.dsEmployeeList1.Employees.NewRow();
drNew["FirstName"] = "New First";
drNew["LastName"] = "New Last";
this.dsEmployeeList1.Employees.AddEmployeesRow(drNew);

The first line declares the typed DataRow variable that will contain the new row. Then the
NewRow method is called, instantiating the variable; its fields are set; and the row is
added to the Employees table by the AddEmployeesRow method.

3. Press F5 to run the application.
4. Click Add.

The application adds a new row.

5. Move to the last row in the DataSet by clicking the >> button.

The application displays the new row.

6. Close the application.


Deleting a DataRow

The DataTable’s Rows collection supports three methods to remove DataRows, as

shown in Table 9-3. Each of these methods physically removes the DataRow from the

collection.

Table 9-3: Remove Methods

Method             Description
Clear()            Removes all rows from the DataTable
Remove(DataRow)    Removes the specified DataRow
RemoveAt(Index)    Removes the DataRow at the position specified by the integer Index

However, a row that has been physically removed by using one of these methods won’t

be deleted from the data source. If you need to delete the row from the data source as

well, you must use the Delete method of the DataRow object instead.

The Delete method physically removes the DataRow only if it was added to the

DataTable since the last time AcceptChanges was called. Otherwise, it sets the

RowState to Deleted and sets the current values to Null.
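The difference matters because the DataAdapter can only issue a DELETE for rows it can still see. Below is a minimal console sketch (C#), independent of the sample project, contrasting the two calls.

using System;
using System.Data;

class RemoveVersusDelete
{
    static void Main()
    {
        DataTable table = new DataTable("EmployeeList");
        table.Columns.Add("LastName", typeof(string));
        table.Rows.Add(new object[] { "Davolio" });
        table.Rows.Add(new object[] { "Buchanan" });
        table.AcceptChanges();

        // Remove physically discards the row; a later DataAdapter.Update
        // never sees it, so nothing is deleted from the data source.
        table.Rows.Remove(table.Rows[0]);

        // Delete only marks the remaining row; Update would issue a DELETE for it.
        table.Rows[0].Delete();

        Console.WriteLine(table.Rows.Count);        // 1 (the marked row is still present)
        Console.WriteLine(table.Rows[0].RowState);  // Deleted
    }
}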

Delete a DataRow Using the Delete method

Visual Basic .NET

1. In the code editor, select btnDelete in the ControlName list, and then

select Click from the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Dim dr As System.Data.DataRow

'Get row currently displayed in the form
dr = GetRow()

'Delete the row
dr.Delete()

'Move to the next record & display
Me.BindingContext(Me.dsEmployeeList1, "EmployeeList").Position += 1
UpdateDisplay()

The GetRow and UpdateDisplay procedures, which are not intrinsic to the .NET
Framework, are contained in the Utility Functions region of the code.

3. Press F5 to run the application.
4. Use the navigation buttons to display the row for Nancy Davolio.
5. Click Delete.

The application deletes the row, displays the next row, and changes the number of
employees to 8.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Delete button.

Visual Studio adds the Click event handler to the code window.

2. Add the following event handler to the code window:

System.Data.DataRow dr;

//Get row currently displayed in the form
dr = GetRow();

//Delete the row
dr.Delete();

//Move to the next record & display
this.BindingContext[this.dsEmployeeList1, "Employees"].Position += 1;
UpdateDisplay();

The GetRow and UpdateDisplay procedures, which are not intrinsic to the .NET
Framework, are contained in the Utility Functions region of the code.

3. Press F5 to run the application.
4. Use the navigation buttons to display the row for Nancy Davolio.
5. Click Delete.

The application deletes the row, displays the next row, and changes the number of
employees to 8.

6. Close the application.

Changing DataRow Values

Changing the value of a column in a DataRow couldn’t be simpler—just reference the

column using the Item property of the DataRow, and assign the new value to it by using

a simple assignment operator.

The Item property is overloaded, supporting the forms shown in Table 9-4. However, the

three forms of the property that specify a DataRowVersion are read-only and cannot be

used to change the values. The other three forms return the Current version of the value

and may be changed.

Table 9-4: DataRow Item Properties

Method                           Description
Item(columnName)                 Returns the value of the column with the ColumnName
                                 property identified by the columnName string
Item(dataColumn)                 Returns the value of the specified dataColumn
Item(columnIndex)                Returns the value of the column specified by the
                                 columnIndex integer value (the Columns collection is
                                 zero-based)
Item(columnName, rowVersion)     Returns the value of the rowVersion version of the column
                                 with the ColumnName property identified by the columnName
                                 string
Item(dataColumn, rowVersion)     Returns the value of the rowVersion version of the
                                 specified dataColumn
Item(columnIndex, rowVersion)    Returns the value of the rowVersion version of the column
                                 specified by the columnIndex integer value

Edit a DataRow

Visual Basic .NET

1. In the code editor, select btnEdit in ControlName list, and then select

Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Dim drCurrent As System.Data.DataRow

drCurrent = GetRow()
drCurrent.Item("FirstName") = "Changed "
UpdateDisplay()

Again, the GetRow and UpdateDisplay procedures, which reference the Windows Form
data binding architecture, are not intrinsic to the .NET Framework. They are in the Utility
Functions region of the code.

3. Press F5 to run the application.
4. Click Edit.

The application changes the Current version of the FirstName column to Changed and
changes the RowStatus to Modified.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Edit button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to the code window:

System.Data.DataRow drCurrent;

drCurrent = GetRow();
drCurrent["FirstName"] = "Changed ";
UpdateDisplay();

Again, the GetRow and UpdateDisplay procedures, which reference the Windows Form
data binding architecture, are not intrinsic to the .NET Framework. They are in the Utility
Functions region of the code.

3. Press F5 to run the application.
4. Click Edit.

The application changes the Current version of the FirstName column to Changed and
changes the RowStatus to Modified.

5. Close the application.


Deferring Changes to DataRow Values

Sometimes it’s necessary to temporarily suspend validation of data until a series of edits

have been performed, either for performance reasons or because rows will temporarily

be in violation of business or integrity constraints.

BeginEdit does just that—it suspends the Column and Row change events until either

EndEdit or CancelEdit is called. During the editing process, assignments are made to

the Proposed version of the DataRow instead of to the Current version. This is the only

time the Proposed version exists.

If the edit is completed by calling EndEdit, the Proposed column values are copied to the

Current version and the Proposed version of the DataRow is removed. If the edit is

completed by calling CancelEdit, the Proposed version of the DataRow is removed,

leaving the Current column values unchanged. In effect, EndEdit and CancelEdit commit

and roll back the changes, respectively.

Use BeginEdit to Defer Column Changes

Visual Basic .NET

1. In the code editor, select btnDefer in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to the procedure:

Dim drCurrent As System.Data.DataRow

drCurrent = GetRow()
With drCurrent
    .BeginEdit()
    .Item("FirstName") = "Proposed Name"
    MessageBox.Show(drCurrent.Item("FirstName", DataRowVersion.Proposed))
    .CancelEdit()
End With

3. Press F5 to run the application.
4. Click Defer.

The application displays Proposed Name in a message box.

5. Click OK to close the message box.

Because the edit was canceled, the Current value of the column and the RowStatus
remain unchanged.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Defer button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to the code window:

System.Data.DataRow drCurrent;

drCurrent = GetRow();

drCurrent.BeginEdit();
drCurrent["FirstName"] = "Proposed Name";
MessageBox.Show(drCurrent["FirstName",
    System.Data.DataRowVersion.Proposed].ToString());
drCurrent.CancelEdit();

3. Press F5 to run the application.
4. Click Defer.

The application displays Proposed Name in a message box.

5. Click OK to close the message box.

Because the edit was canceled, the Current value of the column and the RowStatus
remain unchanged.

6. Close the application.

Updating Data Sources

After changes have been made to the in-memory copy of the data represented by the

DataSet, they can be propagated to the data source either by executing the appropriate

Command objects against a connection or by calling the Update method of the

DataAdapter (which, of course, executes the Command objects that it references).

Using the DataAdapter’s Update Method

The System.Data.Common.DbDataAdapter, which you will recall is the DataAdapter

class from which relational database Data Providers inherit their DataAdapters, supports

a number of versions of the Update method, as shown in Table 9-5. Neither the

SqlDataAdapter nor the OleDbDataAdapter add any additional versions.

Table 9-5: DbDataAdapter Update Methods

Update Method                         Description
Update(DataSet)                       Updates the data source from a DataTable named Table
                                      in the specified DataSet
Update(dataRows)                      Updates the data source from the specified array of
                                      dataRows
Update(DataTable)                     Updates the data source from the specified DataTable
Update(dataRows, DataTableMapping)    Updates the data source from the specified array of
                                      dataRows, using the specified DataTableMapping
Update(DataSet, sourceTable)          Updates the data source from the DataTable specified
                                      in sourceTable in the specified DataSet

The Command object exposes a property called UpdatedRowSource that controls whether the
DataSet will be updated using any results from executing the SQL command on the data
source. The possible values for the UpdatedRowSource property, defined by the
UpdateRowSource enumeration, are shown in Table 9-6.

Table 9-6: UpdateRowSource Values

Value                  Description
Both                   Maps both the output parameters and the first returned row to the
                       changed row in the DataSet
FirstReturnedRecord    Maps the values in the first returned row to the changed row in the
                       DataSet
None                   Ignores any output parameters or returned rows
OutputParameters       Maps output parameters to the changed row in the DataSet

By default, commands that are automatically generated for a DataAdapter will have their

UpdatedRowSource values set to None. Commands that are created by setting the

CommandText property, either in code or by using the Query Builder, will default to Both.

When the Update method is called, the following actions occur:

1. The DataAdapter examines the RowState of each row in the specified

DataSet or DataTable and executes the appropriate command—insert,

update, or delete.

2. The Parameters collection of the appropriate Command object will be

filled based on the SourceColumn and SourceVersion properties.

3. The RowUpdating event is raised.


4. The command is executed.

5. Depending on the value of the UpdatedRowSource property, the

DataAdapter may update the row values in the DataSet.

6. The RowUpdated event is raised.

7. AcceptChanges is called on the DataSet or DataTable.
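If you need to observe or intervene in this sequence, the SqlDataAdapter exposes the RowUpdating and RowUpdated events raised in steps 3 and 6. The sketch below (C#) simply traces them; the connection string and use of a SqlCommandBuilder are assumptions for illustration, not part of the chapter's sample project.

using System;
using System.Data;
using System.Data.SqlClient;

class UpdatePipelineDemo
{
    static void Main()
    {
        SqlConnection cn = new SqlConnection(
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI");
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT EmployeeID, LastName FROM Employees", cn);
        SqlCommandBuilder cb = new SqlCommandBuilder(da);

        DataSet ds = new DataSet();
        da.Fill(ds, "Employees");
        ds.Tables["Employees"].Rows[0]["LastName"] = "Changed";

        da.RowUpdating += new SqlRowUpdatingEventHandler(OnRowUpdating);
        da.RowUpdated += new SqlRowUpdatedEventHandler(OnRowUpdated);
        da.Update(ds, "Employees");
    }

    static void OnRowUpdating(object sender, SqlRowUpdatingEventArgs e)
    {
        // Raised after the parameters are filled but before the command executes.
        Console.WriteLine("About to perform " + e.StatementType + ".");
    }

    static void OnRowUpdated(object sender, SqlRowUpdatedEventArgs e)
    {
        // Raised after the command executes; RecordsAffected reports the result.
        Console.WriteLine(e.StatementType + " affected " + e.RecordsAffected + " row(s).");
    }
}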

Update a Data Source

Visual Basic .NET

1. In the code editor, select btnUpdate in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler.

2. Add the following code to the procedure:

Me.daEmployeeList.Update(Me.dsEmployeeList1.EmployeeList)
UpdateDisplay()

3. Press F5 to run the application.
4. Type Changed after Steven in the First Name text box, and then click Save.

The application sets the Current value of the column to Steven Changed.

5. Click Update.

The application updates the data source and then resets the contents of the DataSet.

6. Close the application.


Visual C# .NET

1. In the form designer, double-click the Update button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to the code window:

this.daEmployeeList.Update(this.dsEmployeeList1.Employees);
UpdateDisplay();

3. Press F5 to run the application.
4. Type Changed after Steven in the First Name text box, and then click Save.

The application sets the Current value of the column to Steven Changed.

5. Click Update.

The application updates the data source and then resets the contents of the DataSet.

6. Close the application.

Executing Command Objects

The DataAdapter’s Update method, although very convenient, isn’t always the best

choice for persisting changes to a data source. Sometimes, of course, you won’t be

using a DataAdapter. Sometimes you’ll be using a structure other than a DataSet to

store the data. And sometimes, in order to maintain data integrity, it will be necessary to

perform operations in a particular order. In any of these situations, you can use

Command objects to control the order in which the updates are performed.

When the DataAdapter’s Update method is used to propagate changes to a data source,

it will use the SourceColumn and SourceVersion properties to fill the Parameters


collection. As we saw in Chapter 8, when executing a Command object directly, you

must explicitly set the Parameter values.

Update a Data Source Using a Data Command

Visual Basic .NET

1. In the code editor, select btnCmd in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Dim cmdUpdate As System.Data.SqlClient.SqlCommand
Dim drCurrent As System.Data.DataRow

cmdUpdate = Me.daEmployeeList.UpdateCommand
drCurrent = GetRow()

cmdUpdate.Parameters("@first").Value = drCurrent("FirstName")
cmdUpdate.Parameters("@last").Value = drCurrent("LastName")
cmdUpdate.Parameters("@empID").Value = drCurrent("EmployeeID")

Me.cnNorthwind.Open()
cmdUpdate.ExecuteNonQuery()
Me.cnNorthwind.Close()

This code first creates two temporary variables, and then it sets them to the Update
command of the daEmployeeList DataAdapter and the row currently being displayed on
the form, respectively. It then sets the three parameters in the Update command to the
values of the row. Finally, the connection is opened, the command executed, and the
connection closed.

3. In the code editor, select btnFill in the ControlName list, and then select Click in the
MethodName list.

Visual Studio adds the Click event handler to the code.

4. Add the following code to the procedure:

Me.dsEmployeeList1.EmployeeList.Clear()
Me.daEmployeeList.Fill(Me.dsEmployeeList1.EmployeeList)
UpdateDisplay()

This code reloads the data into the DataSet from the data source, and then it updates the
version and row status information of the form.

5. Press F5 to run the application.
6. In the First Name text box, change Steven Changed to Steven, and then click Save.

The application updates the Current value of the DataRow.

7. Click Command.

The application updates the data source, but because executing the command directly
does not update the DataSet, the change isn't reflected.

8. Click Fill.

The application reloads the data. Note that the First Name text box has been changed.

9. Close the application.


Visual C# .NET

1. In the form designer, double-click the Command button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to the code editor:

System.Data.SqlClient.SqlCommand cmdUpdate;
System.Data.DataRow drCurrent;

cmdUpdate = this.daEmployeeList.UpdateCommand;
drCurrent = GetRow();

cmdUpdate.Parameters["@FirstName"].Value = drCurrent["FirstName"];
cmdUpdate.Parameters["@LastName"].Value = drCurrent["LastName"];
cmdUpdate.Parameters["@empID"].Value = drCurrent["EmployeeID"];

this.cnNorthwind.Open();
cmdUpdate.ExecuteNonQuery();
this.cnNorthwind.Close();

This code first creates two temporary variables, and then it sets them to the Update
command of the daEmployeeList DataAdapter and the row currently being displayed on
the form, respectively. It then sets the three parameters in the Update command to the
values of the row. Finally, the connection is opened, the command executed, and the
connection closed.

3. In the form designer, double-click the Fill button.

Visual Studio adds the event handler to the code window.

4. Add the following code to the procedure:

this.dsEmployeeList1.Employees.Clear();
this.daEmployeeList.Fill(this.dsEmployeeList1.Employees);
UpdateDisplay();

This code reloads the data into the DataSet from the data source and then updates the
version and row status information of the form.

5. Press F5 to run the application.
6. In the First Name text box, change Steven Changed to Steven, and then click Save.

The application updates the Current value of the DataRow.

7. Click Command.

The application updates the data source, but because executing the command directly
does not update the DataSet, the change isn't reflected.

8. Click Fill.

The application reloads the data. Note that the First Name text box has been changed.

9. Close the application.


Accepting and Rejecting DataSet Changes

The final step in the process of updating data is to set a new baseline for the DataRows.

This is done by using the AcceptChanges method. The DataAdapter’s Update method

calls AcceptChanges automatically. If you execute a command directly, you must call

AcceptChanges to update the row state values.

If instead of accepting the changes made to the DataSet, you want to discard them, you

can call the RejectChanges method. RejectChanges returns the DataSet to the state it

was in the last time AcceptChanges was called, discarding all new rows, restoring

deleted rows, and returning all columns to their original values.

Important If you call AcceptChanges or RejectChanges prior to updating the data source,
you will lose the ability to persist the changes made since the last time AcceptChanges
was called using the Update method. The DataAdapter's Update method uses the
RowState property to determine which rows to persist, and both AcceptChanges and
RejectChanges set the RowState of every row to Unchanged.
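A minimal console sketch (C#) of the pitfall described in the note above; the connection string and SqlCommandBuilder are assumptions for illustration, not part of the sample project.

using System;
using System.Data;
using System.Data.SqlClient;

class AcceptBeforeUpdate
{
    static void Main()
    {
        SqlConnection cn = new SqlConnection(
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI");
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT EmployeeID, LastName FROM Employees", cn);
        SqlCommandBuilder cb = new SqlCommandBuilder(da);

        DataSet ds = new DataSet();
        da.Fill(ds, "Employees");
        ds.Tables["Employees"].Rows[0]["LastName"] = "Changed";

        ds.AcceptChanges();              // wrong order: every RowState is now Unchanged

        int persisted = da.Update(ds, "Employees");
        Console.WriteLine(persisted);    // 0; the change was never sent to the server
    }
}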

Using AcceptChanges

The AcceptChanges method is supported by the DataSet, the DataTable, and the
DataRow. Under most circumstances, you need only call AcceptChanges on the DataSet
because it calls AcceptChanges for each DataTable that it contains, and the DataTable,
in turn, calls AcceptChanges for each DataRow.

When the AcceptChanges call reaches the DataRow, rows with a RowState of either
Added or Modified will have the Original values of each column changed to the Current
values, and their RowState will be set to Unchanged. Deleted rows will be removed
from the Rows collection.

Accept Changes to a DataSet

Visual Basic .NET

1. Add the following code to the end of the btnCmd_Click procedure that

you created in the previous exercise:

Me.dsEmployeeList1.AcceptChanges()
UpdateDisplay()

2. Press F5 to run the application.
3. In the Last Name text box, type New after Buchanan, and then click Save.

The application updates the Current value.

4. Click Command.

Because the AcceptChanges method is called, the Version and RowStatus information is
updated.

5. In the Last Name text box, change Buchanan New back to Buchanan, and then click
Save.

The application updates the Current value and RowStatus.

6. Click Accept.

The application updates the Original value and RowStatus.

7. Click Update, and then click Fill.

Because the RowState of the DataRow had been reset to Unchanged, no changes were
persisted to the data source.

8. Close the application.

Visual C# .NET

1. Add the following code to the end of the btnCmd_Click procedure that

you created in the previous exercise:

this.dsEmployeeList1.AcceptChanges();
UpdateDisplay();

2. Press F5 to run the application.
3. In the Last Name text box, type New after Buchanan, and then click Save.

The application updates the Current value.

4. Click Command.

Because the AcceptChanges method is called, the Version and RowStatus information is
updated.

5. In the Last Name text box, change Buchanan New back to Buchanan, and then click
Save.

The application updates the Current value and RowStatus.

6. Click Accept.

The application updates the Original value and RowStatus.

7. Click Update, and then click Fill.

Because the RowState of the DataRow had been reset to Unchanged, no changes were
persisted to the data source.

8. Close the application.

Using RejectChanges

Like AcceptChanges, the RejectChanges method is supported by the DataSet,

DataTable, and DataRow objects, and each object cascades the call to the objects below

it in the hierarchy.

When the RejectChanges call reaches the DataRow, rows with a RowState of either
Deleted or Modified will have the Current values of each column restored to the Original
values, and their RowState will be set to Unchanged. Added rows will be removed from
the Rows collection.

Reject the Changes to a DataRow

Visual Basic .NET

1. In the code editor, select btnReject in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Me.dsEmployeeList1.RejectChanges()
UpdateDisplay()

3. Press F5 to run the application.
4. In the First Name text box, change Steven to Reject, and then click Save.

The application updates the Current value and RowStatus.

5. Click Reject.

The application returns the Current version of the row to its Original values and then
resets the RowStatus to Unchanged.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Reject button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to the code editor:

this.dsEmployeeList1.RejectChanges();
UpdateDisplay();

3. Press F5 to run the application.
4. In the First Name text box, change Steven to Reject, and then click Save.

The application updates the Current value and RowStatus.

5. Click Reject.

The application returns the Current version of the row to its Original values and then
resets the RowStatus to Unchanged.

6. Close the application.

Chapter 9 Quick Reference

Add a row to a DataTable
  Use the NewRow method of the DataTable to create the row, and then use the Add
  method of the Rows collection:
    newRow = myTable.NewRow()
    myTable.Rows.Add(newRow)

Delete a row from a DataTable
  Use the Delete method of the DataRow:
    myRow.Delete()

Change the values in a DataRow
  Use the DataRow's Item property:
    myRow.Item("ColumnName") = newValue

Suspend constraint enforcement
  Use BeginEdit combined with either EndEdit or CancelEdit:
    myRow.BeginEdit()
    myRow.Item("ColumnName") = newValue
    myRow.EndEdit()
  Or:
    myRow.BeginEdit()
    myRow.Item("ColumnName") = newValue
    myRow.CancelEdit()

Accept changes to data
  Use the AcceptChanges method of the DataSet, DataTable, or DataRow:
    myDataSet.AcceptChanges()

Reject changes to data
  Use the RejectChanges method of the DataSet, DataTable, or DataRow:
    myDataSet.RejectChanges()

Chapter 10: ADO.NET Data-Binding in Windows

Forms

Overview

In this chapter, you’ll learn how to:


§ Simple-bind control properties using the Properties window

§ Simple-bind control properties using the Advanced Binding dialog box

§ Simple-bind control properties at run time

§ Complex-bind control properties using the Properties window

§ Complex-bind control properties at run time

§ Use CurrencyManager properties

§ Respond to CurrencyManager events

§ Use the Binding object’s properties

In previous chapters, we have, of course, been binding data to controls on Windows

Forms, but we haven’t really looked at the process in any detail. We’ll begin to do that in

this chapter. We’ll start by examining the underlying mechanisms used to bind Windows

Forms controls to Microsoft ADO.NET data sources. In Chapter 11, we’ll examine the

techniques used to perform some common data-binding tasks.

Understanding Data-Binding in Windows Forms

The Microsoft .NET Framework provides an extremely powerful and flexible mechanism

for binding data to properties of controls. Although in the majority of cases you will bind

to the displayed value of a control—for example, the DisplayMember property of a

ListBox control or the Text property of a TextBox control—you can bind any property of a

control to a data source.

This makes it possible, for example, to bind the background and foreground colors of a

form and the font characteristics of its controls to a row in a database table. By using this

technique, you could allow users to customize an application’s user interface without

requiring any changes to the code base.

Data Sources

Windows Forms controls can be bound to any data source, not just traditional database

tables. Technically, to qualify as a data source, an object must implement the IList,

IBindingList, or IEditableObject interface.

The IList interface, the simplest of the three, is implemented by arrays and collections.

This means that it’s possible, for example, to bind the Text property of a label to the

contents of a ListBox control’s ObjectCollection (although it’s difficult to think of a

situation in which doing so might be useful). Any object that implements both the IList

and the IComponent interfaces can be bound at design time as well as at run time.

The IBindingList interface, which is implemented by the DataView and

DataViewManager objects, supports change notification. Objects that implement this

interface raise ListChanged events to notify the application when either an item in the

list or the list itself has been changed.

Finally, the IEditableObject interface, which is implemented by the DataRowView

object, exposes the BeginEdit, EndEdit, and CancelEdit methods.

Fortunately, when you’re working within ADO.NET, you can largely ignore the details of

interface implementation. They’re really only important if you are building your own

data source objects.
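As a concrete illustration of the simplest case, the short sketch below (C#) binds a ListBox to a plain string array, which qualifies as a data source only because arrays implement IList. The form and control here are hypothetical, not part of the chapter's Binding project.

using System;
using System.Windows.Forms;

class ArrayBindingForm : Form
{
    public ArrayBindingForm()
    {
        string[] categories = { "Beverages", "Condiments", "Confections" };

        ListBox lb = new ListBox();
        lb.Dock = DockStyle.Fill;
        lb.DataSource = categories;   // complex-binding to an IList (no change notification)
        this.Controls.Add(lb);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new ArrayBindingForm());
    }
}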

Within the .NET Framework, the actual binding of data in a Windows form is

handled by a number of objects working in conjunction, as shown below.


At the highest level in the logical architecture is the BindingContext object. Any

object that inherits from the Control class can contain a BindingContext object.

In most cases, you’ll use the form’s BindingContext object, but if your form

includes a container control, such as a Panel or a GroupBox, that contains

data-bound controls, it may be easier to create a separate BindingContext

object for the container control because it saves a level of indirection when

referencing the contained controls.

The BindingContext object manages one or more BindingManagerBase

objects, one for each data source that is referenced by the form. The

BindingManagerBase is an abstract class, so instances of this object cannot be

directly instantiated. Instead, the objects managed by the BindingContext

object will actually be instances of either the PropertyManager class or the

CurrencyManager class. All of these objects are implemented in the

System.Windows.Forms namespace.

If the data source can return only a single value, the BindingManagerBase

object will be an instance of the PropertyManager class. If the data source

returns (or can return) a collection of objects, the BindingManagerBase object

will be an instance of the CurrencyManager class. ADO.NET objects will

always instantiate CurrencyManagers.

The CurrencyManager object keeps track of position in the list and

manages the bindings to that data source. Note that the data source itself

doesn’t know which item is being displayed.

ADO The CurrencyManager’s Position property maintains the current

row in a data source. ADO.NET data sources don’t support

cursors and therefore have no knowledge of the ‘current’ row. This

may at first seem awkward, but is actually a more powerful

architecture because it’s now possible to maintain multiple

‘cursors’ in a single data source.

There is a separate instance of the CurrencyManager object for each discrete

data source. If all of the controls on a form bind to a single data source, there

will be a single CurrencyManager. For example, a form that contains text

boxes displaying fields from a single table will contain a single

CurrencyManager object. However, if there are multiple data sources, as in a

form that displays master/detail information, there will be separate

CurrencyManager objects for each data source.

Windows Forms controls contain a DataBindings collection that contains the

Binding objects for that control. The Binding object, as we’ll see, specifies the

data source, the control that is being bound, and the property of the control that

will display the data for simple-bound properties.


The CurrencyManager inherits a BindingsCollection property from the

BindingManagerBase class. The BindingsCollection contains references to the

Binding objects for each control.

Binding Controls to an ADO.NET Data Source

Windows Forms controls in the .NET Framework support two different types of data

binding: simple and complex. The distinction is really quite simple. Control properties that

contain a single value are simple-bound, while properties that contain multiple values,

such as the displayed contents of list boxes and data grids, are complex-bound.

Any given control can contain both simple-bound and complex-bound attributes. For

example, the MonthCalendar control’s MaxDate property, which determines the

maximum allowable selected date, is a simple-bound property containing a single

DateTime value, while its BoldedDates property, which contains an array of dates that

are to be displayed in bold formatting, would be complex-bound.

Simple-Binding Control Properties

In the .NET Framework, any property of a control that contains a single value can be

simple-bound to a single value in a data source.

Binding can take place either at design time or at run time. In either situation, you must

specify three values: the name of property to be bound, the data source, and a

navigation path within the data source that resolves to a single value.

The navigation path consists of a period-delimited hierarchy of names. For example, to

reference the ProductID column of the Products table, the navigation path would be

Products.ProductID.

The Microsoft Visual Studio .NET Properties window contains a Data Bindings section

that displays the properties that are most commonly data-bound. Other properties are

available through the (Advanced) section, which opens the Advanced Data Binding

dialog box. The Advanced Data Binding dialog box provides design time access to all the

simple-bound properties of the selected control.

Bind a Property Using the Properties Window

1. Open the Binding project from the Start page or by using the File

menu.

2. In the Solution Explorer, double-click Binding.vb (or Binding.cs, if

you’re using C#) to open the form.

Visual Studio displays the form in the form designer.

3. Select the tbCategoryID text box (after the Category ID label).


4. In the Properties window, expand the Data Bindings section, and then

open the drop-down list for the Text property.

5. Expand dsMaster1, expand Categories, and then select CategoryID.

Bind a Property Using the Advanced Binding Dialog Box

1. In the form designer, select the tbCategoryName text box (after the

Name label).

2. In the Properties window, expand the DataBindings section (if

necessary), and then click the Ellipsis button after the (Advanced)

property.

Visual Studio opens the Advanced Data Binding dialog box with the Text

property selected.

3. Open the drop-down list for the Text property, expand dsMaster,

expand Categories, and then select CategoryName.

4. Click Close.

Visual Studio sets the data binding. Because Text is one of the default data-bound

properties, its value is shown in the Properties window.


When you bind a control at design time, you simply select the appropriate column from

the drop-down list in the Properties window or the Advanced Data Binding dialog box.

When you’re binding at run time, you must specify two values separately.

The .NET Framework provides a lot of flexibility in how you specify the data source and

navigation path values when creating a binding at run time. For example, both of the

following Binding objects will refer to the ProductID column of the Products table:

bndFirst = New System.Windows.Forms.Binding("Text", Me.dsMaster1, _
    "Products.ProductID")

bndSecond = New System.Windows.Forms.Binding("Text", _
    Me.dsMaster1.Products, "ProductID")

However, because the data source properties are different, the .NET Framework will

create different CurrencyManagers to manage them, and the controls on the form will not

be synchronized.

In some situations, this might be useful. For example, you might need to display two

different rows of a table on a single form, and this technique makes it easy to do so.

However, in the majority of cases, you’ll want all the controls on a form that are bound to

the same table to display information from the same row, and in order to achieve this,

you must be consistent in the way you specify the data source and navigation path

values.

Tip If you’re creating a binding at run time that you want synchronized

with design-time bindings, specify only the top level of the hierarchy as the data source:

bndFirst = New System.Windows.Forms.Binding("Text", _
    Me.dsMaster1, "Products.ProductID")

Bind a Property at Run Time

Visual Basic .NET

1. In the form designer, double-click the Simple button.

Visual Studio opens the code editor and adds the btnSimple Click event

handler.

2. Add the following lines to bind the tbCategoryDescription text box to

the Categories.Description column:

Dim newBinding As System.Windows.Forms.Binding

newBinding = New System.Windows.Forms.Binding("Text", _
    Me.dsMaster1, "Categories.Description")
Me.tbCategoryDescription.DataBindings.Add(newBinding)

This code first declares a new Binding object, and then instantiates it by passing the
property name ("Text"), data source (Me.dsMaster1), and navigation path
("Categories.Description") to the constructor. Finally, the new Binding object is added to
the DataBindings collection of the tbCategoryDescription control by using the Add method.

3. Press F5 to run the application.
4. Click the Simple button.

The application adds the binding and displays the value in the text box.

Roadmap We'll examine the code that implements these buttons later in this chapter.

5. Click the Next button (">") at the bottom of the form.

The application displays the next category, along with its description.

Important If we had passed dsMaster1.Categories as the data source and "Description"
as the navigation path to the Binding's constructor, the Description field would not
display data from the current row because Visual Studio would have created a second
CurrencyManager. When creating bindings that are to be synchronized with design-time
bindings, be sure to specify only the DataSet as the data source.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Simple button.

Visual Studio opens the code editor and adds the btnSimple Click event

handler.

2. Add the following lines to bind the tbCategoryDescription text box to

the Categories.Description column:

System.Windows.Forms.Binding newBinding;

newBinding = new System.Windows.Forms.Binding("Text",
    this.dsMaster1, "Categories.Description");
this.tbCategoryDescription.DataBindings.Add(newBinding);

This code first declares a new Binding object, and then instantiates it by passing the
property name ("Text"), data source (this.dsMaster1), and navigation path
("Categories.Description") to the constructor. Finally, the new Binding object is added to
the DataBindings collection of the tbCategoryDescription control by using the Add method.

3. Press F5 to run the application.
4. Click the Simple button.

The application adds the binding and displays the value in the text box.

Roadmap We'll examine the code that implements these buttons later in this chapter.

5. Click the Next button (">") at the bottom of the form.

The application displays the next category, along with its description.

Important If we had passed dsMaster1.Categories as the data source and "Description"
as the navigation path to the Binding's constructor, the Description field would not
display data from the current row because Visual Studio would have created a second
CurrencyManager. When creating bindings that are to be synchronized with design-time
bindings, be sure to specify only the DataSet as the data source.

6. Close the application.

Complex-Binding Control Properties

Unlike simple-bound properties, which must be bound to a single value, complex-bound

control properties contain (and possibly display) multiple items. The most common

examples of complex-bound controls are, of course, the ListBox and ComboBox, but any

control property that accepts multiple values can be complex-bound.

Although the techniques can vary somewhat depending on the specific control, most

complex-bound controls are bound by setting the DataSource property directly rather

than by adding a Binding object to the DataBindings collection.

The most common complex-bound controls, the ListBox, ComboBox, and DataGrid, also

expose a DisplayMember property, which determines what will be displayed by the

control. In the case of the ListBox and ComboBox controls, the DisplayMember property

must resolve to a single value, while the DataGrid control can display multiple values for

each row (for example, all the columns of a DataTable).

Roadmap We’ll examine the use of the ValueMember property to create

look-up tables in Chapter 11.

In addition, the ListBox and ComboBox controls expose a ValueMember property, which

allows the control to display a user-friendly name while updating an underlying DataSet

with the value of a different column.

One particularly convenient possibility when using complex-bound controls is to bind to a

relationship rather than to a DataSet, which causes the items displayed in the control to

be automatically filtered. We’ll see an example of this technique in the following exercise.
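For reference, the same relation-based filtering can also be set up in code. The sketch below (C#) builds a small stand-in DataSet with a relation named CategoryProducts and binds through the relation path; the table layout is an assumption that mirrors the sample project's Categories and Products tables.

using System;
using System.Data;
using System.Windows.Forms;

class RelationBindingForm : Form
{
    public RelationBindingForm()
    {
        // A stand-in DataSet with a Categories/Products relation.
        DataSet ds = new DataSet();
        DataTable cats = ds.Tables.Add("Categories");
        cats.Columns.Add("CategoryID", typeof(int));
        cats.Columns.Add("CategoryName", typeof(string));
        DataTable prods = ds.Tables.Add("Products");
        prods.Columns.Add("ProductName", typeof(string));
        prods.Columns.Add("CategoryID", typeof(int));
        ds.Relations.Add("CategoryProducts",
            cats.Columns["CategoryID"], prods.Columns["CategoryID"]);

        cats.Rows.Add(new object[] { 1, "Beverages" });
        cats.Rows.Add(new object[] { 2, "Condiments" });
        prods.Rows.Add(new object[] { "Chai", 1 });
        prods.Rows.Add(new object[] { "Aniseed Syrup", 2 });

        // Binding through the relation path filters the products to the
        // current row of the Categories CurrencyManager.
        ListBox lbProducts = new ListBox();
        lbProducts.Dock = DockStyle.Fill;
        lbProducts.DataSource = ds;
        lbProducts.DisplayMember = "Categories.CategoryProducts.ProductName";
        this.Controls.Add(lbProducts);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new RelationBindingForm());
    }
}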

Add a Complex Data-Binding Using the Properties Window

1. In the form designer, select the lbProducts ListBox.

2. In the Properties window, select DataSource, and then select

dsMaster1 from the drop-down list.


3. In the DisplayMember drop-down list, expand Categories, expand

CategoryProducts, and then select the ProductName column.

4. Press F5 to run the application.

Visual Studio displays the products in the current category.

Roadmap We’ll examine the code that implements these buttons later in

this chapter.

5. Click the Next button (“>”) at the bottom of the form.

The application displays the next category, along with its products.

6. Close the application.

Add a Complex Data-Binding at Run Time

Visual Basic .NET

1. In the form designer, double-click the Complex button.

Visual Studio opens the code editor and adds the Click event handler for the

btnComplex button.

2. Add the following code to the event handler:

Me.lbOrderDates.DataSource = Me.dvOrderDates
Me.lbOrderDates.DisplayMember = "OrderDate"

This code simply sets the DataSource and DisplayMember properties to the OrderDate
column of the dvOrderDates DataView.

3. Press F5 to run the application, and then click the Complex button.

The OrderDates list box displays the dates for the product selected in the Products list
box.

4. Select a different product to confirm that the dates that are displayed change.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Complex button.

Visual Studio opens the code editor and adds the Click event handler for the

btnComplex button.

2. Add the following code to the event handler:

this.lbOrderDates.DataSource = this.dvOrderDates;
this.lbOrderDates.DisplayMember = "OrderDate";

This code simply sets the DataSource and DisplayMember properties to the OrderDate
column of the dvOrderDates DataView.

3. Press F5 to run the application, and then click the Complex button.

The OrderDates list box displays the dates for the product selected in the Products list
box.

4. Select a different product to confirm that the dates that are displayed change.

5. Close the application.

Using the BindingContext Object

As we have seen, the BindingContext object is the highest level object in the binding

hierarchy and manages the BindingManagerBase objects that control the interaction

between a data source and the controls bound to it.

The BindingContext object doesn’t expose any useful methods or events, and has only a

single property, as shown in Table 10-1. The Item property is used to index into the

BindingManagerBase collection contained in the BindingContext object. The first version,


which uses only the data source as a parameter, is used if no navigation path is

required. For example, if a DataTable is specified as the data source for a DataGrid, you

could use the following syntax to retrieve the CurrencyManager that controls that

binding:

Me.myDG.DataSource = Me.myDataSet.myTable

myCurrencyManager = Me.BindingContext(Me.myDataSet.myTable)

The second version of the Item property allows the specification of the navigation path.

However, the navigation path provided here must resolve to a list, not a single property.

For example, if a text box is bound to the Description column of a DataTable, the

following syntax would be used to retrieve the CurrencyManager that controls the

binding:

Me.myText.DataBindings.Add("Text", Me.myDataSet, "myTable.Description")

myCurrencyManager = Me.BindingContext(Me.myDataSet, "myTable")

Table 10-1: BindingContext Properties

Property                        Description
Item(DataSource)                Returns the BindingManagerBase object associated with the
                                specified DataSource
Item(DataSource, DataMember)    Returns the BindingManagerBase object associated with the
                                specified DataSource and DataMember, where the DataMember
                                is a table or relation

Using the CurrencyManager Object

The CurrencyManager object is fundamental to the Windows Forms data-binding

architecture. Through its properties, methods, and events, the CurrencyManager object

manages the link between a data source and the controls that display data from that

source.

CurrencyManager Properties

The properties exposed by the CurrencyManager are shown in Table 10-2. With the

exception of the Position property, they are all read-only.

Table 10-2: CurrencyManager Properties

Property    Description
Bindings    The collection of Binding objects being managed by the CurrencyManager
Count       The number of rows managed by the CurrencyManager
Current     The value of the current object in the data source
List        The list managed by the CurrencyManager
Position    Gets or sets the current item in the list managed by the CurrencyManager

The Bindings and List properties define the relationship between the data source and the

controls bound to it. The Bindings property, which returns a BindingsCollection object,

contains the Binding object for each individual control property that is bound to the data

source. We’ll examine the Binding object later in this chapter.

The List property returns a reference to the data source that is managed by the
CurrencyManager, typed as the IList interface. To treat the data source as its native type
in code, you must explicitly cast it to that type.

As might be expected, the Count property returns the number of rows in the list managed

by the CurrencyManager. Unlike some other environments, the Count property is

immediately available—it is not necessary to move to the end of the list before the Count

property is set.

The Current property returns the value of the current row in the data source as an object.

Like the List property, if you want to treat the value returned by Current as its native type,

you must explicitly cast it.

Remember that the Current property is read-only. To change the current row in the data

source, you must use the Position property, which is the only property exposed by the

CurrencyManager that is not read-only. The Position property is an integer that

represents the zero-based index into the List property.
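As a small sketch (C#) of these properties, assuming it runs inside the chapter's sample form (so dsMaster1 and the Categories binding already exist), the Current value can be cast to a DataRowView and the Position property used to move every bound control:

System.Windows.Forms.CurrencyManager cm;
cm = (System.Windows.Forms.CurrencyManager)
    this.BindingContext[this.dsMaster1, "Categories"];

// Current is read-only and returns an object; cast it to its native type.
System.Data.DataRowView current = (System.Data.DataRowView) cm.Current;
MessageBox.Show("Current category: " + current["CategoryName"].ToString());

// Position is the only writable property; changing it moves every bound control.
cm.Position += 1;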

Use CurrencyManager Read-Only Properties

Visual Basic .NET

1. In the code editor, select btnReadOnly in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the method:

Dim strMsg As String
Dim cm As System.Windows.Forms.CurrencyManager
Dim dsrc As System.Data.DataView

cm = Me.BindingContext(Me.dsMaster1, "Categories")
dsrc = CType(cm.List, System.Data.DataView)

strMsg = "There are " & cm.Count.ToString & " rows in "
strMsg += dsrc.Table.TableName.ToString & "."
strMsg += vbCrLf & "There are " & cm.Bindings.Count.ToString
strMsg += " controls bound to it."
MessageBox.Show(strMsg)

The first three lines declare some local variables. The fourth line sets the variable cm to
the CurrencyManager for the Categories DataTable, while the next line assigns the
variable dsrc to the data source referenced by the List property.

Note that the value returned by List is explicitly cast to a DataView. (Remember that
although Categories is a DataTable, data binding always occurs to the default view.)

The remaining lines display the Count and Bindings.Count properties in a message box.

3. Press F5 to run the application.
4. Click the Read-Only button.

The application displays the CurrencyManager properties, showing two bound controls.

5. Dismiss the dialog box, and then click the Simple button.

The application adds the binding for the Description control.

6. Click the Read-Only button.

The application displays the CurrencyManager properties, showing three bound controls.

7. Close the application.

Visual C# .NET

1. In the form designer, double-click the Read-Only button.

Visual Studio adds the event handler to the code window.

2. Add the following code to the procedure:

string strMsg;
System.Windows.Forms.CurrencyManager cm;
System.Data.DataView dsrc;

cm = (System.Windows.Forms.CurrencyManager)
    this.BindingContext[this.dsMaster1, "Categories"];
dsrc = (System.Data.DataView) cm.List;

strMsg = "There are " + cm.Count.ToString() + " rows in ";
strMsg += dsrc.Table.TableName.ToString() + ".";
strMsg += "\nThere are " + cm.Bindings.Count.ToString();
strMsg += " controls bound to it.";
MessageBox.Show(strMsg);

The first three lines declare some local variables. The fourth line sets the variable cm to
the CurrencyManager for the Categories DataTable, while the next line assigns the
variable dsrc to the data source referenced by the List property.

Note that the value returned by List is explicitly cast to a DataView. (Remember that
although Categories is a DataTable, data binding always occurs to the default view.)

The remaining lines display the Count and Bindings.Count properties in a message box.

3. Press F5 to run the application.
4. Click the Read-Only button.

The application displays the CurrencyManager properties, showing two bound controls.

5. Dismiss the dialog box, and then click the Simple button.

The application adds the binding for the Description control.

6. Click the Read-Only button.

The application displays the CurrencyManager properties, showing three bound controls.

7. Close the application.

Use the Position Property

Visual Basic .NET

1. Open the region labeled ‘Navigation Buttons.’

2. Add the following code to the btnFirst_Click event handler:

Me.BindingContext(Me.dsMaster1, "Categories").Position = 0
UpdateDisplay()

This code sets the Position property of the CurrencyManager for the Categories
DataTable to the beginning (remember that Position is a zero-based index), and then
calls the UpdateDisplay function. UpdateDisplay, which is contained in the Utility
Functions region, simply displays 'Category x of y' in the text box at the bottom of the
form.

3. Add the following code to the btnPrevious_Click event handler:

With Me.BindingContext(Me.dsMaster1, "Categories")
    If .Position = 0 Then
        Beep()
    Else
        .Position -= 1
        UpdateDisplay()
    End If
End With

This code uses Microsoft Visual Basic's With...End With structure to simplify the
reference to the CurrencyManager. Note that it checks whether the Position property is
already at the beginning of the list before decrementing the value. The Position property
does not throw an exception if it is set outside the bounds of the list.

4. The remaining navigation code is already there, so press F5 to run the application.
5. Use the navigation buttons to move through the display.
6. Close the application.

Visual C# .NET

1. Open the region labeled ‘Navigation Buttons.’

2. Add the following code to the btnFirst_Click event handler:

this.BindingContext[this.dsMaster1, "Categories"].Position = 0;
UpdateDisplay();

This code sets the Position property of the CurrencyManager for the Categories

DataTable to the beginning (remember that Position is a zero-based

index), and then calls the UpdateDisplay function. UpdateDisplay, which is

contained in the Utility Functions region, simply displays ‘Category x of y’ in

the text box at the bottom of the form.

4. Add the following code to the btnPrevious_Click event handler:

System.Windows.Forms.BindingManagerBase bmb;
bmb = (System.Windows.Forms.BindingManagerBase)
    this.BindingContext[this.dsMaster1, "Categories"];

bmb.Position -= 1;
UpdateDisplay();

10. The remaining navigation code is already there, so press F5 to run the

application.

11. Use the navigation buttons to move through the display.

12. Close the application.

CurrencyManager Methods

The public methods exposed by the CurrencyManager object are shown in Table 10-3.

Table 10-3: CurrencyManager Methods

Method              Description
AddNew              Adds a new item to the underlying list
CancelCurrentEdit   Cancels the current edit operation
EndCurrentEdit      Commits the current edit operation
Refresh             Redisplays the contents of bound controls
RemoveAt(Index)     Removes the item at the position specified by Index in the underlying list
ResumeBinding       Resumes data binding and data validation after the SuspendBinding method has been called
SuspendBinding      Temporarily suspends data binding and data validation

The data editing methods AddNew and RemoveAt, which add and remove items from

the data source, along with the CancelCurrentEdit and EndCurrentEdit methods, are for

use only within complex-bound controls. Unless you are creating a custom version of a

complex-bound control, use the DataView’s or DataRowView’s equivalent methods.

Roadmap We’ll examine the SuspendBinding and ResumeBinding

methods in Chapter 11.

The SuspendBinding and ResumeBinding methods allow binding (and hence data

validation) to be temporarily suspended. As we’ll see in Chapter 11, these methods are

typically used when data validation requires that values be entered into multiple fields

before they are validated.

The Refresh method is used only with data sources that don’t support change

notification, such as collections and arrays.
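As a quick illustration of these methods, the calls look like the following minimal sketch in code. It is not part of the exercises; it simply assumes the dsMaster1 DataSet and Categories table used throughout this chapter.

[C#]
// Obtain the CurrencyManager for the Categories DataTable.
System.Windows.Forms.CurrencyManager cm =
    (System.Windows.Forms.CurrencyManager)
    this.BindingContext[this.dsMaster1, "Categories"];

// Temporarily turn off binding (and validation) while several
// related fields are filled in, and then turn it back on.
cm.SuspendBinding();
// ... set the values of multiple bound controls here ...
cm.ResumeBinding();

// Refresh is only needed for data sources, such as arrays and
// collections, that don't provide change notification.
cm.Refresh();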

CurrencyManager Events

The events exposed by the CurrencyManager are shown in Table 10-4.

Table 10-4: CurrencyManager Events

Event            Description
CurrentChanged   Occurs when the bound value changes
ItemChanged      Occurs when the current item has changed
PositionChanged  Occurs when the Position property has changed

The CurrentChanged and PositionChanged events both occur whenever the current row in the CurrencyManager's list changes, and both receive the standard System.EventArgs. The ItemChanged event, by contrast, receives an argument of the type ItemChangedEventArgs, which includes an Index property.

The ItemChanged event occurs when the underlying data is changed. Under most circumstances, when working with ADO.NET objects, you will use the DataTable's column and row change events because they provide greater flexibility, but there is nothing to prevent you from responding to the CurrencyManager's ItemChanged event if it is more convenient.

Respond to an ItemChanged Event

Visual Basic .NET

1. Add the following event handler to the code editor:

Private Sub Position_Changed(ByVal sender As System.Object, _
        ByVal e As System.EventArgs)
    Dim strMsg As String

    strMsg = "Row " & (Me.BindingContext(Me.dsMaster1, _
        "Categories").Position + 1).ToString
    MessageBox.Show(strMsg)
End Sub

The code simply displays the current row number in a message box.

9. Expand the Region labeled Windows Form Designer generated code,

and add the following code to the end of the New sub to connect the

event handler to the PositionChanged event:

AddHandler Me.BindingContext(dsMaster1, "Categories").PositionChanged, _
    AddressOf Me.Position_Changed

10. Press F5 to run the application, and then click the Next button (‘>’).

The application displays a message box showing the new row number.


11. Close the application.

Visual C# .NET

1. Add the following event handler to the code editor:

private void Position_Changed(object sender, System.EventArgs e)
{
    string strMsg;

    strMsg = "Row " + (this.BindingContext[this.dsMaster1,
        "Categories"].Position + 1).ToString();
    MessageBox.Show(strMsg);
}

The code simply displays the current row number in a message box.

9. Add the code to bind the event handler to the bottom of the

frmBindings() sub:

this.BindingContext[this.dsMaster1, "Categories"].PositionChanged +=
    new EventHandler(this.Position_Changed);

11. Press F5 to run the application, and then click the Next button (‘>’).

The application displays a message box showing the new row number.


12. Close the application.

Using the Binding Object

The Binding object represents the link between a simple-bound control property and the

CurrencyManager. The control’s DataBindings collection contains a Binding object for

each bound property.

Binding Object Properties

The properties exposed by the Binding object are shown in Table 10-5. All of the

properties are read-only.

Table 10-5: Binding Properties

Property            Description
BindingManagerBase  The BindingManagerBase that manages this Binding object
BindingMemberInfo   Returns information regarding this Binding object based on the DataMember specified in its constructor
Control             The control being bound
DataSource          The data source for the binding
IsBinding           Indicates whether the binding is active
PropertyName        The control's data-bound property


The BindingManagerBase, Control, and PropertyName properties define the data

binding. The BindingManagerBase property returns the CurrencyManager or

PropertyManager that manages the Binding object, while the Control and PropertyName

properties specify the control property containing the data.

The IsBinding property indicates whether the binding is active. It returns True unless SuspendBinding has been invoked.

The DataSource property returns the data source to which the control property is bound

as an object. Note that it returns the data source only, not the navigation path. To

retrieve the Binding object’s navigation path, you must use the BindingMemberInfo

property, a complex object whose fields are shown in Table 10-6.

Table 10-6: BindingMemberInfo Properties

Field          Description
BindingField   The data source property specified by the Binding object's navigation path
BindingMember  The complete navigation path of the Binding object
BindingPath    The navigation path up to, but not including, the data source property specified by the Binding object's navigation path

The BindingMember field of the BindingMemberInfo property represents the entire

navigation path of the binding, while the BindingField field represents only the final field.

The BindingPath field represents everything up to the BindingField. For example, given

the navigation path ‘Categories.CategoryProducts.ProductID,’ the BindingField is

‘ProductID,’ while the BindingPath is ‘Categories.CategoryProducts.’ Note that all three

properties return a string value, not an object reference.
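Before the exercise below, it may help to see several of these properties reported together. The following sketch is not part of the exercise; it loops through a control's DataBindings collection, assuming the tbCategoryName text box from the sample form, and writes to the Output window.

[C#]
foreach (System.Windows.Forms.Binding b in this.tbCategoryName.DataBindings)
{
    // Control and PropertyName identify the bound control property;
    // IsBinding reports whether the binding is currently active.
    System.Diagnostics.Debug.WriteLine(
        b.Control.Name + "." + b.PropertyName +
        " (active: " + b.IsBinding.ToString() + ")");

    // DataSource returns only the data source object; the navigation
    // path strings come from BindingMemberInfo.
    System.Diagnostics.Debug.WriteLine(
        "Path: " + b.BindingMemberInfo.BindingMember);
}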

Use the BindingMemberInfo Property

Visual Basic .NET

1. In the code editor, select btnBindings in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the event handler template to the code.

2. Add the following code to the method:

Dim strMsg As String
Dim bmo As System.Windows.Forms.BindingMemberInfo

bmo = Me.tbCategoryID.DataBindings(0).BindingMemberInfo
strMsg = "BindingMember: " + bmo.BindingMember.ToString
strMsg += vbCrLf & "BindingPath: " + bmo.BindingPath.ToString
strMsg += vbCrLf & "BindingField: " + bmo.BindingField.ToString
MessageBox.Show(strMsg)

The first two lines declare local variables to be used in the method. The third

line assigns the BindingMemberInfo property of the first (and only) Binding

object in the tbCategoryID DataBindings collection to the bmo variable. The

remaining lines display the BindingMember, BindingPath, and BindingField

properties in a message box.

12. Press F5 to run the application, and then click the

BindingMemberInfo button.

The application displays the BindingMemberInfo fields in a dialog box.

13. Close the application.

Visual C# .NET

1. In the form designer, double-click the Bindings button.

Visual Studio adds the event handler to the code window.

2. Add the following code to the procedure:

string strMsg;
System.Windows.Forms.BindingMemberInfo bmo;

bmo = this.tbCategoryID.DataBindings[0].BindingMemberInfo;

strMsg = "BindingMember: " + bmo.BindingMember.ToString();
strMsg += "\nBindingPath: " + bmo.BindingPath.ToString();
strMsg += "\nBindingField: " + bmo.BindingField.ToString();
MessageBox.Show(strMsg);

11. Press F5 to run the application, and then click the Bindings button.


The application displays the BindingMemberInfo fields in a dialog box.

12. Close the application.

Binding Object Events

The events exposed by the Binding object are shown in Table 10-7. The Format and

Parse events are used to control the way data is displayed to the user. We’ll examine

both of these events in detail in Chapter 11.

Table 10-7: Binding Events

Event   Description
Format  Occurs when data is pushed from the data source to the control or pulled from the control to the data source
Parse   Occurs when data is pulled from the control to the data source

Roadmap We’ll examine the Format and Parse events in Chapter 11.

Chapter 10 Quick Reference

To: Simple-bind control properties at run time
Do this: Create a new Binding object, and add it to the control's DataBindings collection:
    newBinding = New Binding(<propertyString>, <dataSource>, <navigationPath>)
    myControl.DataBindings.Add(newBinding)

To: Complex-bind control properties at run time
Do this: Set the DataSource and DisplayMember properties:
    myControl.DataSource = myDataSource
    myControl.DisplayMember = "field"

To: Use CurrencyManager properties
Do this: Obtain a reference to the CurrencyManager by specifying the data source and navigation path, and then reference its properties in the usual way:
    myCM = Me.BindingContext(<dataSource>, <path>)
    MessageBox.Show(myCM.Count.ToString())


Chapter 11: Using ADO.NET in Windows Forms

Overview

In this chapter, you’ll learn how to:

§ Format data using the Format and Parse events

§ Use specialized controls to simplify data entry

§ Use data relations to display related data

§ Find rows based on a DataView's Sort column

§ Find rows based on other criteria

§ Work with data change events

§ Work with validation events

§ Use the ErrorProvider component

In the previous chapter, we examined the objects that support Microsoft ADO.NET data

binding. In this chapter, we’ll explore using ADO.NET and Windows Forms to perform

some common tasks.

Formatting Data

The Binding object exposes two events, Format and Parse, which support formatting

data for an application. The Format event occurs whenever data is pushed from the data

source to the control, and when it is pulled from the control back to the data source, as

shown in the figure below.

The Format event is used to translate the data from its native format to the format you

want to display to the user, while the Parse event is used to translate it back to its

original format.

Both events receive a ConvertEventArgs argument, which has the properties shown in

Table 11-1. The Value property contains the actual data. When the event is triggered,

this property will contain the original data in its original format. To change the formatting,

you set this value to the new data or format within the event handler. The DesiredType

property is used when you are changing the data type of the value.

Table 11-1: ConvertEventArgs Properties

Property     Description
DesiredType  The data type of the desired value
Value        The data value

Using the Format Event

Because the Format event occurs both when data is being pushed from the data source

and when it is pulled from the control, you must be sure you know which action is taking

place before performing any action. If you change the data type of the value, you can

use the DesiredType property to perform this check.

However, if the data type remains the same, you must use a method that is external to

the event to determine which way the data is being moved. Setting the Tag property of


the control is an easy way to manage this. If you’re using the Tag property for another

purpose, you can use a form-level variable or determine the direction from the value

itself.
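Although the following exercises use the Tag property, a sketch of the DesiredType approach looks like this. The handler is hypothetical and assumes a binding between a string control property and a decimal UnitPrice column; it is not part of the exercise below.

[C#]
private void FormatPrice(object sender, ConvertEventArgs e)
{
    // When data is pushed to the control, DesiredType is the control
    // property's type (string); when it is pulled back to the data
    // source, DesiredType is the column's type (decimal).
    if (e.DesiredType == typeof(string))
    {
        e.Value = ((decimal) e.Value).ToString("c");
    }
}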

Change the Format of Data Using the Format Event

Visual Basic .NET

1. In Microsoft Visual Studio .NET, open the UsingWindows project from

the Start page or by using the File menu.

2. Double-click the Master.vb form.

Visual Studio displays the form in the form designer.

3. Press F7 to display the code editor.

4. Add the following event handler to the code:

Private Sub FormatName(ByVal sender As Object, ByVal e As _
        ConvertEventArgs)
    If Me.tbCategoryName.Tag <> "PARSE" Then
        e.Value = CType(e.Value, String).ToUpper
    End If
    Me.tbCategoryName.Tag = "FORMAT"
    MessageBox.Show(e.Value, "Format")
End Sub

This code first checks the tbCategoryName text box’s Tag property to see if

the value is “PARSE.” If it isn’t “PARSE,” it translates the Value property of e

to uppercase. It then sets the Tag property to “FORMAT” and displays a

message box showing the Value property.

12. Expand the Region labeled Windows Form Designer generated

code.

13. In the New sub, after the call to UpdateDisplay(), add the code to

call the procedure:

AddHandler Me.tbCategoryName.DataBindings(0).Format, _
    AddressOf Me.FormatName

This line adds the handler to the first (and only) Binding object in the

tbCategoryName text box’s DataBindings collection.

14. Press F5 to run the application. The message box is displayed twice

before the application’s form is displayed, once when the control is

bound and a second time when the data is first pushed to the

control.

15. Close both message boxes.


16. Click the Next button (“>”).

The application displays the formatted CategoryName for the next row.

17. Close the message box.

18. Close the application.

Visual C# .NET

1. In Microsoft Visual Studio .NET, open the UsingWindows project from

the Start page or by using the File menu.

2. Double-click the Master.cs form.


Visual Studio displays the form in the form designer.

3. Press F7 to display the code editor.

4. Add the following event handler to the bottom of the class definition:

private void FormatName(object sender, ConvertEventArgs e)
{
    string eStr = (string) e.Value;

    if ((string) this.tbCategoryName.Tag != "PARSE")
        e.Value = eStr.ToUpper();
    this.tbCategoryName.Tag = "FORMAT";
    MessageBox.Show((string)e.Value, "Format");
}

This code first checks the tbCategoryName text box’s Tag property to see if

the value is “PARSE.” If it isn’t “PARSE,” it translates the Value property of e

to uppercase. It then sets the Tag property to “FORMAT” and displays a

message box showing the Value property.

13. In the frmMaster sub, after the call to UpdateDisplay(), add the code

to call the procedure:

this.tbCategoryName.DataBindings[0].Format +=
    new ConvertEventHandler(this.FormatName);

This line adds the handler to the first (and only) Binding object in the

tbCategoryName text box’s DataBindings collection.

15. Press F5 to run the application. The message box is displayed twice

before the application’s form is displayed, once when the control is

bound and a second time when the data is first pushed to the

control.

16. Close both message boxes.


17. Click the Next button (“>”).

The application displays the formatted CategoryName for the next row.

18. Close the message box.

19. Close the application.


Using the Parse Event

As we have seen, the Parse event occurs when data is being pulled from a control back

to the data source, and it is typically used to “un-format” data that has been customized

for display.

Because Parse is called only once, this “un-formatting” operation should always happen,

unlike the Format operation, which should take place only when data is being pushed to

the control. However, you do need to be careful to set up any variables or properties

required to make sure that the Format event, which will always be called after Parse,

doesn’t reformat data before it is submitted to the data source.

Restore the Original Format of Data Using the Parse Event

Visual Basic .NET

1. Add the following procedure to the bottom of the code editor:

Private Sub ParseName(ByVal sender As Object, ByVal e As _
        ConvertEventArgs)
    Me.tbCategoryName.Tag = "PARSE"
    e.Value = CType(e.Value, String).ToLower
    MessageBox.Show(e.Value, "Parse")
End Sub

Note that because the Parse event occurs only when data is being pulled

from the control, there is no need to check the Tag property.

6. Add the following handler to the New sub, after the handler from the

previous exercise:

AddHandler Me.tbCategoryName.DataBindings(0).Parse, _
    AddressOf Me.ParseName

7. Press F5 to run the application, and close both of the preliminary

Format event message boxes.

8. Add a couple of spaces after “BEVERAGES,” and then click the Next

button (“>”).

The application displays the Parse message box.

9. Close the Parse message box, and then close the application.

Important The code for this book was checked with a pre-release version of

Visual Studio .NET (build 4997). A bug in that build interfered

with the click event firing if the project had both a Parse and

Format event handler and if either of these displayed a

MessageBox.

We fully expect that this will be fixed before Visual Studio .NET is

released; however, if the project re-displays the Beverages

category, please refer to the Microsoft Press Web site for further


information.

10. Comment out the two AddHandler statements in the New sub.

(Otherwise, the message boxes will get irritating as we work through

the remaining exercises.)

Visual C# .NET

1. Add the following procedure to the bottom of the class file:

private void ParseName(object sender, ConvertEventArgs e)
{
    string eStr = (string) e.Value;

    this.tbCategoryName.Tag = "PARSE";
    e.Value = eStr.ToLower();
    MessageBox.Show((string)e.Value, "Parse");
}

Note that because the Parse event occurs only when data is being pulled

from the control, there is no need to check the Tag property.

9. Add the following handler to the New sub, after the handler from the

previous exercise:

this.tbCategoryName.DataBindings[0].Parse += new

ConvertEventHandler(this.ParseName);

10. Press F5 to run the application, and close both of the preliminary

Format event message boxes.

11. Add a couple of spaces after “BEVERAGES,” and then click the

Next button (“>”).

The application displays the Parse message box.

12. Close the Parse message box, and then close the application.

Important When a message box is displayed, it stops code in the

application from executing until the user clicks one of the

message box buttons. Stopping the execution of code with a

message box can cause events to fire incorrectly. For example,

the Parse and Format event handlers for this sample include a

call to MessageBox.Show. When you run the sample, add a

couple of spaces in the Name text box, and click the Next button,

you might notice that the Click event for the Next button does not

fire. To ensure that the events for this sample fire correctly, you

can comment out the calls to MessageBox.Show or replace the

calls to MessageBox.Show with Console.WriteLine or

Debug.WriteLine. Console.WriteLine or Debug.WriteLine won’t

stop code from executing and will output specified text to the

Visual Studio .NET Output window so that you can see how the

events are firing.


13. Comment out the two statements that add the

ConvertEventHandlers in the frmMaster sub. (Otherwise, the

message boxes will get irritating as we work through the remaining

exercises.)

Displaying Data in Windows Controls

The Microsoft .NET Framework supports a wide variety of controls for use on Windows

forms, and as we’ve seen, any form property can be bound, directly or indirectly, to an

ADO.NET data source.

The details of each control are unfortunately outside the scope of this book, but in this

section, we’ll examine some specific techniques for data-binding.

Simplifying Data Entry

One of the reasons that so many controls are provided, of course, is to make data entry

simpler and more accurate. TextBox controls are always an easy choice, but the time

spent choosing and implementing controls that more closely match the way the user

thinks about the data will be richly rewarded.

To take a fairly simple example, it is certainly possible to use a ComboBox containing

True and False or Yes and No to represent Boolean values, but in most circumstances,

it’s far more effective to use the CheckBox control provided by the .NET Framework.

The Checked property of the CheckBox control, which determines whether the box is

selected, can be simple-bound either at design time by using the Properties window or at

run time in code by using standard techniques.
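For example, the design-time binding created in the next exercise could instead be added at run time with a single call. This is a sketch only; it assumes the Discontinued check box is named chkDiscontinued, which may not match the name used in the sample project.

[C#]
// Simple-bind the Checked property to the Discontinued column.
this.chkDiscontinued.DataBindings.Add("Checked",
    this.dsMaster1, "ProductsExtended.Discontinued");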

Use the CheckBox Control for Boolean Values

1. In the Solution Explorer, double-click Controls.vb (or Controls.cs, if

you’re using C#).

Visual Studio .NET opens the form in the form designer.

2. Select the Discontinued CheckBox control.

3. In the Properties window, expand the Data Bindings section (if

necessary).

4. Select the Checked property. In the drop-down list, expand dsMaster1, and then expand ProductsExtended and select Discontinued.

5. Press F5 to run the application.

6. Click the Controls button.

The application displays the Controls form.


7. Move through the DataTable by pressing the Next button (“>”),

confirming that only discontinued products have the field checked.

8. Close the Controls window.

9. Close the application.

In order to simplify the database schema, many tables use artificial keys—an identity

value of some type rather than a key derived from the entity’s attributes. These artificial

keys are convenient, but they don’t typically have any meaning for users. When working

with the primary table, the artificial key can often be hidden from users or simply ignored

by them. With foreign keys, however, this is rarely the case.

Fortunately, the .NET Framework controls that inherit from the ListControl class,

including both ListBox controls and ComboBox controls, make it easy to bind the control

to one column while displaying another, even a column in a different table.

The technique is reasonably straightforward. First set the DataSource and

DisplayMember properties of the list control to the user-friendly table and column. Under

most circumstances, this won’t be the table that the form is updating. Then, to set the

data binding, set the ValueMember property to the key field in the form being updated,

and finally create a Binding object linking the SelectedValue property to the field to be

updated.

For example, given the database schema shown in the figure below, if you were creating

a form to update the Relatives table, you would typically use a ComboBox control to

represent the Relationship type rather than forcing the user to remember that Type 1

means Sister, Type 2 means Father, and so on.

To implement this in the .NET Framework, you would set the ComboBox control’s

DisplayMember property to RelationshipTypes.Relationship, and then set its

ValueMember property to RelationshipTypes.RelationshipID. With these settings, the

ComboBox control will display Sister but return a SelectedValue of 1.

Once the properties have been set, either in the Properties window or in code, you must

then add a Binding object to the ComboBox control to link the SelectedValue to the

Relationship field in the Relatives table. Because SelectedValue isn't available for data binding at design time, you must do this in code:

[VB]
Me.RelationshipType.DataBindings.Add("SelectedValue", myDS, _
    "Relatives.Relationship")

[C#]
this.RelationshipType.DataBindings.Add("SelectedValue", myDS,
    "Relatives.Relationship");

Display Full Names in a ComboBox Control

Visual Basic .NET

1. In the form designer, select the Category combo box (cbCategory) on

the Controls form.

2. In the Properties window, select the DataSource property.

3. In the drop-down list, select dsMaster1.

4. In the Properties window, select the DisplayMember property.

5. In the drop-down list, expand Categories, and then select

CategoryName.

6. In the Properties window, select the ValueMember property.

7. In the drop-down list, expand Categories, and then select CategoryID.

8. Press F7 to open the code editor window.

9. Expand the Region labeled Windows Form Designer generated code.

10. Add the following code after the call to UpdateDisplay in the New

sub:

Me.cbCategory.DataBindings.Add("SelectedValue", Me.dsMaster1, _
    "ProductsExtended.CategoryID")

This code binds the ValueMember property of the control to the CategoryID

column of the ProductsExtended DataTable.

11. Press F5 to run the application.

12. Click the Controls button.

The application displays the Controls form and populates the Category combo

box.

13. Close the Controls form and the application.

Visual C# .NET

1. In the form designer, select the Category combo box (cbCategory) on

the Controls form.

2. In the Properties window, select the DataSource property.

3. In the drop-down list, select dsMaster1.

4. In the Properties window, select the DisplayMember property.

5. In the drop-down list, expand Categories, and then select

CategoryName.

6. In the Properties window, select the ValueMember property.

7. In the drop-down list, expand Categories, and then select CategoryID.

8. Press F7 to open the code editor window.

9. Add the following code after the call to UpdateDisplay in the

frmControls sub:

this.cbCategory.DataBindings.Add("SelectedValue",
    this.dsMaster1, "ProductsExtended.CategoryID");

This code binds the ValueMember property of the control to the CategoryID

column of the ProductsExtended DataTable.

11. Press F5 to run the application.

12. Click the Controls button.

The application displays the Controls form and populates the Category combo

box.

13. Close the Controls form and the application.

Numeric data is often presented to the user in a text box. Unfortunately, the .NET Framework TextBox control doesn't provide any way to constrain data entry to numeric characters. One option is to use the NumericUpDown control instead. The user can type directly into this control (numeric characters only) or use the up and down arrows to set the value.

The NumericUpDown control can be simple-bound at design time or at run time by using

the standard techniques, and it allows a fine degree of control over the format of the

numbers—you can specify the number of decimal places, the increment by which the

value changes when the user clicks the up and down arrows, and the minimum and

maximum values.
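The run-time equivalent of the design-time binding in the next exercise might look like the following sketch; the property values shown are illustrative only, and the exercise itself uses the udPrice control and the Properties window.

[C#]
// Configure the numeric behavior of the control.
this.udPrice.DecimalPlaces = 2;
this.udPrice.Increment = 0.25m;
this.udPrice.Minimum = 0m;
this.udPrice.Maximum = 1000m;

// Simple-bind the Value property to the UnitPrice column.
this.udPrice.DataBindings.Add("Value",
    this.dsMaster1, "ProductsExtended.UnitPrice");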

Use NumericUpDown Controls

1. In the form designer, select the UnitPrice NumericUpDown control

(udPrice).

2. In the Properties window, expand the Data Bindings section, if

necessary, and then select the Value property.

3. In the drop-down list box, expand dsMaster1, expand ProductsExtended, and then select UnitPrice.

4. Press F5 to run the application.

5. Click the Controls button.

The application displays the Controls form and populates the UnitPrice

NumericUpDown control.


6. Close the Controls form and the application.

7. Close the Controls form designer and code editor.

Working with DataRelations

The data model implemented by ADO.NET, with its ability to specify multiple DataTables

and the relationships between them, makes it easy to represent relationships of arbitrary

depth on a single form.

By binding the control to a DataRelation rather than to a DataTable, the .NET Framework

will automatically handle synchronization of controls on a form.

Create a Nested ListBox

Visual Basic .NET

1. Select the code editor for Master.vb.

2. In the New sub, add the following data bindings below the two

commented AddHandler calls:

Me.lbOrders.DataSource = Me.dsMaster1
Me.lbOrders.DisplayMember = _
    "Categories.CategoriesProducts.ProductOrders.OrderDate"

4. Press F5 to run the application.

Visual Studio displays the application’s main form and populates the Orders

list box.

5. Select different products in the Products list box.

The application displays the date on which each Product was ordered.


6. Close the application.

Visual C# .NET

1. Select the code editor for Master.cs.

2. In the frmMaster sub, add the following data bindings below the two

commented ConvertEventHandlers:

this.lbOrders.DataSource = this.dsMaster1;
this.lbOrders.DisplayMember =
    "Categories.CategoriesProducts.ProductOrders.OrderDate";

6. Press F5 to run the application.

Visual Studio displays the application’s main form and populates the Orders

list box.

7. Select different products in the Products list box.

The application displays the date on which each Product was ordered.


8. Close the application.

In the previous exercise, we used two ListBox controls to represent a hierarchical relationship in the data. The DataGrid control also supports the display of hierarchical data, and it has the advantage of allowing multiple columns from the data source to be displayed simultaneously. Unfortunately, because it can display only a single table at a time, the DataGrid control forces the user to navigate the hierarchy manually, and some users find this confusing.

Note The DataGrid is a complex control, and details of its uses are

outside the scope of this book. The following exercise walks you

through the process of displaying two related DataTables in the

DataGrid control. For more information on using this control, refer

to the Visual Studio and .NET Framework documentation.

Displaying Hierarchical Data Using the DataGrid

1. In the Solution Explorer, double-click DataGrid.vb (or DataGrid.cs, if

you are using C#).

Visual Studio opens the form in the form designer.

2. Select the dgProductOrders DataGrid.

3. In the Properties window, select the DataSource property, expand the

drop-down list, and then select dsMaster1.

4. Select the DataMember property, expand the drop-down list, expand

Categories, and then select CategoriesProducts.

5. Click the Ellipsis button after the TableStyles property.

Visual Studio displays the DataGridTableStyle Collection Editor.


6. Click the Add button.

Visual Studio adds a DataGridTableStyle.

7. Change the Name property of the DataGridTableStyle to tsProducts.

8. Select the MappingName property, expand the drop-down list, expand

Categories, and then select CategoriesProducts.


9. Click the Add button again.

Visual Studio adds a second DataGridTableStyle.

10. Change the Name property to tsOrders and the MappingName

property to Categories.CategoriesProducts.ProductOrders.

11. Click OK to close the editor.

12. Press F5 to run the application, and then click the DataGrid button.

The application displays the DataGrid form.


13. Expand one of the rows in the DataGrid.

The application displays the name of the related table.

14. Select ProductOrders.

The application displays the selected orders.


15. Click the Back button.

The application returns to the Products display.

16. Close the window, and close the application.

17. Close the DataGrid.vb (or DataGrid.cs, if you’re using C#) form.

The DataGrid control is fairly easy to bind to multiple DataTables, but because it can

display only a single table at any time, it can be confusing for the user. The TreeView

control can also represent hierarchical data, and it does so in a way that often matches

the user’s expectations more closely.


Unfortunately, the TreeView control can’t be directly bound to a data source. Instead,

you must manually add the data by using the Add method of its Nodes collections. The

following exercise walks you through the process.

Displaying Hierarchical Data Using the TreeView

Visual Basic .NET

1. In the Solution Explorer, double-click TreeView.vb.

Visual Studio displays the form in the form designer.

2. Press F7 to display the code editor.

3. Add the following procedure to the bottom of the code editor:

Private Sub AddNodes(ByVal sender As Object, ByVal e As EventArgs)
    Dim dvCategory As System.Data.DataRowView
    Dim arrProducts() As System.Data.DataRow
    Dim currProduct As dsMaster.ProductsRow
    Dim arrOrders() As System.Data.DataRow
    Dim currOrder As dsMaster.OrderDatesRow
    Dim root As System.Windows.Forms.TreeNode

    With Me.tvProductOrders
        .BeginUpdate()
        .Nodes.Clear()

        dvCategory = _
            Me.BindingContext(Me.dsMaster1, "Categories").Current
        arrProducts = _
            dvCategory.Row.GetChildRows("CategoriesProducts")
        For Each currProduct In arrProducts
            root = .Nodes.Add(currProduct.ProductName)
            arrOrders = currProduct.GetChildRows("ProductOrders")

            For Each currOrder In arrOrders
                root.Nodes.Add(currOrder.OrderDate)
            Next
        Next currProduct

        .EndUpdate()
    End With
End Sub

29. Expand the Region labeled Windows Form Designer generated

code.

30. Add the following code below the call to UpdateDisplay in the New

sub:

AddHandler Me.BindingContext(Me.dsMaster1, _
    "Categories").PositionChanged, AddressOf _
    Me.AddNodes
AddNodes(Me, New System.EventArgs())

The first line links the AddNodes procedure to the PositionChanged event so

that it will be called each time the Category changes. The second line calls

the procedure directly to set up the initial display.

34. Press F5 to run the application, and then click the TreeView button.

Visual Studio displays the TreeView form.

35. Verify that the TreeView is updated correctly by clicking the Next

button (“>”) and expanding nodes.

36. Close the TreeView form and the application.

37. Close the TreeView form designer and code editor.

Visual C# .NET

1. In the Solution Explorer, double-click TreeView.cs.

Visual Studio displays the form in the form designer.


2. Press F7 to display the code editor.

3. Add the following procedure to the bottom of the code editor:

private void AddNodes(object sender, System.EventArgs e)
{
    System.Data.DataRowView dvCategory;
    System.Data.DataRow[] arrProducts;
    System.Data.DataRow[] arrOrders;
    System.Windows.Forms.TreeNode root;
    System.Windows.Forms.TreeView tv;

    tv = this.tvProductOrders;

    tv.BeginUpdate();
    tv.Nodes.Clear();

    dvCategory = (System.Data.DataRowView)
        this.BindingContext[this.dsMaster1, "Categories"].Current;
    arrProducts = dvCategory.Row.GetChildRows("CategoriesProducts");
    foreach (dsMaster.ProductsRow currProduct in arrProducts)
    {
        root = tv.Nodes.Add(currProduct.ProductName);
        arrOrders = currProduct.GetChildRows("ProductOrders");
        foreach (dsMaster.OrderDatesRow currOrder in arrOrders)
        {
            root.Nodes.Add(currOrder.OrderDate.ToString());
        }
    }
    tv.EndUpdate();
}

31. Add the following code below the call to UpdateDisplay in the

frmTreeView sub:

this.BindingContext[this.dsMaster1, "Categories"].PositionChanged +=
    new EventHandler(this.AddNodes);
System.EventArgs ea;
ea = new System.EventArgs();
AddNodes(this, ea);

The first line links the AddNodes procedure to the PositionChanged event so

that it will be called each time the Category changes. The remaining lines call

the procedure directly to set up the initial display.

36. Press F5 to run the application, and then click the TreeView button.

Visual Studio displays the TreeView form.

37. Verify that the TreeView is updated correctly by clicking the Next

button (“>”) and expanding nodes.

38. Close the TreeView form and the application.

39. Close the TreeView form designer and code editor.

Finding Data

Finding a specific row in a DataTable is a common application task. Unfortunately, the BindingContext object, which controls the data displayed by the controls on a form, doesn't directly support a Find method. Instead, you must use either a DataView object to find a row based on the current Sort key or a DataTable object to find a row based on more complex criteria.

Finding Sorted Rows

Using the DataView's Find method is straightforward, but it can be used only to find a row based on the column(s) currently specified in the Sort property. If your controls are bound to a DataView, you can reference that object directly. If you bound the controls to a DataTable, you can use its DefaultView property to obtain a reference without creating a new object.

Once you have a reference to a DataView, you can use the Find method, which returns

the index of the row matching the specified criteria or -1 if no matching row is found. The

index of the row in the DataView will correspond directly to the same row’s index in the

BindingContext object, so it’s a simple matter of setting the BindingContext.Position

property to the value that is returned.
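Condensed into a few lines, the technique looks like the following sketch, assuming (as in the sample project) that the Categories DefaultView is sorted on CategoryID:

[C#]
System.Data.DataView dv = this.dsMaster1.Categories.DefaultView;

// Find returns the index of the matching row, or -1 if there is none.
int idx = dv.Find(3);
if (idx != -1)
{
    // The DataView index matches the CurrencyManager index, so the
    // row can be displayed by repositioning the BindingContext.
    this.BindingContext[this.dsMaster1, "Categories"].Position = idx;
}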


Find a Row Based on Its Sort Column

Visual Basic .NET

1. In the code editor for Master.vb, select btnFindCategory in the Control

Name combo box, and then select Click in the Event combo box.

Visual Studio adds an event handler to the code editor.

2. Add the following code to the event handler:

Dim fcForm As New frmFindCategory()
Dim dv As System.Data.DataView = Me.dsMaster1.Categories.DefaultView
Dim id As Integer
Dim idx As Integer

If fcForm.ShowDialog() = DialogResult.OK Then
    If fcForm.GetID = 0 Then
    Else
        id = fcForm.GetID
        idx = dv.Find(id)
        If idx = -1 Then
            MessageBox.Show("Category " + id.ToString + _
                " not found", "Error")
        Else
            Me.BindingContext(Me.dsMaster1, _
                "Categories").Position = idx
        End If
    End If
End If
fcForm.Dispose()

After declaring some variables and displaying fcForm as a dialog box, the code

sets up an if … else statement to handle the two possible search criteria.

(We’ll complete the first section of the if statement in the following exercise.)

The variable id is set to the value of the GetID field on fcForm, and then the

code uses the Find method to locate the index of the row containing that field.

Find returns -1 if the row is not found, in which case the code displays an

error message. If the row is found, it is displayed in the Master form by setting

the BindingContext.Position property.

23. Press F5 to run the application, and click the Find Category button.


24. Type 3 in the ID field, and then click Find.

The application displays Category 3 on the Master form.

25. Close the application.

Visual C# .NET

1. In the form designer, double-click the btnFindCategory button on the

Master form.

Visual Studio adds an event handler to the code editor.

2. Add the following code to the event handler:

frmFindCategory fcForm = new frmFindCategory();
System.Data.DataView dv = this.dsMaster1.Categories.DefaultView;
int id;
int idx;

if (fcForm.ShowDialog() == DialogResult.OK)
    if (fcForm.GetID == 0)
    {
    }
    else
    {
        id = fcForm.GetID;
        idx = dv.Find(id);
        if (idx == -1)
            MessageBox.Show("Category " + id.ToString() + " not found",
                "Error");
        else
            this.BindingContext[this.dsMaster1,
                "Categories"].Position = idx;
    }
fcForm.Dispose();

After declaring some variables and displaying fcForm as a dialog box, the code

sets up an if … else statement to handle the two possible search criteria.

(We’ll complete the first section of the if statement in the following exercise.)

The variable id is set to the value of the GetID field on fcForm, and then the

code uses the Find method to locate the index of the row containing that field.

Find returns -1 if the row is not found, in which case the code displays an

error message. If the row is found, it is displayed in the Master form by setting

the BindingContext.Position property.

23. Press F5 to run the application, and click the Find Category button.

24. Type 3 in the ID field, and then click Find.

The application displays Category 3 on the Master form.

25. Close the application.

Finding Rows Based on Other Criteria

The DataView object’s Find method is easy to use but limited in scope. If you need to

find a row based on complex criteria, or on a single column other than the one on which

the data is sorted, you must use the DataTable’s Select method.

As we saw in Chapter 7, the Select method is easy to use, but positioning the

CurrencyManager to the correct row requires several steps. The process requires using


both the DataView and the DataTable object to perform the search, along with the

BindingContext object to display the results. In truth, the whole process is decidedly

awkward, but you’ll learn the steps by rote soon enough.

First you must execute the Select method with the required criteria against the

DataTable. Once the appropriate row is found, you obtain the Sort column value from the

array returned by the Select method and use that to perform a Find against the

DataView. Finally, you use the Position property of the BindingContext to display the

result.
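Condensed into a single sketch (using the Categories table and a CategoryName criterion, as in the exercise that follows, and again assuming the DefaultView is sorted on CategoryID), the three steps look like this:

[C#]
// 1. Search the DataTable with the Select method.
System.Data.DataRow[] rows =
    this.dsMaster1.Categories.Select("CategoryName = 'Condiments'");

if (rows.Length > 0)
{
    // 2. Take the Sort column value (CategoryID) from the matching
    //    row and locate the same row in the DataView.
    System.Data.DataView dv = this.dsMaster1.Categories.DefaultView;
    int idx = dv.Find(rows[0]["CategoryID"]);

    // 3. Reposition the CurrencyManager to display the row.
    if (idx != -1)
    {
        this.BindingContext[this.dsMaster1, "Categories"].Position = idx;
    }
}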

Find a Row Based on an Unsorted Column

Visual Basic .NET

1. Add the following code after the line If fcForm.GetID = 0 in the

btnFindCategory_Click procedure we began in the previous

exercise:

Dim name As String
Dim dt As System.Data.DataTable = Me.dsMaster1.Categories
Dim dr() As System.Data.DataRow

name = fcForm.GetName

Try
    dr = dt.Select("CategoryName = '" & name & "'")
    id = CType(dr(0), dsMaster.CategoriesRow).CategoryID
    idx = dv.Find(id)
    Me.BindingContext(Me.dsMaster1, "Categories").Position = idx
Catch
    MessageBox.Show("Category " + name + " not found", "Error")
End Try

This code uses the DataTable's Select method to find the specified category name. Select returns an array of rows, so the code uses the CType function to convert the first row of the array—dr(0)—to a CategoriesRow and sets id to the CategoryID. It then finds the CategoryID in the DataView and positions the Master form to the row by setting the BindingContext.Position property, just as in the previous exercise.

15. Press F5 to run the application, and then click the Find Category

button.

16. Type Condiments in the Name field, and then click Find.

The application displays the Condiments category in the Master form.


17. Close the application.

Visual C# .NET

1. Add the following code after the line If (fcForm.GetID == 0) in the

btnFindCategory_Click procedure we began in the previous

exercise:

string name;
System.Data.DataTable dt = this.dsMaster1.Categories;
dsMaster.CategoriesRow cr;
System.Data.DataRow[] dr;

name = fcForm.GetName;

try
{
    dr = dt.Select("CategoryName = '" + name + "'");
    cr = (dsMaster.CategoriesRow) dr[0];
    id = cr.CategoryID;
    idx = dv.Find(id);
    this.BindingContext[this.dsMaster1,
        "Categories"].Position = idx;
}
catch
{
    MessageBox.Show("Category " + name + " not found", "Error");
}

This code uses the DataTable's Select method to find the specified category name. Select returns an array of rows, so the code casts the first row of the array—dr[0]—to a CategoriesRow and sets id to the CategoryID. It then finds the CategoryID in the DataView and positions the Master form to the row by setting the BindingContext.Position property, just as in the previous exercise.

20. Press F5 to run the application, and then click the Find Category

button.


21. Type Condiments in the Name field, and then click Find.

The application displays the Condiments category in the Master form.

22. Close the application.

Validating Data in Windows Forms

The .NET Framework supports a number of techniques for validating data entry prior to

submitting it to a data source. First, as we’ve already seen, is the use of controls that

constrain the data entry to appropriate values.

After the data has been entered, the .NET Framework exposes a series of events at both

the control and data level to allow you to trap and manage problems.

Data Change Events

Data validation is most often implemented at the data source level. This tends to be

more efficient because the validation will occur regardless of which control or controls

are used to change the data.

As we saw in Chapter 7, the DataTable object exposes six events that can be used for

data validation. In order of occurrence, they are:

§ ColumnChanging

§ ColumnChanged

§ RowChanging

§ RowChanged

§ RowDeleting

§ RowDeleted

Note If a row is being deleted, only the RowDeleting and RowDeleted

events occur.


If you are using a Typed DataSet, you can create separate event handlers for each

column in a DataTable. If you’re using an Untyped DataSet, a single event handler must

handle all the columns in a single DataRow. You can use the Column property of the DataColumnChangeEventArgs parameter, which is passed to the event handler, to determine which column is being changed.
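With an Untyped DataSet, for example, a single handler might branch on the column name. The following is a hypothetical sketch, not the handler built in the next exercise:

[C#]
private void ValidateCategoriesColumn(object sender,
    System.Data.DataColumnChangeEventArgs e)
{
    // One handler covers every column, so use e.Column to decide
    // which validation rule applies to the proposed value.
    if (e.Column.ColumnName == "CategoryName")
    {
        if (e.ProposedValue == System.DBNull.Value ||
            ((string) e.ProposedValue).Trim().Length == 0)
        {
            throw new System.ArgumentException(
                "CategoryName cannot be empty.");
        }
    }
}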

Respond to a ColumnChanging Event

Visual Basic .NET

1. Add the following procedure to the bottom of the code editor:

Private Sub Categories_ColumnChanging(ByVal sender As Object, _
        ByVal e As DataColumnChangeEventArgs)
    Dim str As String

    str = "Column: " & e.Column.ColumnName.ToString
    str += vbCrLf + "New Value: " & e.ProposedValue
    MessageBox.Show(str, "Column Changing")
End Sub

10. Add the following event handler to the end of the New sub:

AddHandler dsMaster1.Categories.ColumnChanging, AddressOf _
    Me.Categories_ColumnChanging

12. Press F5 to run the application.

13. Change the Category Name to Beverages New, and then click the

Next button (“>”).

The application displays the column name and new value in a message box.

14. Close the message box, and then close the application.

15. Comment out the ColumnChanging event handler in the New sub.

Visual C# .NET

1. Add the following procedure to the class file:

private void Categories_ColumnChanging(object sender,
    DataColumnChangeEventArgs e)
{
    string str;

    str = "Column: " + e.Column.ColumnName.ToString();
    str += "\nNew Value: " + e.ProposedValue;
    MessageBox.Show(str, "Column Changing");
}

10. Add the following event handler to the end of the frmMaster sub:


this.dsMaster1.Categories.ColumnChanging +=
    new DataColumnChangeEventHandler(this.Categories_ColumnChanging);

13. Press F5 to run the application.

14. Change the Category Name to Beverages New, and then click the

Next button (“>”).

The application displays the column name and new value in a message box.

15. Close the message box, and then close the application.

16. Comment out the event handler in the frmMaster sub.

The column change events are typically used for validating discrete values—for

example, if the value is within a specified range or has the correct format. For data

validation that relies on multiple column values, you can use the row change events.

Respond to a RowChanging Event

Visual Basic .NET

1. Add the following procedure to the code editor:

Private Sub Categories_RowChanging(ByVal sender As Object, _
        ByVal e As DataRowChangeEventArgs)
    Dim str As String

    str = "Action: " & e.Action.ToString
    str += vbCrLf + "ID: " & e.Row("CategoryID")
    MessageBox.Show(str, "Row Changing")
End Sub

10. Add the following code to the end of the New sub:

AddHandler dsMaster1.Categories.RowChanging, AddressOf _
    Me.Categories_RowChanging

12. Press F5 to run the application.

13. Change the Category Name to New, and then click Next button

(“>”). Close the Column Changing message.

The application displays the Action and Category ID.


14. Close the message, and then close the application.

15. Comment out the RowChanging event handlers in the New sub.

Visual C# .NET

1. Add the following procedure to the code editor:

private void Categories_RowChanging(object sender,
    DataRowChangeEventArgs e)
{
    string str;

    str = "Action: " + e.Action.ToString();
    str += "\nID: " + e.Row["CategoryID"];
    MessageBox.Show(str, "Row Changing");
}

10. Add the following code to the end of the frmMaster sub:

this.dsMaster1.Categories.RowChanging +=
    new DataRowChangeEventHandler(this.Categories_RowChanging);

13. Press F5 to run the application.

14. Change the Category Name to New, and then click Next button

(“>”). Close the Column Changing message.

The application displays the Action and Category ID.

15. Close the message, and then close the application.

16. Comment out the RowChanging event handler in the frmMaster sub.


Control Validation Events

In addition to the DataTable events, data validation can also be triggered by individual

controls. Every control supports the following events, in order:

§ Enter

§ GotFocus

§ Leave

§ Validating

§ Validated

§ LostFocus

In addition, the CurrencyManager object supports the ItemChanged event, which is

triggered before a new row becomes current.
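One way to watch this order is to attach handlers that simply write each event name to the Output window. The following sketch shows only two of the events, and the handler names are hypothetical:

[C#]
private void NameEntered(object sender, System.EventArgs e)
{
    System.Diagnostics.Debug.WriteLine("Enter");
}

private void NameValidating(object sender,
    System.ComponentModel.CancelEventArgs e)
{
    System.Diagnostics.Debug.WriteLine("Validating");
}

// Wiring, for example in the form's constructor:
// this.tbCategoryName.Enter += new System.EventHandler(this.NameEntered);
// this.tbCategoryName.Validating +=
//     new System.ComponentModel.CancelEventHandler(this.NameValidating);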

Respond to an ItemChanged Event

Visual Basic .NET

1. Add the following procedure to the code editor:

Private Sub Categories_ItemChanged(ByVal sender As Object, _
        ByVal e As ItemChangedEventArgs)
    Dim str As String

    str = "Index into CurrencyManager List: " & e.Index.ToString
    MessageBox.Show(str, "Item Changed")
End Sub

8. Add the following code to the end of the New sub:

AddHandler CType(Me.BindingContext(Me.dsMaster1, "Categories"), _
    CurrencyManager).ItemChanged, AddressOf Me.Categories_ItemChanged

10. Press F5 to run the application.

11. Delete the category description, and then click the Next button (“>”).

The application displays the index of the row that has been changed.

12. Close the message box, and then close the application.

13. Comment out the event handler in the New sub.

Visual C# .NET

1. Add the following procedure to the code editor:

private void Categories_ItemChanged(object sender,
    ItemChangedEventArgs e)
{
    string str;

    str = "Index into CurrencyManager List: " + e.Index.ToString();
    MessageBox.Show(str, "Item Changed");
}

9. Add the following lines to the end of the frmMaster sub:

CurrencyManager cm = (CurrencyManager)
    this.BindingContext[this.dsMaster1, "Categories"];
cm.ItemChanged +=
    new ItemChangedEventHandler(this.Categories_ItemChanged);

13. Press F5 to run the application.

14. Delete the category description, and then click the Next button (“>”).

The application displays the index of the row that has been changed.

15. Close the message box, and then close the application.

16. Comment out the event handler in the New sub.

For purposes of data validation, the Validating and Validated events roughly correspond

to the ColumnChanging and ColumnChanged events, but they have the advantage of

occurring as soon as the user leaves the control, rather than when the BindingContext

object is repositioned.

Respond to a Validating Event

Visual Basic .NET

1. In the code editor, select tbCategoryName in the Control Name

combo box, and then select Validating in the Method combo box.

Visual Studio adds the event handler template to the code editor.

2. Add the following code to the procedure:

If Me.tbCategoryName.Text = "Cancel" Then
    MessageBox.Show("Change the Name from 'Cancel'", "Validating")
    e.Cancel = True
End If

6. Press F5 to run the application.

7. Change the Category Name to Cancel, and then click the Next button

(“>”).

The application cancels the change and redisplays the original row.


8. Close the application.

Visual C# .NET

1. Add the following procedure to the class file:

private void Categories_Validating(object sender, CancelEventArgs e)
{
    if (this.tbCategoryName.Text == "Cancel")
    {
        MessageBox.Show("Change the Name from 'Cancel'",
            "Validating");
        e.Cancel = true;
    }
}

10. Add the following lines to the frmMaster sub:

this.tbCategoryName.Validating +=
    new CancelEventHandler(this.Categories_Validating);

12. Press F5 to run the application.

13. Change the Category Name to Cancel, and then click the Next

button (“>”).

The application cancels the change.

14. Close the application.


Using the ErrorProvider Component

In the previous exercises, we've used message boxes in response to data validation errors. This is a common technique, but it's not a very good one from a usability standpoint. Message boxes are disruptive, and after they are dismissed, the error information contained in them also disappears.

Fortunately, the .NET Framework provides a much better mechanism for displaying errors to the user: the ErrorProvider component. The ErrorProvider, which can be bound to either a specific control or a data source object, displays an error icon next to the appropriate control. If the user places the mouse pointer over the icon, a ToolTip will display the specified error message.

Use an ErrorProvider with a Form Control

Visual Basic .NET

1. In the code editor, select tbCategoryID in the Control Name combo

box, and then select Validating in the Method Name combo box.

Visual Studio adds the event handler template to the code editor.

2. Add the following code to the event handler:

If Me.tbCategoryID.Text = "Error" Then
    Me.epControl.SetError(Me.tbCategoryID, _
        "Please re-enter the CategoryID")
    e.Cancel = True
Else
    Me.epControl.SetError(Me.tbCategoryID, "")
End If

9. Press F5 to run the application.

10. Change the CategoryID to Error, and then click the Next button

(“>”).

The application displays a blinking error icon after the CategoryID control.

11. Place the mouse pointer over the icon.

The application displays the ToolTip.


12. Close the application.

Visual C# .NET

1. Add the following procedure to the class module:

private void Categories_Error(object sender, CancelEventArgs e)
{
    if (this.tbCategoryID.Text == "Error")
    {
        this.epControl.SetError(this.tbCategoryID,
            "Please re-enter the CategoryID");
        e.Cancel = true;
    }
    else
    {
        this.epControl.SetError(this.tbCategoryID, "");
    }
}

14. Add the following line to the end of the frmMaster sub:

15. this.tbCategoryID.Validating +=

new CancelEventHandler(this.Categories_Error);

16. Press F5 to run the application.

17. Change the CategoryID to Error, and then click the Next button

(“>”).

The application displays a blinking error icon after the CategoryID control.

18. Place the mouse pointer over the icon.

The application displays the ToolTip.


19. Close the application.

The previous exercise demonstrated the use of the ErrorProvider from within the

Validating event of a control. But the ErrorProvider component can also be bound to a

data source, and it can display errors for any column or row containing errors.

Binding an ErrorProvider to a data source object has the advantage of allowing multiple

errors to be displayed simultaneously—a significant improvement in system usability.

Use an ErrorProvider with a DataColumn

Visual Basic .NET

1. In the form designer, select the epDataSet ErrorProvider control.

2. In the Properties window, select the DataSource property, expand the

drop-down list, and then select dsMaster1.

3. Select the DataMember property, expand the drop-down list, and then

select Categories.

4. Double-click the btnDataSet button.

Visual Studio adds the event handler template to the code editor.

5. Add the following code to the event handler:

6. Me.dsMaster1.Categories.Rows(0).SetColumnError("Description", _
       "Error Created Here")

This code artificially creates an error condition for the Description column of
the first row in the Categories table.

7. Press F5 to run the application, and then click the DataSet Error

button.

Visual Studio displays an error icon after the Description text box.


8. Close the application.

Visual C# .NET

1. In the form designer, select the epDataSet ErrorProvider control.

2. In the Properties window, select the DataSource property, expand the

drop-down list, and then select dsMaster1.

3. Select the DataMember property, expand the drop-down list, and then

select Categories.

4. Double-click the btnDataSet button.

Visual Studio adds the event handler template to the code editor.

5. Add the following code to the event handler:

6. this.dsMaster1.Categories.Rows[0].SetColumnError("Description",
       "Error Created Here");

This code artificially creates an error condition for the Description column of
the first row in the Categories table.

7. Press F5 to run the application, and then click the DataSet Error

button.

Visual Studio displays an error icon after the Description text box.

8. Close the application.

Chapter 11 Quick Reference

To use the Format event: create the event handler, changing the Value property of the
ConvertEventArgs parameter, and then bind it to the control's Format event.

To use the Parse event: create the event handler, changing the Value property of the
ConvertEventArgs parameter, and then bind it to the control's Parse event.

To use the CheckBox control to display Boolean values in a DataTable: bind the value of
the control's Checked property.

To bind a ComboBox to a key value it doesn't display: set the control's DisplayMember
property to the column to be displayed, and set the ValueMember property to the key value.

To create a nested ListBox: set the ListBox's DisplayMember property to the entire
hierarchy, including the DataRelation:
myListBox.DisplayMember = "tblParent.drRelation.tblChild.Column"

To display hierarchical data using the DataGrid control: in the form designer, use the
DataGridTableStyle Collection editor (available from the TableStyles property in the
Properties Window) to add the related tables to the DataGrid.

To display hierarchical data using the TreeView control: use the DataRow's GetChildRows
method to manually add the nodes to the TreeView's Nodes array:
For Each mainRow In masterTable
    rootNode = myTreeView.Nodes.Add(mainRow.myColumn)
    childArray = mainRow.GetChildRows("myRelation")
    For Each childRow In childArray
        rootNode.Nodes.Add(childRow.myColumn)
    Next childRow
Next mainRow

To find rows based on the Sort column: use the DataView's Find method to return the
position of the row:
rowIndex = myDataView.Find(theKey)
myBindingContext.Position = rowIndex

To find rows based on an unsorted column: use the DataTable's Select method to return
the row, and then use the DataView's Find method to find its position:
drFound = myTable.Select(strCriteria)
rowSortKey = drFound(0).myColumn
rowIndex = myDataView.Find(rowSortKey)
myBindingContext.Position = rowIndex

To validate data at the DataTable level: respond to one of the DataTable change events:
ColumnChanging, ColumnChanged, RowChanging, RowChanged, RowDeleting, or RowDeleted.

To validate data at the Control level: respond to one of the Control validation events:
Enter, GotFocus, Leave, Validating, Validated, LostFocus.

To use an ErrorProvider with a Form control: set the ErrorProvider's ContainerControl
property to the control, and then, if necessary, call the SetError method to display an
error condition from within the control's Validating event.

Chapter 12: Data-Binding in Web Forms

Overview

In this chapter, you’ll learn how to:

§ Simple-bind controls at design time

§ Simple-bind controls at run time

§ Display bound data on a page

§ Complex-bind controls at design time

§ Complex-bind controls at run time

§ Use the DataBinder object

§ Store a DataSet in the session state

§ Store a DataSet in the ViewState

§ Update a data source using a Command object

In the previous eleven chapters, we’ve examined the ADO.NET object model, using

examples in Windows forms. In this chapter, we’ll examine the way that Microsoft

ADO.NET interacts with Microsoft ASP.NET and Web forms.

Understanding Data-Binding in Web Forms

As part of the Microsoft .NET Framework, ADO.NET is independent of any application in

which it is deployed, whether it’s a Windows form, like the exercises in the previous

chapters, a Web form, or a middle-level business object. But the way that data is pushed

to and pulled from controls is a function of the control itself, not of ADO.NET, and the

Web form data-binding architecture is very different from anything we’ve seen so far.

The Web form data-binding architecture is based on two assumptions. The first

assumption is that the majority of data access is read-only—data is displayed to users,

but in most cases, it is not updated by them. The second assumption is that performance

and scalability, while not insignificant in the Microsoft Windows operating system, are of

critical importance when applications are deployed on the Internet.

To optimize performance for read-only data access, the .NET Framework Web form

data-binding architecture is also read-only—when you bind a control to a data source,

the data will only be pushed to the bound property; it will not be pulled back from the

control.

This doesn’t mean that it’s impossible, or even particularly difficult, to edit data by using

Web forms, but it has to be done manually. As a simple example, if you have a Windows

Form TextBox control bound to a column in a DataSet, and the user changes the value

of that TextBox, the new value will be automatically propagated to the DataSet by the

.NET Framework, and the Item, DataColumn, and DataRow change events will be

triggered.

If a TextBox control on a Web form is bound to a column in a DataSet, however, the user

must explicitly submit any changes to the server, and you must write the code to handle


the submission, both on the client and the server. After the changes reach the DataSet,

of course, the DataColumn and DataRow change events will still be triggered.

Most of this arises from the nature of the Internet itself. In a traditional Web programming

environment, a page is created, sent to the user’s browser, and then the user, the page,

and any information the page contains are forgotten. In other words, the Internet is, by

default, stateless—the state of a page is not maintained between round-trips to the

server.

ASP.NET, the part of the .NET Framework that supports Web development, supports a

number of mechanisms for maintaining state, where appropriate, on both the client and

server. We’ll examine some of these as they relate to data access later in this chapter.

In addition to being stateless, traditional Internet applications are also disconnected.

When working with older data object models, this can sometimes be a problem, but as

we’ve seen, ADO.NET itself uses a disconnected data model, so this poses no problem.

Data Sources

Like controls on Windows forms, Web form controls can be bound to any data source,

not only traditional database tables. Technically, to qualify as a Web form data source,

an object must implement the IEnumerable interface. Arrays, Collections,

DataReaders, DataSets, DataViews, and DataRows all implement the IEnumerable

interface, and any of them can be used as the data source for a Web form control.
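As a hedged C# illustration (ddlRegions is a hypothetical DropDownList, not a control from the chapter's sample project), even a plain string array qualifies as a data source, because arrays implement IEnumerable:

// In Page_Load, for example: bind an ordinary array to a list control.
string[] regions = { "North", "South", "East", "West" };
ddlRegions.DataSource = regions;
ddlRegions.DataBind();   // push the values into the rendered list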

Because the management of server resources and the resulting scalability

issues are critical in the Internet environment, the choice of data access

methods must be given careful consideration. In most cases, when data is read

into the page and then discarded, it’s better to use an ADO.NET DataReader

rather than a DataSet because a DataReader provides better performance and

conserves server memory. However, this isn’t always the case, and there are

situations in which using the DataSet is both easier and more efficient.

If, for example, you’re working with related data, the DataSet object, with its

support for DataRelations and its GetChildRows and GetParentRows methods,

is both easier to implement and more efficient because it requires fewer round-trips

to the data source. Also, as we’ll see in Chapter 15, the DataSet provides

the mechanism for reading data from and writing data to an XML stream.

Finally, if the data will be accessed multiple times, as it is when you're paging

through data, it can be more efficient to store a DataSet than to re-create it

each time. This isn’t always the case, however. In some situations, the memory

that is required to store a large DataSet outweighs the performance gains from

maintaining the data. Also, if the data being stored is at all volatile, you run the

risk of the stored data becoming out of sync with the primary data store.

Roadmap We’ll examine binding to DataRelations in Chapter 13.

There is one other major difference in the data-binding architectures of

Windows and Web forms: Web forms do not directly support data-binding to an

ADO.NET DataRelation object. As we saw in Chapter 11, binding to a

DataRelation provides a simple and efficient method for displaying

master/detail relationships. To perform the same function in a Web form, you

must use the DataBinder object. We'll examine binding to DataRelations in

Chapter 13.

Binding Controls to an ADO.NET Data Source

Like controls on Windows forms, Web form controls support simple-binding virtually any

property to a single value in a data source and complex-binding control properties that


display multiple values. However, the binding mechanisms for Web forms are somewhat

different from those that we’ve seen and used with Windows forms.

Note In the Web form documentation, simple- and complex-binding are

referred to as single-value and multirecord binding.

Simple-Binding Control Properties

Web form controls can always be bound at run time. They can also be bound at design

time if the data source is available. (Because Web Forms applications tend to use Data

commands more often than DataSets, the data source is less often available at design

time.)

Unlike Windows forms, simple-bound Web form control properties don't expose data-binding

properties. Instead, the value is explicitly retrieved and assigned to the property

at run time by using a data-binding expression.

In Microsoft Visual Studio .NET, the Properties window supports a tool for creating data-binding
expressions, or you can create them at run time. The run-time data-binding

expression is delimited by <%# and %>:

propName = (<%# dataExpression %>)

The dataExpression can be any expression that resolves to a single data item—a

column of a DataRow, a property of another control on the page, or even an expression.

Note, however, that Web forms don’t support a BindingContext object or anything similar

to it, so there is no concept of a current row. You must specifically indicate which row of

a data source, such as a DataTable, will be displayed in the bound property. So, for

example, to refer to a DataColumn within a DataSet, you would need to use the following

syntax:

<%# myDataSet.myTable.DefaultView(0).myColumn %>

You can use a data-binding expression almost anywhere in a Web form page, as long as

the expression evaluates at run time to the correct data type. You can, of course, use

type-casting to coerce the value to the correct type. For example:

myTextbox.Text = <%# myDataSet.myTable.Rows.Count.ToString() %>
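For instance, a binding expression can appear directly in a server control tag in the .aspx markup. The fragment below is only an illustrative sketch that reuses the placeholder names from the syntax above; tbMyColumn is a hypothetical control name, not one from the sample project:

<!-- The expression is resolved when DataBind is called for the page or control. -->
<asp:TextBox id="tbMyColumn" runat="server"
    Text='<%# myDataSet.myTable.DefaultView(0).myColumn %>' />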

Simple-Bind a Control Property at Design Time

1. Open the WebForms project from the Start page or the File menu.

2. In the Solution Explorer, double-click WebForm1.aspx.

Visual Studio displays the page in the form designer.

3. Select the tbCategoryName text box.

4. In the Properties window, select (DataBindings) and click the Ellipsis

button.


Visual Studio opens the DataBindings dialog box.

5. In the Simple Binding pane, expand

dsMaster1/Categories/DefaultView/DefaultView.[0], and select

CategoryName.

6. Click OK.

Visual Studio creates the binding.

Note You can examine the syntax of the data-binding attribute on the HTML

tab of the project. Find the tag that defines the tbCategoryName text

box.

If the data source isn’t available at design time, you can bind a control property at run

time. Although it’s possible to do this in the control tag, it’s much easier to do so by using

the DataBinding event that is raised when the DataBind method is called for the control.

Simple-Bind a Control Property at Run Time

Visual Basic .NET

1. Press F7 to display WebForm1.aspx.vb.

2. Select tbCategoryDescription in the Control Name combo box, and

then select DataBinding in the Method Name combo box.


Visual Studio adds the event handler.

3. Add the following code to the procedure:

Me.tbCategoryDescription.Text = Me.dsMaster1.Categories(0).Description

Visual C# .NET

1. Select tbCategoryDescription in the form designer.

2. In the Properties Window, click the Events button, and then double-click

DataBinding.

Visual Studio opens the code window and adds the event handler.

3. Add the following code to the procedure:

4. this.tbCategoryDescription.Text =

this.dsMaster1.Categories[0].Description;

Just as with Windows forms, before you can display the data on your Web form, you

must explicitly load it from the data source by filling a DataSet with a DataAdapter or by executing a Data

command. But Web forms require an additional step: You must push the data into the

control properties.

This is done by calling the DataBind method, which is implemented by all controls that

inherit from System.Web.UI.Control. A call to the DataBind method cascades to its child

controls. Thus, calling DataBind for the Page class will call the DataBind method for all

the controls contained by the Page class.

When the DataBind method is invoked for a control, either directly or by cascading, the

data expressions embedded in control tags will be resolved and the DataBinding events

for the controls will be triggered.

If you’re using a Web form to update data, you must be careful when you call the

DataBind method. Much like a DataSet’s AcceptChanges method, DataBind replaces the

values currently contained in the bound properties.
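As a rough C# sketch (not one of the exercise steps that follow), the cascading behavior and the overwrite caution come together in a Page_Load handler like this; daCategories, dsMaster1, and lbOrders are names used by the chapter's sample project:

// Minimal Page_Load sketch: bind everything on the first request only.
private void Page_Load(object sender, System.EventArgs e)
{
    if (!this.IsPostBack)
    {
        this.daCategories.Fill(this.dsMaster1.Categories);
        this.DataBind();          // cascades to every bound control on the page
    }
    else
    {
        // On a postback, rebinding only one control avoids overwriting
        // values the user has just typed into other bound controls.
        this.lbOrders.DataBind();
    }
}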

Display Bound Data in the Page

Visual Basic .NET

1. In the code editor, add the following code to the Page_Load event:

2. Me.daCategories.Fill(Me.dsMaster1.Categories)

3. Me.daProducts.Fill(Me.dsMaster1.Products)

4. Me.daOrders.Fill(Me.dsMaster1.Orders)

Me.DataBind()

This code fills the three tables in the DataSet, and then calls the DataBind

method for the page, which will push the data into each of the bound controls

that it contains.

5. Press F5 to run the application.

Visual Studio displays the page in the default browser.

6. Close the browser.


Visual C# .NET

1. In the code editor, add the following code to the Page_Load event:

2. this.daCategories.Fill(this.dsMaster1.Categories);

3. this.daProducts.Fill(this.dsMaster1.Products);

4. this.daOrders.Fill(this.dsMaster1.Orders);

this.DataBind();

5. Press F5 to run the application.

Visual Studio displays the page in the default browser.

6. Close the browser.

Complex-Binding Control Properties

The process of complex-binding Web form controls closely resembles the process for

complex-binding Windows form controls. Complex-bound controls in both environments

expose the DataSource and DataMember properties for defining the source of the data,

and Web form controls expose a DataValueField property that is equivalent to the

ValueMember property of a Windows form control.

The DataList and DataGrid controls also expose a DataKeyField property that stores the

primary key information within the data source. The DataKeyField, which populates a

DataKeys collection, allows you to store the primary key information without

necessarily displaying it in the control.

In addition, the ListBox, DropDownList, CheckBoxList, RadioButtonList, and HtmlSelect

controls expose a DataTextField property that defines the column to be displayed. The

DataTextField property is equivalent to the DisplayMember property of a Windows form

control.

Roadmap We’ll examine binding to DataRelations in Chapter 13.

If the DataSource property is being set to a DataSet and the DataMember property is

being set to a DataTable, you can simply set the properties directly. As we’ll see in

Chapter 13, it is also possible to bind to DataRelations, but the process is somewhat less

than straightforward.

Complex-Bind a Control at Design Time

1. Display the form designer.

2. Select the dgProducts DataGrid.

3. In the Properties window, expand the Data section (if necessary),

select the DataSource property, and then select dsMaster1 in the

drop-down list.

Note Clear the Events button if you’re working in C#.

4. Select the DataMember property, and then select Products.

5. Press F5 to run the application.


Visual Studio displays the page in the default browser, showing all the

products in the data grid.

6. Close the browser.

In this exercise, we'll bind the lbOrders ListBox control in response to the
SelectedIndexChanged event of the dgProducts DataGrid control. The
SelectedIndexChanged event occurs when the user clicks one of the Orders buttons in
the DataGrid because its CommandName property has been set to Select.

Complex-Bind a Control at Run Time

Visual Basic .NET

1. In the form designer, double-click the dgProducts DataGrid control.

Visual Studio adds a SelectedIndexChanged event handler to the code editor.

2. Add the following code to the procedure:

3. Me.dvOrders.Table = Me.dsMaster1.Orders

4. Me.dvOrders.RowFilter = "ProductID = " & _

5. Me.dgProducts.SelectedItem.Cells(1).Text

6. Me.lbOrders.DataSource = Me.dvOrders

7. Me.lbOrders.DataTextField = "OrderDate"

Me.lbOrders.DataBind()

The code sets the RowFilter property of the dvOrders DataView to the

ProductID of the row selected in the DataGrid. It then sets the DataSource

and DataTextField properties of the ListBox, and then calls the DataBind

method to push the data to the control.

8. Press F5 to run the application.

Visual Studio displays the page in the default browser.

9. Click the Orders button in one of the rows in the data grid.

The page displays the order dates in the list box. Note that the browser made

a round-trip to the server to retrieve the data.


10. Close the browser.

Visual C# .NET

1. In the form designer, double-click the dgProducts DataGrid control.

Visual Studio adds a SelectedIndexChanged event handler to the code editor.

2. Add the following code to the procedure:

3. this.dvOrders.Table = this.dsMaster1.Orders;

4. this.dvOrders.RowFilter = "ProductID = " +

5. this.dgProducts.SelectedItem.Cells[1].Text;

6. this.lbOrders.DataSource = this.dvOrders;

7. this.lbOrders.DataTextField = "OrderDate";

this.lbOrders.DataBind();

The code sets the RowFilter property of the dvOrders DataView to the

ProductID of the row selected in the DataGrid. It then sets the DataSource

and DataTextField properties of the ListBox, and then calls the DataBind

method to push the data to the control.

8. Press F5 to run the application.

Visual Studio displays the page in the default browser.

9. Click the Orders button in one of the rows in the data grid.

The page displays the order dates in the list box. Note that the browser made

a round-trip to the server to retrieve the data.

10. Close the browser.


Using the DataBinder Object

In addition to embedding data-binding expressions directly in the HTML stream, the .NET

Framework also exposes the DataBinder object, which evaluates data-binding

expressions and optionally formats the result as a string.

The DataBinder syntax is straightforward, and it can perform type conversion

automatically, which greatly simplifies coding in some circumstances. This is particularly

true when working with an ADO.NET object—multiple castings are required, and the

syntax is complex. However, the DataBinder object is late-bound, and like all late-bound

objects, it does incur a performance penalty, primarily due to its type conversion.

The DataBinder object is a static object, which means that it can be used without

instantiation. It can be called either from within the HTML for the page (surrounded by

<%# and %> brackets) or in code.

The DataBinder object exposes no properties or events, and only a single method, Eval.

The Eval method is overloaded to accept an optional format string, as shown in Table

12-1.

Table 12-1: Eval Methods

Eval(dataSource, dataExpression): Returns the value of dataExpression in the dataSource
at run time.

Eval(dataSource, dataExpression, formatStr): Returns the value of dataExpression in the
dataSource at run time, and then formats it according to the formatStr.

The Eval method expects a data container object as the first parameter. When working

with ADO.NET objects, this is usually a DataSet, DataTable, or DataView object. It can

also be the Container object if the expression runs from within a List control in a

template, such as a DataList, DataGrid, or Repeater, in which case the first parameter

should always be Container.DataItem.

The second parameter of the Eval method is a string that represents the specific data

item to be returned. When working with ADO.NET objects, this parameter would typically

be the name of a DataColumn, but it can be any valid data expression.

The final, optional parameter is a format specifier identical in format to those used by the

String.Format method. If the format specifier is omitted, the Eval method returns an

object, which must be explicitly cast to the correct type.
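As a hedged C# illustration, the three-argument overload can also be called in code against the chapter's dsMaster1 DataSet; the UnitPrice column and the local variable are assumptions for the example, not steps from the exercise that follows:

// Evaluate a column of the first Products row and format it as currency.
// Without the format string, Eval would return an object requiring a cast.
string price = DataBinder.Eval(
    this.dsMaster1.Products.DefaultView[0],   // data container (a DataRowView)
    "UnitPrice",                              // data expression: the column name
    "{0:C}");                                 // format string, String.Format style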

Use the DataBinder to Bind a Control Property

Visual Basic .NET

1. In the code editor, select tbCategoryID in the Control Name combo

box, and then select DataBinding in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following line to the procedure:

3. Me.tbCategoryID.Text = _
4.     DataBinder.Eval(Me.dsMaster1.Categories.DefaultView(0), _
       "CategoryID")

Notice that you must explicitly reference the first row of the DataTable's

DefaultView. This is because Web forms have no CurrencyManager to handle

retrieving a current row from the DataSet.

5. Press F5 to run the application.

Visual Studio displays the page in the default browser with the CategoryID

value.

6. Close the browser.

Visual C# .NET

1. In the form designer, select tbCategoryID, display the events in the

Properties Window, and double-click DataBinding.

Visual Studio adds the event handler to the code editor window.

2. Add the following line to the procedure:

3. this.tbCategoryID.Text =
4.     DataBinder.Eval(this.dsMaster1.Categories.DefaultView[0],
       "CategoryID").ToString();

Notice that you must explicitly reference the first row of the DataTable's

DefaultView. This is because Web forms have no CurrencyManager to handle

retrieving a current row from the DataSet.

5. Press F5 to run the application.

Visual Studio displays the page in the default browser with the CategoryID

value.

6. Close the browser.


Maintaining ADO.NET Object State

Because the Web form doesn’t maintain state between round-trips to the server, if you

want to maintain a DataSet between the time that the page is first created and the time

the user sends it back with changes, you must do so explicitly.

You can maintain a DataSet on the server by storing it in either the Application or

Session state, or you can maintain it on the client by storing it in the Page class’s

ViewState. You can also store the DataSet in a hidden field on the page, although

because this is how the Page class implements ViewState, there’s rarely any advantage

to doing so.

Whether you maintain the data on the server or the page, you must always be aware of

concurrency issues. You’re saving round-trips to the data source, and the performance

gains can be significant, particularly if the data requires calculations. However, changes

to the data source won’t be reflected in the stored data. If the data is volatile, you must

re-create the ADO.NET objects each time in order to ensure that they reflect the most

recent changes.

Maintaining ADO.NET Objects on the Server

ASP.NET provides a number of mechanisms for maintaining state within an Internet

application. On the server side, the two easiest mechanisms to use are the Application

state and the Session state. Both state structures are dictionaries that store data as

name/value pairs. The value is stored and retrieved as an object, so you must cast it to

the correct type when you restore it.

The Application and Session states are used identically; the difference is scope. The

Application state is global to all pages and all users within the application. The Session

state is specific to a single browser session. (Please refer to the ASP.NET

documentation for additional information about Application and Session states.)

The IsPostBack property of the Page class, which is False the first time a Page is loaded

for a specific browser session and True thereafter, can be used in the Page_Load event

to control when the data is created and loaded.

Store the DataSet in the Session State

Visual Basic .NET

1. Change the Page_Load event to store the DataSet in the Session

state:

2. If Me.IsPostBack Then

3. Me.dsMaster1 = CType(Session("dsMaster"), DataSet)

4. Else

5. Me.daCategories.Fill(Me.dsMaster1.Categories)

6. Me.daProducts.Fill(Me.dsMaster1.Products)

7. Me.daOrders.Fill(Me.dsMaster1.Orders)

8. Session("dsMaster") = Me.dsMaster1

9. End If

Me.DataBind()

10. Press F5 to run the application.

Visual Studio displays the page in the default browser.


11. Click several items in the dgProducts data grid.

You might be able to notice a slight increase in performance.

12. Close the browser.

Visual C# .NET

1. Change the Page_Load event to store the DataSet in the Session

state:

2. if (this.IsPostBack)

3. this.dsMaster1 = (dsMaster) Session["dsMaster"];

4. else

5. {

6. this.daCategories.Fill(this.dsMaster1.Categories);

7. this.daProducts.Fill(this.dsMaster1.Products);

8. this.daOrders.Fill(this.dsMaster1.Orders);

9. this.Session["dsMaster"] = this.dsMaster1;

10. }

this.DataBind();

11. Press F5 to run the application.

Visual Studio displays the page in the default browser.

12. Click several items in the dgProducts data grid.

You may be able to notice a slight increase in performance.

13. Close the browser.

Maintaining ADO.NET Objects on the Page

Storing data on the server can be convenient, but it does consume server resources

which, in turn, negatively impacts application scalability. An alternative is to store the

data on the page itself. This relieves the pressure on the server, but because the data is

passed as part of the data stream, it can increase the time required to load and post

the page.


Data is stored on the page either in a custom hidden field or in the ViewState property of

a control. In theory, any ViewState property can be used, but the Page class’s ViewState

is the most common property.

Store the DataSet in the ViewState

Visual Basic .NET

1. Change the Page_Load event handler to store the data in the Page

class ViewState:

2. If Me.IsPostBack Then

3. Me.dsMaster1 = CType(ViewState("dsMaster"), DataSet)

4. Else

5. Me.daCategories.Fill(Me.dsMaster1.Categories)

6. Me.daProducts.Fill(Me.dsMaster1.Products)

7. Me.daOrders.Fill(Me.dsMaster1.Orders)

8. ViewState("dsMaster") = Me.dsMaster1

9. End If

10. Me.DataBind()

11. Press F5 to run the application.

Visual Studio displays the page in the default browser.

12. Click several items in the dgProducts data grid.

13. Close the browser.

Visual C# .NET

1. Change the Page_Load event handler to store the data in the Page

class ViewState:

2. if (this.IsPostBack)

3. this.dsMaster1 = (dsMaster) ViewState["dsMaster"];

4. else

5. {

6. this.daCategories.Fill(this.dsMaster1.Categories);

7. this.daProducts.Fill(this.dsMaster1.Products);

8. this.daOrders.Fill(this.dsMaster1.Orders);

9. this.ViewState["dsMaster"] = this.dsMaster1;

10. }

11. this.DataBind();

12. Press F5 to run the application.

Visual Studio displays the page in the default browser.


13. Click several items in the dgProducts data grid.

14. Close the browser.

Updating a Data Source from a Web Form

Remember that ADO.NET objects behave in exactly the same manner when they’re

instantiated in a Web form page as when they’re used in a Windows form. Because of

this, in theory, the processes of updating a data source should be identical.

On one level, this is true. The actual update is performed by directly running a Data

command or by calling the Update method of a DataAdapter. But remember that a Web

form page doesn't maintain its state and that its data-binding architecture is one-way.

Because the Web form data-binding architecture is one-way, you must explicitly push

the values returned by the page into the appropriate object. With a Windows form, after a

control property has been bound to a column in a DataTable, any changes that the user

makes to the value will be immediately and automatically reflected in the DataTable.

On a Web form, on the other hand, you must explicitly retrieve the value from the control

and update the ADO.NET object. You might, for example, use the control values to set

the parameters of a Data command or update a row in a DataTable.
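For the DataTable variant, a hedged C# sketch might look like the following; the btnTable_Click handler name is hypothetical, while dsMaster1, daCategories, and the text boxes are names used elsewhere in this chapter, and the DataSet is assumed to have been restored from Session state or ViewState:

// Copy the posted control values into the current DataRow, then let the
// DataAdapter generate and execute the UPDATE statement.
private void btnTable_Click(object sender, System.EventArgs e)
{
    System.Data.DataRow row = this.dsMaster1.Categories.Rows[0];
    row["CategoryName"] = this.tbCategoryName.Text;
    row["Description"] = this.tbCategoryDescription.Text;
    this.daCategories.Update(this.dsMaster1.Categories);   // sends the change
}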

Update a Data Source Using a Command Object

Visual Basic .NET

1. Change the Page_Load event code to read:

2. If Not IsPostBack Then

3. Me.daCategories.Fill(Me.dsMaster1.Categories)

4. Me.daProducts.Fill(Me.dsMaster1.Products)

5. Me.daOrders.Fill(Me.dsMaster1.Orders)

6. Me.DataBind()

End If

The IsPostBack property prevents the Fill and DataBind methods from being

called when the page is posted back. Remember that DataBind replaces

existing values.

7. In the code editor, select btnCommand in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

8. Add the following code to the event handler:

9. Dim cmdUpdate As System.Data.OleDb.OleDbCommand

10. cmdUpdate = Me.daCategories.UpdateCommand

11.

12. With cmdUpdate


13. .Parameters(0).Value = Me.tbCategoryName.Text

14. .Parameters(1).Value = Me.tbCategoryDescription.Text

15. .Parameters(2).Value = Me.tbCategoryID.Text

16. End With

17.

18. Me.cnNorthwind.Open()

19. cmdUpdate.ExecuteNonQuery()

Me.cnNorthwind.Close()

The code uses the UpdateCommand of the daCategories DataAdapter to

perform the update. (This is a shortcut that wouldn’t ordinarily be available.)

The three parameters are set to the values of the relevant fields on the page,

and then the Connection is opened, the Command is executed, and the

Connection is closed.

20. Press F5 to run the application.

Visual Studio displays the page in the default browser.

21. Change the Category Name to Categories New.

22. Click Command.

The page updates the database.

23. Close the browser.

Visual C# .NET

1. Change the Page_Load event code to read:

2. if (IsPostBack == false)

3. {

4. this.daCategories.Fill(this.dsMaster1.Categories);

5. this.daProducts.Fill(this.dsMaster1.Products);

6. this.daOrders.Fill(this.dsMaster1.Orders);

7. this.DataBind();

}

The IsPostBack property prevents the Fill and DataBind methods from being

called when the page is posted back. Remember that DataBind replaces

existing values.

8. In the form designer, double-click btnCommand.

Visual Studio adds the event handler to the code.

9. Add the following code to the event handler:

10. System.Data.OleDb.OleDbCommand cmdUpdate;


11. cmdUpdate = this.daCategories.UpdateCommand;

12. cmdUpdate.Parameters[0].Value =

this.tbCategoryName.Text;

13. cmdUpdate.Parameters[1].Value =

this.tbCategoryDescription.Text;

14. cmdUpdate.Parameters[2].Value =

this.tbCategoryID.Text;

15.

16. this.cnNorthwind.Open();

17. cmdUpdate.ExecuteNonQuery();

this.cnNorthwind.Close();

The code uses the UpdateCommand of the daCategories DataAdapter to

perform the update. (This is a shortcut that wouldn’t ordinarily be available.)

The three parameters are set to the values of the relevant fields on the page,

and then the Connection is opened, the Command is executed, and the

Connection is closed.

18. Press F5 to run the application.

Visual Studio displays the page in the default browser.

19. Change the Category Name to Categories New.

20. Click Command.

The page updates the database.

21. Close the browser.

Chapter 12 Quick Reference

To simple-bind a control at design time: use the dialog displayed when you click the
Ellipsis button in the (DataBindings) property in the Properties Window.

To simple-bind a control at run time: push the data into the control in the control's
DataBinding event:
myControl.Text = myTable[0].myColumn

To display bound data on a page: call the DataBind method for the Page or for individual
controls:
Me.DataBind()

To complex-bind controls at design time: set the DataSource and DataMember properties
in the Properties Window.

To complex-bind controls at run time: set the DataSource, DataMember, and, if applicable,
the DataTextField properties of the control, and call its DataBind method.

To use the DataBinder object: call its Eval method, passing in the container and column
values:
myControl.Text = DataBinder.Eval(myTable[0], "myColumn")

To store data in the Session state: set or retrieve the DataSet based on the IsPostBack
property:
If Me.IsPostBack Then
    myTable = CType(Session("myTable"), DataTable)
Else
    myDA.Fill(myTable)
    Session("myTable") = myTable
End If

To store data in the ViewState: set or retrieve the DataSet based on the IsPostBack
property:
If Me.IsPostBack Then
    myTable = CType(ViewState("myTable"), DataTable)
Else
    myDA.Fill(myTable)
    ViewState("myTable") = myTable
End If

Chapter 13: Using ADO.NET in Web Forms

Overview

In this chapter, you’ll learn how to:

§ Display data in a DataGrid control

§ Implement sorting in a DataGrid control

§ Display data in a DataList control

§ Display a DataList control as flowed text

§ Implement paging in a DataGrid control

§ Implement manual navigation in a Web form

§ Use validation controls to control user entry

In the previous chapter, we examined the basic data-binding architecture for Web forms.

In this chapter, we’ll examine a few common data-binding tasks in more detail.

Using Template-Based Web Controls

Microsoft ASP.NET Web Forms expose two controls that are specifically designed to

display data: the DataGrid and DataList. Both controls display the rows of a data source,

but vary in their capabilities.

Like its Windows forms equivalent, the DataGrid control displays data in a tabular format.

It provides intrinsic support for in-place editing and paging data, but it has relatively

limited formatting capabilities. The DataList control also provides intrinsic support for in-place

editing, and allows for more flexible formatting.


The Microsoft .NET Framework also supports a Repeater control that allows almost

unlimited formatting capability, but it has limited support in the Design View of the Page

Designer—the majority of the formatting must be done directly in the HTML View of the

Page Designer.

All three of these controls support templates, which are sets of controls that define the

content of each section of the control. (A template is not the same as a style, which

defines appearance, rather than content.) The template sections that are available, as

well as the precise behavior of each section, differ between controls.

The DataGrid control, for example, doesn’t support an AlternatingItemTemplate, and its

ItemTemplates define the contents of a column, while the ItemTemplate for a DataList

defines the contents of a row. We’ll examine the specific templates supported by each

control later in this chapter.

All three template-based controls can contain buttons that raise events on the server. As

we’ll see, the DataGrid and DataList controls have intrinsic support for in-place editing,

and all three controls also support user-defined buttons. When a user clicks a user-defined

button, an ItemCommand event is sent to the control that contains the template.

The ItemCommand’s event argument parameter exposes the properties required to

determine which button and which item within the control triggered the event. The three

controls expose different classes of event arguments, but all three expose the same

properties, as shown in Table 13-1.

Table 13-1: ItemCommand Event Arguments

CommandArgument: String used as an argument for the command.
CommandName: String used to determine the command to be performed.
CommandSource: The button that generated the event.
Item: The selected item in the containing control.

The CommandArgument and CommandName properties are defined when the button is

added to the control. The CommandSource property refers to the button itself, while the

Item is the selected row in the control.
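A hedged C# sketch of an ItemCommand handler shows how these properties are typically used; the dgProducts grid name, the "Details" CommandName, and the cell index are all assumptions for illustration, not settings made in this chapter's exercises:

// Handle a user-defined button in a DataGrid; the DataList and Repeater
// raise their own ItemCommand events with equivalent argument properties.
private void dgProducts_ItemCommand(object source,
    System.Web.UI.WebControls.DataGridCommandEventArgs e)
{
    if (e.CommandName == "Details")
    {
        // e.Item is the row containing the button that was clicked.
        string productId = e.Item.Cells[1].Text;
        // ... display the details for productId ...
    }
}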

Using the DataGrid Control

As with Windows forms, the DataGrid control is bound to a data source by using the

DataSource and DataMember properties. One row will be displayed in the DataGrid for

every row in the data source. By default, a column will be displayed for each column in

the data source, but as we’ll see, this can be configured through the Property Builder.


In addition to the DataSource and DataMember properties, the DataGrid control exposes

a DataKeyField, which is roughly equivalent to the ValueMember property of the

Windows form version and can be set to the name of a column in the data source that

uniquely identifies each row. The column specified as the DataKeyField doesn’t need to

be displayed in the DataGrid. Note, however, that the DataKeyField doesn’t support

multicolumn keys.
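For illustration only, and assuming CategoryID is an integer column, the stored key can later be read back from the grid's DataKeys collection with C# code such as this sketch:

// Retrieve the hidden key for the currently selected row; DataKeys is
// indexed by the row's item index within the grid.
private void dgCategories_SelectedIndexChanged(object sender, System.EventArgs e)
{
    int categoryId = (int)this.dgCategories.DataKeys[this.dgCategories.SelectedIndex];
    // ... use categoryId to filter or look up related data ...
}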

Add a DataGrid to a Web Form

1. Open the UsingWebForms project from the Start page or the File

menu.

2. In the Solution Explorer, double-click the DataGrid.aspx file.

Microsoft Visual Studio .NET opens the page in the form designer.

3. Select the DataGrid, and then click Property Builder in the bottom

pane of the Properties window.

Visual Studio displays the dgCategories Property Builder.

4. Set dvCategories as the DataSource and CategoryID as the

DataKeyField.


The columns displayed in the DataGrid are defined on the Columns tab of the Property

Builder. Five types of columns are available, as shown in Table 13-2.

Table 13-2: DataGrid Column Types

Bound: A column from the data source.
Button: A button with custom functionality.
Select: An intrinsic button that allows a row to be selected.
Edit, Update, Cancel: Intrinsic buttons that support in-place editing.
Delete: An intrinsic button that allows a row to be deleted.
Hyperlink: Displays the data as a hyperlink.
Template: Custom combinations of controls, which may be data-bound.


A Bound column displays a column from the data source. You can determine whether

the column is visible and whether it is read-only in the Property Builder. The Property

Builder also allows you to specify a data formatting expression to control the way the

data is displayed.

A Button column is a user-defined control. You specify fixed text for the button or bind

the text to a column in the data source by setting its TextField property.

In addition to the generic Button column, the DataGrid exposes a set of intrinsic buttons

to support in-place editing: Edit, Update, and Cancel (which work as a set), Select, and

Delete. As we’ll see, these intrinsic buttons trigger custom server-side events rather than

the generic ItemCommand. The Select and Delete buttons can be data-bound by setting

their TextField properties.

A Hyperlink column embeds an <a href> tag in the text, allowing the user to navigate to

a different page by selecting a value in the column.

Finally, the Template column allows a fine degree of formatting control by using the
Template editor. Any of the other column types can also be converted to Template columns
by clicking the Link button in the Property Builder. We'll examine the use of Template

columns later in this chapter.

Add Data-Bound Columns to a DataGrid

1. Select Columns in the left pane of the Property Builder.

Visual Studio displays the Columns tab.

2. Clear the Create Column Automatically At Run Time check box.

3. In the Available Columns list, expand the Button column node, choose

the Select Column type, and then click the Add button (“>”) to move it

to the Selected Columns list.


4. Delete the Text property, and then set the TextField property to

CategoryID.

5. In the Available Columns list, expand the Data Fields node (if

necessary), and then move CategoryName and Description to the

Selected Columns list.


6. Click OK.

Visual Studio configures the DataGrid columns.

7. Press F5.

Visual Studio displays the page in the default browser.

8. Close the browser.

Unlike the other two template-based controls, the DataGrid control doesn’t require you to

specify the contents of each template. Except for columns that are explicitly declared to

be Template columns, the general formatting of the DataGrid controls the contents and


the layout of each section. You can convert any column to a Template column by

clicking the Link button in the Property Builder.

Template columns in the DataGrid expose the following sections:

§ HeaderTemplate

§ FooterTemplate

§ ItemTemplate

§ AlternatingItemTemplate

§ EditItemTemplate

§ Pager

The HeaderTemplate and FooterTemplate sections define the layout of the fixed top and

bottom sections of the DataGrid. The ItemTemplate and AlternatingItemTemplate

sections define the controls used to display values, while the EditItemTemplate section

defines the controls that are used to edit the values. The Pager section is used for

automatic data paging, which we’ll discuss later in this chapter.

Add a Template Column to the DataGrid

1. Select the DataGrid in the form designer, and then click Property

Builder in the bottom pane of the Properties window.

Visual Studio displays the Property Builder.

2. Select Columns in the left pane of the Property Builder.

Visual Studio displays the Columns tab.

3. In the Available Columns list, expand the Data Fields node (if

necessary), and then add Current to the Selected Columns list. Use

the up and down arrows to position the Current column between the

Button column and CategoryName.

4. Click the link labeled Convert This Column Into a Template Column.

Visual Studio displays the Template column properties.


5. Click OK.

Visual Studio adds the column to the DataGrid in the form designer.

6. Right-click the DataGrid in the form designer. On the context menu,

choose Edit Template, and then on the submenu, select Columns[1]—

Current.

Visual Studio displays the Template editor.


7. Delete the label in the ItemTemplate section, and then drag a

CheckBox control from the Web Forms tab of the Toolbox onto the

ItemTemplate section.

8. Use the same procedure to replace the TextBox control in the EditItem

section with a CheckBox control.

9. Right-click the Template editor, and then choose End Template

Editing.

Visual Studio displays the column as a CheckBox in the form designer.

10. Press F5.

Visual Studio displays the page in the default browser.


11. Close the browser.

In addition to the ItemCommand event, which is raised by custom buttons (columns of

Button type), the DataGrid also exposes the events shown in Table 13-3.

Table 13-3: DataGrid Events

ItemCreated: Occurs when an item in the DataGrid is first created.
ItemDataBound: Occurs after the item is bound to a data value.
EditCommand: Occurs when the user clicks the intrinsic Edit button.
DeleteCommand: Occurs when the user clicks the intrinsic Delete button.
UpdateCommand: Occurs when the user clicks the intrinsic Update button.
CancelCommand: Occurs when the user clicks the intrinsic Cancel button.
SortCommand: Occurs when a column is sorted.
PageIndexChanged: Occurs when a page index item is clicked.

The ItemCreated and ItemDataBound events occur during the initial layout of the page.

They’re typically used to format data or other elements on the page. The Edit, Delete,

Update, and Cancel commands are triggered by the intrinsic in-place editing buttons.

The SortCommand event occurs when the DataGrid is set to allow sorting and the user

clicks a column head in the DataGrid. Finally, the PageIndexChanged event occurs as

part of the automatic paging of the DataGrid. We’ll discuss this event in detail later in this

chapter.

Note The use of the intrinsic in-place editing commands is

straightforward and well-documented in the Visual Studio online

Help. We won’t be discussing them in any detail here.

Implement Sorting in a DataGrid

Visual Basic .NET

1. Select the DataGrid in the form designer, and then click Property

Builder in the bottom pane of the Properties window.

Visual Studio displays the Property Builder.

2. On the General tab, select the Allow Sorting check box.

3. Click OK.

Visual Studio displays the column headings as link buttons.


4. Press F7 to open the code editor for the page.

5. Select dgCategories in the Control Name combo box, and then select

SortCommand in the Method Name combo box.

Visual Studio adds the event handler to the code.

6. Add the following lines to the event handler:

7. Me.dvCategories.Sort = e.SortExpression

8. DataBind()

9. Press F5 to run the application.

Visual Studio displays the page in the default browser.

10. Click the Description column heading.

The page is displayed with the DataGrid sorted by Description.

11. Close the browser.

12. Close the code editor and the form designer.

Visual C# .NET

1. Select the DataGrid in the form designer, and then click Property

Builder in the bottom pane of the Properties window.

Visual Studio displays the Property Builder.

2. On the General tab, select the Allow Sorting check box.


3. Click OK.

Visual Studio displays the column headings as link buttons.

4. Display the DataGrid events in the Properties Window, and double-click

the SortCommand property.

Visual Studio opens the code editor window and adds the event handler to the

code.

5. Add the following lines to the event handler:

6. this.dvCategories.Sort = e.SortExpression;

DataBind();

7. Press F5 to run the application.

Visual Studio displays the page in the default browser.

8. Click the Description column heading.

The page is displayed with the DataGrid sorted by Description.


9. Close the browser.

10. Close the code editor and the form designer.

Using the DataList Control

As we’ve seen, the DataGrid has a default structure. You need to use templates only

where your application requires advanced formatting. The DataList doesn’t assume any

structure and requires that you specify at least the ItemTemplate section before it can

display any data.

The DataList control is bound in the same way as the DataGrid control: by setting the

DataSource property, the DataMember property (if necessary), and, optionally, the

DataKeyField property.

The DataList control supports the following templates:

§ HeaderTemplate

§ FooterTemplate

§ ItemTemplate

§ AlternatingItemTemplate

§ SeparatorTemplate

§ SelectedItemTemplate

§ EditItemTemplate

The HeaderTemplate and FooterTemplate are identical to the corresponding templates

in the DataGrid. Unlike the DataGrid, the four Item templates do not necessarily

correspond to a column, only to a single row in the data source. The SeparatorTemplate

is used when the contents of the DataList are displayed as flowed text. We’ll examine

flowed text later in this chapter.

Add a DataList to a Web Form

1. In the Solution Explorer, right-click DataList.aspx, and choose Set as

Start Page.

2. Double-click the file.

Visual Studio displays the Web form in the form designer.


3. Drag a DataList control from the Web Form tab of the Toolbox onto the

form designer.

Visual Studio adds a placeholder for the DataList control.

4. In the Properties window, set the DataSource property of the DataList

to dsCategories1, and then set its DataMember property to

Categories.

5. Right-click the DataList in the form designer. On the context menu,

select Edit Template, and then on the submenu, select Item

Templates.

Visual Studio displays the Template editor.


6. Drag a Label control from the Toolbox onto the ItemTemplate section

of the Template editor.

7. In the Properties Window, select the (DataBindings) property and click

the Ellipsis button.

Visual Studio opens the DataBindings dialog box.

8. Expand the Container node and the DataItem node, and then select

CategoryName.


9. Click OK.

10. Right-click the DataList control, and then on the context menu, select

End Template Editing.

Visual Studio displays the bound item in the DataList placeholder.

11. Press F5.

Visual Studio displays the page in the default browser.

12. Close the browser.


The DataList control doesn’t presuppose a table layout, although that is the default

layout. There are two options for the layout of the data in the DataList, which is controlled

by the RepeatLayout property. If the RepeatLayout property is set to Table, the data

items are displayed as an HTML table. If the RepeatLayout property is set to Flow, the

items are included in-line as part of the document’s regular flow of text.

If the DataList values are displayed as a table, the RepeatDirection property controls the

way in which the table will be filled. A value of Vertical fills the table cells from top to

bottom, like a newspaper column, while setting the RepeatDirection property to

Horizontal fills the cells from left to right, like a calendar. The actual number of columns

is determined by the RepeatColumns property.
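These three properties can also be set in code rather than in the Properties window; the following C# lines are only a sketch for a hypothetical DataList named dlCategories:

// Flow layout, three columns, filled top to bottom like a newspaper column.
dlCategories.RepeatLayout = System.Web.UI.WebControls.RepeatLayout.Flow;
dlCategories.RepeatDirection = System.Web.UI.WebControls.RepeatDirection.Vertical;
dlCategories.RepeatColumns = 3;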

Display a DataList as Flowed Text

1. Select the DataList control in the form designer.

2. In the Properties window, set the RepeatLayout property to Flow, set

the RepeatColumns property to 3, and then set the RepeatDirection

property to Vertical.

3. Right-click the DataList in the form designer, select Edit Template on

the context menu, and then on the submenu, select Separator

Template.

Visual Studio displays the Template editor.

4. Add a comma and a space to the template.

5. Right-click the Template editor, and then on the context menu, select

End Template Editing.

Visual Studio displays the data items separated by the comma and a space.

6. Increase the width of the control to about the width of a browser page.

7. Press F5.


Visual Studio displays the page in the default browser.

8. Close the browser.

9. Close the form designer.

Moving Through Data

Whenever performance and scalability are issues, it’s important to limit the amount of

data displayed on a single page. For usability reasons, you should always limit the

amount of data that is displayed, no matter what the environment—users don’t

appreciate having to wade through masses of data to find the single bit of information

they require.

One common technique in the Internet environment for limiting the amount of data on a

single Web page is to display only a fixed number of rows and allow the user to move

forward and backward through the DataSet. This technique is usually referred to as

paging.

The Web form DataGrid control provides intrinsic support for paging by using the three

methods shown in Table 13-4.

Table 13-4: DataGrid Paging Methods

Default Paging/Default Navigation: Displays either Next and Previous buttons or page
numbers as part of the DataGrid; the CurrentPageIndex property is updated by the DataGrid.

Default Paging/Custom Navigation: Navigation buttons are outside the grid, and the
CurrentPageIndex property is set manually.

Custom Paging: Navigation buttons are outside the grid, and all paging is handled within
application code.

The simplest method is, of course, to use the DataGrid control’s Default Paging/Default

Navigation method, but the custom options are only slightly more difficult to implement.

DataGrid paging is controlled by two of its properties. The PageSize property, which

defaults to 10, determines the number of items to display. The CurrentPageIndex

property determines the set of rows that will be displayed when the page is rendered.

Though it doesn’t control paging, the read-only property PageCount returns the total

number of pages of data in the data source.

When the user selects either one of the default navigation buttons, ASP.NET raises a

PageIndexChanged event. The event arguments parameter of this event includes a

NewPageIndex property. Rendering the new page in the DataGrid is as simple as setting

the DataGrid control’s CurrentPageIndex property to the value of NewPageIndex and

calling the DataBind method.
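The Default Paging/Custom Navigation option works the same way, except that your own button sets CurrentPageIndex. The C# sketch below assumes a hypothetical btnNextPage button placed outside the grid:

// Advance the grid by one page, guarding against running past the last page.
private void btnNextPage_Click(object sender, System.EventArgs e)
{
    if (this.dgCategories.CurrentPageIndex < this.dgCategories.PageCount - 1)
    {
        this.dgCategories.CurrentPageIndex += 1;
        this.DataBind();   // re-render the grid for the new page
    }
}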

Implement Default Paging in a DataGrid Control

Visual Basic .NET

1. In the Solution Explorer, right-click DataGrid.aspx, and then on the

context menu, select Set as Start Page.

2. In the Solution Explorer, double-click DataGrid.aspx.

Visual Studio displays the page in the form designer.

3. Select the DataGrid, and then click Property Builder in the bottom

pane of the Properties window.

Visual Studio displays the Property Builder.

4. Select Paging in the left pane of the Property Builder.

Visual Studio displays the Paging properties.

5. Select the Allow Paging check box, and then set the Page Size

property to 5 rows.


6. Click OK.

7. Press F7 to display the code editor.

8. Select dgCategories in the Control Name combo box, and then select

PageIndexChanged in the Method Name combo box.

Visual Studio adds the event handler to the code.

9. Add the following lines to the procedure:

10. Me.dgCategories.CurrentPageIndex = e.NewPageIndex

DataBind()

11. Press F5.

Visual Studio displays the page in the default browser.

12. Click the Next (“>”) button.

Visual Studio displays the remaining 3 rows in the DataGrid.

13. Close the browser.

14. Close the code editor and the form designer.

Visual C# .NET

1. In the Solution Explorer, right-click DataGrid.aspx, and then on the

context menu, select Set as Start Page.

2. In the Solution Explorer, double-click DataGrid.aspx.

Visual Studio displays the page in the form designer.

3. Select the DataGrid, and then click Property Builder in the bottom

pane of the Properties window.

Visual Studio displays the Property Builder.


4. Select Paging in the left pane of the Property Builder.

Visual Studio displays the Paging properties.

5. Select the Allow Paging check box, and then set the Page Size

property to 5 rows.

6. Click OK.

7. Display the DataGrid events in the Properties Window, and then double-click the PageIndexChanged event.

Visual Studio opens the code editor window and adds the event handler to the code.

8. Add the following lines to the procedure:

9. this.DataGrid1.CurrentPageIndex = e.NewPageIndex;

DataBind();

10. Press F5.

Visual Studio displays the page in the default browser.

11. Click the Next (“>”) button.

Visual Studio displays the remaining 3 rows in the DataGrid.


12. Close the browser.

13. Close the code editor and the form designer.

Web forms don’t implement a BindingContext property that maintains a reference to a

current position in a data source. It’s easy enough, however, to maintain a Position

property, stored either in the Session state or in the Page object’s ViewState, and handle

the data manipulation manually.

You might use this technique, for example, if you want to display only a single row on the

Web page, but allow the user to navigate through all the rows by using the same

navigation buttons that are typically available on a Windows form.

Implement Manual Navigation on a Web Form

Visual Basic .NET

1. In the Solution Explorer, right-click Position.aspx, and then select Set

as Start Page.

2. Double-click the file.

Visual Studio displays the page in the form designer.

3. Press F7 to display the code editor.

4. Add the following global declaration to the top of the class:

5. Public Position as Integer

6. Add the following lines to the Page_Load Sub:

7. If Me.IsPostBack Then

8. Me.dsCategories1 = CType(ViewState("dsCategories"), DataSet)

9. Me.Position = CType(ViewState("Position"), Integer)

10. Else

11. Me.daCategories.Fill(Me.dsCategories1.Categories)

12. ViewState("dsCategories") = Me.dsCategories1

13. ViewState("Position") = 0

14. End If

Me.DataBind()

This code is very similar to the procedure we used in Chapter 12 to store the

DataSet with the page, but we’re also storing the value of the new variable,

Position.

15. Select (Base Class Events) in the Control Name combo box, and

then select DataBinding in the Method Name combo box.

Visual Studio adds the event handler to the code.

16. Add the following lines to the procedure:

17. Dim dr As DataRow

18.

19. dr = Me.dsCategories1.Categories.DefaultView(Position).Row

20. Me.txtCatID.Text = DataBinder.Eval(dr, "CategoryID")

21. Me.txtName.Text = DataBinder.Eval(dr, "CategoryName")

Me.txtDescription.Text = DataBinder.Eval(dr, "Description")

The first two lines declare a local variable, dr, and set it to the row of the

Categories table specified by the Position variable. The next three bind the

value of columns in the row to the Text properties of the appropriate controls.

22. Select btnNext in the Control Name combo box, and then select

Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

23. Add the following lines to the procedure:

24. If Me.Position < Me.dsCategories1.Categories.Count - 1 Then

25. Me.Position += 1

26. ViewState("Position") = Me.Position

27. DataBind()

End If

The code checks that Position has not yet reached the last row of the Categories table, and if so, it increments the value and stores it in the ViewState.

28. Select btnPrevious in the Control Name combo box, and then select

Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

29. Add the following lines to the procedure:

30. If Me.Position > 0 Then

31. Me.Position -= 1

32. ViewState("Position") = Me.Position

33. DataBind()

End If

34. Press F5.

Visual Studio displays the page in the default browser.


35. Click the Next button.

The page displays the next category.

36. Click the Previous button.

The page displays the previous category.

37. Close the browser.

38. Close the code editor and the form designer.

Visual C# .NET

1. In the Solution Explorer, right-click Position.aspx, and then select Set

as Start Page.

2. Double-click the file.

Visual Studio displays the page in the form designer.


3. Press F7 to display the code editor.

4. Add the following global declaration to the top of the class:

public int pagePosition;

5. Add the following lines to the Page_Load method:

6. if (this.IsPostBack == true)

7. {

8. this.dsCategories1 = (dsCategories) ViewState["dsCategories"];

9. this.pagePosition = (int) ViewState["pagePosition"];

10. }

11. else

12. {

13. this.daCategories.Fill(this.dsCategories1.Categories);

14. ViewState["dsCategories"] = this.dsCategories1;

15. ViewState["pagePosition"] = 0;

16. }

this.DataBind();

This code is very similar to the procedure we used in Chapter 12 to store the

DataSet with the page, but we’re also storing the value of the new variable,

pagePosition.

17. In the Properties Window of the form designer, select Position from the controls combo box. Click the Events button, and then double-click the DataBinding event.

Visual Studio adds the event handler to the code.

18. Add the following lines to the Position_DataBinding procedure:

19. DataRow dr;

20.

21. dr = this.dsCategories1.Categories.DefaultView[pagePosition].Row;

22. this.txtCatID.Text = DataBinder.Eval(dr, "CategoryID").ToString();

23. this.txtName.Text = (string) DataBinder.Eval(dr, "CategoryName");

this.txtDescription.Text = (string) DataBinder.Eval(dr, "Description");


The first two lines declare a local variable, dr, and set it to the row of the Categories table specified by the pagePosition variable. The next three bind the value of columns in the row to the Text properties of the appropriate controls.

24. In the form designer, double-click the Next button.

Visual Studio adds the event handler to the code.

25. Add the following lines to the procedure:

26. if (this.pagePosition < this.dsCategories1.Categories.Count - 1)

27. {

28. this.pagePosition++;

29. ViewState["pagePosition"] = this.pagePosition;

30. DataBind();

}

The code checks that pagePosition has not yet reached the last row of the Categories table, and if so, it increments the value and stores it in the ViewState.

31. In the Form Designer, double-click the Previous button.

Visual Studio adds the event handler to the code.

32. Add the following lines to the procedure:

33. if (this.pagePosition > 0)

34. {

35. this.pagePosition--;

36. ViewState["pagePosition"] = this.pagePosition;

37. DataBind();

}

38. Press F5.

Visual Studio displays the page in the default browser.

39. Click the Next button.

The page displays the next category.


40. Click the Previous button.

The page displays the previous category.

41. Close the browser.

42. Close the code editor and the form designer.

Web Form Validation

The .NET Framework supports a number of validation controls which can be used to

validate data. The Web form validation controls, which are shown in Table 13-5, are

more sophisticated than the Windows Forms ErrorProvider control, which only displays

error messages. The Web form controls perform the validation checks and display any

resulting error messages.

Table 13-5: Validation Controls

RequiredFieldValidator: Ensures that the input control contains a value.

CompareValidator: Compares the contents of the input control to a constant value or the contents of another control.

RangeValidator: Checks that the contents of the input control are between the specified upper and lower bounds, which may be characters, numbers, or dates.

RegularExpressionValidator: Checks that the contents of the input control match the pattern specified by a regular expression.

CustomValidator: Checks the contents of the input control by using custom logic.

Each validation control checks for a single condition in a single control on the page,

which is known as the input control. To check for multiple conditions, multiple validation

controls can be assigned to a single input control. This is frequently the case because all

of the controls except RequiredFieldValidator consider a blank field to be valid.

The conditions specified by the validation controls assigned to a given input control will

be combined with a logical AND—all of the conditions must be met or the control will be

considered invalid. If you need to combine validation conditions with a logical OR, you

can use a CustomValidator control to manually check the value.
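For instance, a CustomValidator's ServerValidate handler can combine several conditions itself. The sketch below assumes a hypothetical rule that the value must be one of two shipping methods; the control and page names are illustrative:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Sketch of a ServerValidate handler expressing an OR condition that no single
// built-in validator can check on its own.
public class ShipMethodPage : Page
{
    protected void cvShipMethod_ServerValidate(object source, ServerValidateEventArgs args)
    {
        string value = args.Value;
        args.IsValid = (value == "USPS") || (value == "2nd Day Air");
    }
}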

If the browser supports DHTML, validation will first take place on the client, and the form

will not be submitted until all conditions are met. Whether or not validation has occurred

on the client, validation will always occur on the server when a Click event is processed.

Additionally, you can manually call a control’s Validate method to validate its contents

from code.

When the page is validated, the contents of the input control are passed to the validation control (or controls), which tests the contents and sets its IsValid property accordingly. If any validation control is invalid, the Page object's IsValid property is also set to false. You can check for these conditions in code and take whatever action is required.
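As a minimal sketch (button and page names are assumptions), a Submit handler might refuse to save when any validator has failed:

using System;
using System.Web.UI;

// Sketch of a Click handler that checks the page-level validation result.
public class EntryPage : Page
{
    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        // Server-side validation has already run for this postback; Page.IsValid
        // combines the IsValid flags of every validation control on the page.
        if (!Page.IsValid)
        {
            return;  // leave the error messages visible and do nothing else
        }

        // ...save the submitted data here...
    }
}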

Add a RequiredFieldValidator Control to a Form

1. In the Solution Explorer, right-click Validation.aspx, and then on the

context menu, select Set as Start Page.

2. In the Solution Explorer, double-click the Validation.aspx.

Visual Studio displays the page in the form designer.


3. Drag a RequiredFieldValidator control from the Web Forms tab of the

Toolbox to the right of the CategoryName TextBox control.

4. In the Properties window, set the RequiredFieldValidator control’s

ErrorMessage property to Name cannot be left blank, and then set its

ControlToValidate property to txtName.

5. Press F5.

Visual Studio displays the page in the default browser.

6. Click Submit.

The validation control displays the error message next to the text box.


7. Close the browser.

Chapter 13 Quick Reference

To display data in a DataGrid control: Set the DataSource and optionally set the DataKeyField in the Property Builder.

To control the columns displayed in a data-bound DataGrid: In the Columns section of the Property Builder, cancel the selection of Create columns automatically at run time, and then select the columns to be displayed.

To implement sorting in a DataGrid control: Bind the DataGrid to a DataView, select Allow Sorting in the Property Builder, and then build an event handler for the SortCommand event:

Me.myDataView.Sort = e.SortExpression

DataBind()

To display data in a DataList control: Set the DataSource and DataMember properties of the DataList, and then specify the data binding for each control in the DataList control's templates.

To implement paging in a DataGrid control: Select a paging option from the Paging pane of the DataGrid control's Property Builder.

Part V: ADO.NET and XML

Chapter 14: Using the XML Designer

Chapter 15: Reading and Writing XML

Chapter 16: Using ADO in the .NET Framework

Chapter 14: Using the XML Designer

Overview

In this chapter, you’ll learn how to:

§ Create an XML schema

§ Create a Typed DataSet

§ Generate a Typed DataSet from an XML schema


§ Add DataTables to an XML DataSet schema from an existing data source

§ Create DataTables in an XML DataSet schema

§ Add keys to an XML schema

§ Add relations to an XML schema

§ Create elements

§ Create simple types

§ Create complex types

§ Create attributes

In this chapter, we’ll look at the XML Designer, the Microsoft Visual Studio .NET tool that

supports the creation of XML schemas and Microsoft ADO.NET Typed DataSets.

Understanding XML Schemas

An XML schema is a document that defines the structure of XML data. Much like a

database schema, an XML schema can also be used to validate the contents and

structure of an XML file.

An XML schema is defined using the XML Schema Definition language (XSD). XSD is

similar in structure to HTML, but whereas HTML defines the layout of a document, XSD

defines the structure and content of the data.

Note XML schemas in the Microsoft .NET Framework conform to the

World Wide Web Consortium (W3C) recommendation, as defined

at http://www.w3.org/2001/XMLSchema. Additional schema

elements that are used to support .NET Framework objects, such

as DataSets and DataRelations, conform to the schema defined at

urn:schemas-microsoft-com:xml-msdata. (Such extensions

conform to the W3C recommendation and will simply be ignored

by XML parsers that do not support them.)

XML schemas are defined in terms of elements and attributes. Elements and attributes

are very similar, and can often be used interchangeably, although there are some

distinctions:

§ Elements can contain other items; attributes are always atomic.

§ Elements can occur multiple times in the data; attributes can occur only once.

§ By using the <xs:sequence> tag, a schema can specify that elements must

occur in the order they are specified; attributes can occur in any order.

§ Only elements can be nested within <xs:choice> tags, which specify mutually

exclusive elements (that is, one and only one of the elements can occur).

§ Attributes are restricted to built-in data types; elements can be defined using

user-defined types.

By convention, elements are used for raw data, while attributes are used for metadata;

but you can use whichever best suits your purposes.

Both elements and attributes define items in terms of a type, which defines the data that

the element or attribute can validly contain. XML schemas support simple types, which

are atomic values such as string or Boolean, and complex types, which are composed of

other elements and attributes in any combination. We’ll examine types in more detail

later in this chapter.

Optionally, elements and attributes can define a name that identifies the element that is

being defined. XML element names cannot begin with a number or the letters XML, nor

can they contain spaces. Note that XML is case-sensitive, so the names MyName and

myName are considered distinct.

XML schemas are stored in text files with an XSD extension (XSD schema files). Visual

Studio provides a visual user interface for creating XML schemas, the XML Designer.

The XML tab of the XML Designer allows you to examine the contents of the XSD file

directly, while the DataSet or Schema tab provides a visual interface. Like the form

designer, the XML Designer is closely related to the XSD schema file—changes that you

make to one are reflected in the other.


Creating XML Schema and Typed DataSets

Like HTML and other markup languages descended from SGML, XML schema files are

created using tags that are delimited by angle brackets:

<tag> some text </tag>

XML schema files begin with a tag that identifies the version of XML that is being used.

.NET Framework XML schema files follow this with an <xs:schema> tag whose

targetNamespace attribute defines the namespace of all the components in this schema

and any included schemas. The <xs:schema> tag also includes references to two

namespaces—the W3C XML schema definition and the Microsoft extensions.

This standard header is created automatically by the XML Designer. If you create an

XML schema in a text editor or some other design tool, the heading has the following

structure:

<?xml version="1.0" encoding="utf-8"?>

<xs:schema targetNamespace="http://tempuri.org/XMLSchema1.xsd"

xmlns:xs="http://www.w3.org/2001/XMLSchema"

xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"

>

The basic structure of the XML schema file created by the XML Schema Designer is:

<?xml version="1.0" encoding="utf-8"?>

<xs:schema id="myDataSet" ...>

<xs:element name="myDataSet" msdata:IsDataSet="true">

<xs:complexType>

<xs:choice maxOccurs="unbounded">

</xs:choice>

</xs:complexType>

</xs:element>

</xs:schema>

The first two lines are the schema heading. (The xs:schema tag contains attributes that

aren’t shown.) The next tag, <xs:element>, represents the DataSet itself. It has two

attributes: name and msdata:IsDataSet. The first attribute specifies the name of the

DataSet; the second is a Microsoft schema extension that identifies the element as a

DataSet.

The next set of tags creates a complexType. ComplexTypes, which we’ll examine in

detail in this chapter, are elements that can contain other elements and attributes. Note

that this complexType element is not assigned a name—it’s used only for structural

purposes and not referred to elsewhere in the schema.

The final set of tags creates a choice group. Groups, which we’ll also examine later in

this chapter, define how individual elements can validly occur in the XML data. The

choice group creates a mutually exclusive set. The maxOccurs="unbounded" attribute

specifies that the data can occur any number of times within the group, but because it is

a choice group, all of the data must be the same type.

The DataTables are defined as elements within the choice group. We’ll examine their

structure later in the chapter.

Visual Studio supports the creation of XML schemas and Typed DataSets interactively.

Both types of items use the XML Designer, but an XML schema will output only an XML

schema (XSD), while the DataSet will automatically generate both the schema and a

class file defining the Typed DataSet.


Creating Schemas

Like any other project component, XML schemas are added to a project by using the

Add New Item dialog box.

Add a Schema to the XML Designer

1. In Visual Studio .NET, open the SchemaDesigner project from the

Start page or the File menu.

2. On the Project menu, choose Add New Item.

Visual Studio displays the Add New Item dialog box.

3. Select XML Schema in the Templates pane, and then click Open.

Visual Studio adds an XML schema named XMLSchema1 to the project, and

then opens the XML Designer.

4. Close the XML Designer.

Creating DataSets

In previous chapters, we have seen how to generate a Typed DataSet based on

DataAdapters that have been added to the project. It’s also possible to add a DataSet to

a project and configure it manually, using the same technique we used in the previous

exercise to add an XML schema to the project.

Add a DataSet to the XML Designer

1. On the Project menu, choose Add New Item.

Visual Studio displays the Add New Item dialog box.


2. Select DataSet in the Templates pane, and then click Open.

Visual Studio adds a Typed DataSet named Dataset1 to the project, and

opens the XML Designer.

3. Select the XML tab of the XML Designer.

Visual Studio displays the XML schema source code.

4. Close the XML Designer.

When you specify a DataSet in the Add New Item dialog box, Visual Studio automatically

generates a class file from the XML schema to define the DataSet. If you create only an

XML schema, or if you import an XML schema from another source, the Typed DataSet


won’t automatically be added; but you can create it by using the Generate DataSet

command on the XML Designer’s Schema menu.

Generate a DataSet from a Schema

1. In the Solution Explorer, double-click XMLSchema1.xsd.

Visual Studio opens the (blank) schema in the XML Designer.

2. On the Schema menu, choose Generate Dataset.

Visual Studio creates a Typed DataSet class based on the XML schema.

3. Expand XMLSchema1 to display the class file in the Solution Explorer.

You may need to click the Show All Files button on the Solution

Explorer toolbar.

4. Close the form designer.

Understanding Schema Properties

The XML Designer exposes two sets of properties for schemas: DataSet properties,

which are available only for DataSet schemas, and miscellaneous properties that are

defined by the W3C recommendation.

The properties exposed by the Microsoft schema extensions are shown in Table 14-1.

The IsDataSet property identifies this particular element as the root of the Typed DataSet

definition. The XML Designer will generate an error if more than one element has

IsDataSet set to true.

The CaseSensitive, dataSetName, and Locale properties map directly to their DataSet

counterparts, while the key property is used internally by the .NET Framework.

Table 14-1: Microsoft Schema Extension Properties

CaseSensitive: Controls whether the DataSet is case-sensitive. Note that this affects only the DataSet; the XML schema is always case-sensitive.

dataSetName: The name of the Typed DataSet based on the XML schema.

IsDataSet: Defines the element as the root of a DataSet.

key: Set of unique constraints defined on the DataSet.

Locale: Locale information used to compare strings in the DataSet.

The Misc section of the Properties window exposes the attributes of the schema element defined by the W3C recommendation, as shown in Table 14-2. The id, targetNamespace, and version properties set the values of the corresponding attributes for the schema, while the remaining properties define the behavior of other schema components.

Table 14-2: XML Schema Properties

attributeFormDefault: Determines whether attribute names from the target namespace must be namespace-qualified.

blockDefault: Sets the default value for the block attribute of elements and complex types in the schema namespace.

elementFormDefault: Determines whether element names from the target namespace must be namespace-qualified.

finalDefault: Sets the default value for the final attribute of elements and complex types in the schema namespace.

id: The value of the element's ID attribute.

import: Collection of imported schemas.

include: Collection of included schemas.

NameSpace: Collection of namespaces declared in the schema.

targetNamespace: The target namespace of the schema.

version: The value of the element's version attribute.

The attributeFormDefault and elementFormDefault properties determine whether

attribute and element names, respectively, must be preceded with a namespace

identifier and a colon (for example, name="myDS:myName" as opposed to

name="myName").


The blockDefault and finalDefault properties define the default values for the block and

final attributes of elements within the namespace. We’ll examine these attributes in the

following section.

Finally the import, include, and NameSpace properties contain collections of

namespaces that are imported, included, and declared in the schema, respectively.

Examine the Namespaces Declared in an XML Schema

1. In the Solution Explorer, double-click Dataset1.xsd.

Visual Studio opens the schema in the XML Designer.

2. In the Properties window, select Namespace, and then click the

Ellipsis button.

The XML Designer displays the XMLNamesSpace Collection Editor.

3. In the Members pane, select xs.

The XMLNamesSpace Collection Editor displays the NameSpace property

and qualifier of the W3C XSD recommendation.

Working with DataTables in the XML Designer

In the previous section, we examined the structure of tags within a DataSet schema.

Remember that we said that DataTables are defined as elements within a choice group.

The DataTable itself has the following nominal structure:

<xs:element name="myTable">

<xs:complexType>

<xs:sequence>

<xs:element name="Column1" type="xs:string" />

<xs:element name="Column2" type="xs:boolean" />

</xs:sequence>

</xs:complexType>

</xs:element>

The structure is similar to the nominal structure of a schema: an element is created and

assigned the name of the table. Within the element is an unnamed complex type, and

within that is an XML group, and within that are the column elements. The XML group

used for a schema is a choice, which makes element types mutually exclusive. The

DataTable structure uses a sequence group, which ensures that the nested elements will

be in the order specified.
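For comparison, here is a minimal ADO.NET sketch (the names are illustrative, not from the book) that builds an equivalent two-column table in code and writes out the kind of schema the designer describes:

using System;
using System.Data;

class TableSchemaDemo
{
    static void Main()
    {
        // Build a DataSet containing one table equivalent to the nominal structure above.
        DataSet ds = new DataSet("myDataSet");
        DataTable table = ds.Tables.Add("myTable");
        table.Columns.Add("Column1", typeof(string));
        table.Columns.Add("Column2", typeof(bool));

        // WriteXmlSchema emits the XSD, including the msdata extension attributes.
        ds.WriteXmlSchema(Console.Out);
    }
}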


Adding DataTables to the XML Designer

Visual Studio supports a number of methods for creating DataTables in the XML

Designer. We’ve been using one of them, generating a DataSet based on DataAdapters

that have been added to a form, for several chapters.

You can also drag an existing table, view, or stored procedure from the Server Explorer

to the XML Designer Schema tab, or create a DataTable from scratch. As we'll see in

Chapter 15, you can also infer schemas from XML data at run time.

Add a Table or View to a Schema

1. In the XML Designer, open the Dataset1 schema (if necessary), and

then select the DataSet tab.

2. In the Server Explorer, expand the connection to the SQL Northwind

database, and then expand the Tables node.

3. Select the Categories table, and drag it onto the XML Designer.

Visual Studio adds the table to the schema.

4. Select the XML tab of the XML Designer.

Visual Studio displays the XML schema source code.

Create a Table from Scratch

1. In the XML Designer, select the DataSet tab.


2. In the XML Schema section of the Toolbox, drag an Element onto the

design surface.

Visual Studio adds a new Element to the schema.

3. The element name, element1, is selected on the design surface.

Change it to Products.

4. Click the first column of the first row of the element, and then expand

the drop-down list.

5. Select element from the drop-down list.

The XML Designer adds a nested element to the Products element.


6. Change the element name to ProductID.

Creating Keys

The XML Designer supports three different tags that pertain to entity and referential

integrity: primary keys, keyrefs, and unique keys. Primary keys guarantee uniqueness

within a DataSet. A <keyref> tag is essentially a foreign key reference and is used to

implement a one-to-many relationship. Unique keys guarantee uniqueness, but they are

not typically used for referential integrity.

Creating Primary Keys

The W3C recommendation supports the <key> tag, which specifies that the values of the

specified element must be unique, always present, and not null. The Microsoft schema

extensions add an attribute to this tag, msdata:PrimaryKey, which identifies the key as

being the primary key for the DataTable.

The scope of a key is the scope of the element that contains it. In a .NET Framework

DataSet schema, keys are defined at the DataSet level, which means that the key needs

to be unique, not just within a DataTable, but within the DataSet as a whole.

Primary keys are added to a DataTable by using the Edit Key dialog box, which is displayed if you drag a key onto an element or choose Add Key from the Schema menu or an element's context menu. The Edit Key dialog box allows you to specify multiple fields for a key, if necessary, and also specify whether the key should accept null values or be designated as the primary key for the DataTable.


Add a Primary Key to a DataTable

1. On the Schema menu, point to Add, and then choose New Key.

Visual Studio displays the Edit Key dialog box.

2. Change the name of the key to ProductsPK, and then select the

Dataset Primary Key check box.


3. Click OK.

The XML Designer adds the primary key to the Products element.

4. Select the XML tab of the XML Designer.

The XML Designer displays the code for the new key.

Creating Unique Keys

Primary keys are, as we’ve seen, required elements that must be unique within the

DataSet and cannot be null. There can be only one primary key defined for a DataTable.

Unique keys differ from primary keys in that they can allow nulls, and you can define

multiple unique keys for any given DataTable.

Unique keys are added by using the same Edit Key dialog box that is used to add

primary keys.
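In ADO.NET terms, the designer's primary and unique keys map to the DataTable.PrimaryKey property and to UniqueConstraint objects. A rough sketch with hypothetical columns:

using System;
using System.Data;

class KeyDemo
{
    static void Main()
    {
        // Hypothetical table used to show how the designer's keys map to ADO.NET.
        DataTable products = new DataTable("Products");
        DataColumn id = products.Columns.Add("ProductID", typeof(int));
        DataColumn name = products.Columns.Add("ProductName", typeof(string));

        // Primary key: required and unique; only one per table (msdata:PrimaryKey).
        products.PrimaryKey = new DataColumn[] { id };

        // Unique key: guarantees uniqueness only; several may be defined per table.
        products.Constraints.Add(new UniqueConstraint("ProductNameUC", name));

        Console.WriteLine(products.Constraints.Count);  // 2: the primary key plus the unique key
    }
}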

Add a Unique Key to a DataTable

1. Select the DataSet tab of the XML Designer.

2. Drag a key tag from the XML Schema tab of the Toolbox onto the

Categories element.

The XML Designer displays the Edit Key dialog box.

3. Change the name of the key to CategoryName.

4. Select CategoryID in the Fields pane, expand the drop-down list, and

then select the CategoryName field.


5. Click OK.

The XML Designer adds the new key to the Categories element.

6. Select the XML tab of the XML Designer.

Visual Studio displays the XML schema code.


Creating Relations

KeyRefs are implemented as Relations in the XML Designer. A Relation translates

directly to a DataRelation within a DataSet. Relations are added to a DataSet by using

the Edit Relation dialog box, which, like the Edit Key dialog box, can be displayed by

dragging a Relation from the Toolbox or by choosing New Relation on the Schema

menu.

In addition to the basic relationship information, the Edit Relation dialog box allows you

the option of creating a foreign key constraint only. If you select this option, the DataSet

class produced from the XML schema will be slightly more efficient, but you will not be

able to use the GetChildRows and GetParentRows methods to reference related data.

In addition, the Edit Relation dialog box allows you to specify three referential integrity

rules: Update, Delete, and Accept/Reject. These rules determine what happens when

primary key rows are updated or deleted, or when changes are accepted or rejected.

The possible values for these rules are shown in Table 14-3. The Accept/Reject rule

supports only Cascade and None.

Table 14-3: Referential Integrity Rules

Cascade: Deletes or updates related rows.

SetNull: Sets the foreign key values in related rows to null.

SetDefault: Sets the foreign key values in related rows to their default values.

None: Takes no action on related rows.
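As a rough sketch of how these settings surface in code (the Categories and Products tables here are hypothetical), the designer's Relation becomes a DataRelation whose ForeignKeyConstraint carries the rules above:

using System.Data;

class RelationDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("myDataSet");
        DataTable categories = ds.Tables.Add("Categories");
        DataColumn catId = categories.Columns.Add("CategoryID", typeof(int));
        DataTable products = ds.Tables.Add("Products");
        DataColumn prodCatId = products.Columns.Add("CategoryID", typeof(int));

        // Relations.Add creates both the DataRelation and a matching
        // ForeignKeyConstraint on the child table.
        DataRelation rel = ds.Relations.Add("CategoryProducts", catId, prodCatId);

        // The Update, Delete, and Accept/Reject rules from Table 14-3.
        ForeignKeyConstraint fk = rel.ChildKeyConstraint;
        fk.UpdateRule = Rule.Cascade;
        fk.DeleteRule = Rule.SetNull;
        fk.AcceptRejectRule = AcceptRejectRule.None;
    }
}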

Add a Relation to a DataSet

1. Select the DataSet tab of the XML Designer.


2. Select the Categories element.

3. On the Schema menu, point to Add, and then choose New Relation.

The XML Designer displays the Edit Relation dialog box.

4. Change the Relation name to CategoryProducts.

5. Choose Products in the Child Element combo box.


6. Click OK.

Visual Studio adds the Relation to the XML Schema Designer.

Working with Elements

Throughout this chapter, we’ve been talking about elements, and even creating them,

without examining them in any detail. We'll correct that now. An element in an XML DataSet schema is a description of an item of data.

At its simplest, an element consists only of the <xs:element> tag:

<xs:element />

However, most elements, unless they’re being used only as containers, contain a name

and type attribute:

<xs:element name="productID" type="xs:integer" />


Elements may also contain other tags. (Help states that ‘elements can contain other

elements,’ but that’s not strictly true. Specifically, ‘other elements’ doesn’t refer to

element tags.) The tags that can be nested within an element tag are:

§ <xs:annotation>

§ <xs:complexType>

§ <xs:key>

§ <xs:keyref>

§ <xs:simpleType>

§ <xs:unique>

As we saw in the previous section, the <xs:key>, <xs:keyref>, and <xs:unique> tags are

used to define constraints. The <xs:annotation> tag, as might be expected, is used to

add information to be used by applications or displayed to users.

The <xs:complexType> type is a container tag, used to group other tags. We’ve seen it

used in the structure of both schemas and DataTables in the XML Designer. The

<xs:simpleType> tag defines a data type by specifying valid values, based on other

types. We’ll examine both of these tags in detail later in this chapter.

Element Properties

As usual, the XML Designer exposes the attributes of the <xs:element> tag as

properties. The attributes exposed by the W3C recommendation are shown in Table 14-4.

Table 14-4: XML Schema Element Properties

abstract: Indicates whether an instance of the element can appear in a document.

block: Prevents elements of the specified type of derivation from being used in place of the element.

default: The default value of the element.

final: The type of derivation.

fixed: The predetermined, unchangeable value of the element.

form: The form of the element.

id: The ID of the element.

key: The collection of unique keys defined for this element.

maxOccurs: The maximum number of times the element can occur within the containing element.

minOccurs: The minimum number of times the element can occur within the containing element.

name: The name of the element.

nillable: Determines whether an explicit nil can be assigned to the element.

ref: The name of an element declared in the namespace.

substitutionGroup: The name of the element for which this element can be substituted.

type: The data type of the element.

The abstract, block, final, form, ref, and substitutionGroup properties pertain to the

derivation of elements from other elements. Their use is outside the scope of this book,

but they are extensively documented in online Help and other XML documentation

sources.


The name and id properties are used to identify the element. The ID attribute must be

unique within the XML schema. The name property is also shown in the visual

representation of the element.

The remaining properties define the value of the element. Of these, the most important

property is type, which defines the data type of the element. The type of an element can

be either a built-in XML type or a simple or complex type defined elsewhere in the XML

schema. Like the name property, the type property is shown in the visual display of the

element.

The default property, not surprisingly, specifies a default value if none is specified, while

the fixed property specifies a value that the element must always contain. Both of these

properties must be of the data type specified by the type attribute, and they are mutually

exclusive. The nillable property indicates whether the value can be set to a null value or

omitted.

Finally the maxOccurs and minOccurs properties specify the maximum and minimum

number of times the element can occur, respectively. The maxOccurs property can be

set to either a non-negative integer or the string ‘unbounded,’ which indicates that there

is no limit to the number of occurrences.

In addition to the element attributes defined by the W3C recommendation, the Microsoft

schema extensions expose the properties shown in Table 14-5. All of these correspond directly to their counterparts in the DataColumn object (a short code sketch follows the table).

Table 14-5: Microsoft Schema Extension Element Properties

AutoIncrement: Determines whether the value automatically increments when a row is added.

AutoIncrementSeed: Sets the starting value for an AutoIncrement element.

AutoIncrementStep: Determines the step by which AutoIncrement elements are increased.

Caption: Specifies the display name for an element.

Expression: A DataColumn expression for the element.

ReadOnly: Determines whether element values can be modified after the row has been added to the DataTable.
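A brief sketch (the table and column names are illustrative) of the same settings applied directly to DataColumn objects:

using System.Data;

class ColumnExtensionDemo
{
    static void Main()
    {
        DataTable orders = new DataTable("Orders");

        // Each msdata extension maps to a like-named DataColumn property.
        DataColumn id = orders.Columns.Add("OrderID", typeof(int));
        id.AutoIncrement = true;      // msdata:AutoIncrement
        id.AutoIncrementSeed = 1;     // msdata:AutoIncrementSeed
        id.AutoIncrementStep = 1;     // msdata:AutoIncrementStep
        id.Caption = "Order Number";  // msdata:Caption
        id.ReadOnly = true;           // msdata:ReadOnly

        orders.Columns.Add("Quantity", typeof(int));
        orders.Columns.Add("Price", typeof(decimal));
        DataColumn total = orders.Columns.Add("Total", typeof(decimal));
        total.Expression = "Quantity * Price";  // msdata:Expression
    }
}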

Define the type Property of an Element

1. Select the ProductID nested element in the XML Designer, expand the

type drop-down list, and then select int.

2. In the Properties window, select the AutoIncrement property, expand

the drop-down list, and then choose true.

3. Save and close DataSet1.

Working with Types

As we’ve seen, the type property of an element defines the data type of an element or

attribute. XML schemas support two kinds of data types: simple and complex. A simple

type resolves to an atomic value, while a complex type contains other complex types,

elements, or attributes.


The W3C recommendation allows XML schemas to define user-defined types. As we’ve

seen, the nominal structure of a .NET Framework DataSet XML schema uses user-defined complex types to define the columns of a table.

The XML Designer supports the creation of user-defined types as well. User-defined

types are useful for encapsulating business rules. For example, if a ShipMethod element

is limited to the values USPS or 2nd Day Air, a user-defined enumeration can be used to

restrict the values rather than adding another DataTable to the schema.

Simple Types

The XML schema recommendation supports two different kinds of simple types (primitive

and derived) and supports the creation of new, user-defined simple types. Primitive types

are the fundamental types. Examples of primi-tive types include string, float, and

Boolean. Derived types are defined by limiting the valid range of values for a primitive

type. An example of a built-in derived type is positiveInteger, which is an integer that

allows only values greater than zero.

Like derived types, user-defined simple types restrict the values of existing simple types

by limiting the valid range of values. User-defined simple types can be derived from base

types by using any of the methods shown in Table 14-6.

Table 14-6: Simple Type Derivation Methods

restriction: Restricts the range of values to a subset of those allowed by the base type.

list: Defines a list of values of the base type that are valid for the type.

union: Defines a type by combining the values of two or more other simple types.

Of the available derivation methods, restriction is the most common. The valid range of

values of a simple type is restricted by applying facets to the type. A facet is much like an

attribute, but it specifically limits the valid range of values for a user-defined type. Table

14-7 describes the various facets available for restriction of values.

Table 14-7: Data Type Facets

enumeration: Constrains data to the specified set of values.

fractionDigits: Specifies the maximum number of decimal digits.

length: Specifies the nonNegativeInteger length of the value. The exact meaning is determined by the data type.

maxExclusive: Specifies the exclusive upper-bound value; all values must be less than this value.

maxInclusive: Specifies the inclusive upper-bound value; all values must be equal to or less than this value.

maxLength: Specifies the nonNegativeInteger maximum length of the value. The exact meaning is determined by the data type.

minExclusive: Specifies the exclusive lower-bound value; all values must be greater than this value.

minInclusive: Specifies the inclusive lower-bound value; all values must be equal to or greater than this value.

minLength: Specifies the nonNegativeInteger minimum length of the value. The exact meaning is determined by the data type.

pattern: A regular expression specifying a pattern that the value must match.

totalDigits: Specifies the nonNegativeInteger maximum number of decimal digits for the value.

whiteSpace: Specifies how white space in the value is to be handled.

Create a simpleType Using the length Facet

1. In the Solution Explorer, double-click XMLSchema1.

Visual Studio opens the schema in the XML Designer.

2. Drag a simpleType control from the XML Schema tab of the Toolbox

onto the design surface.

The XML Designer adds a simple type to the schema.

3. Change the name of the type to IDString.

4. Click the first column of the first row of the type, and then expand the

drop-down list.

5. From the drop-down list, select facet.

6. From the drop-down list in the second column, select length.


7. In the third column, type 2.

The XML Designer creates a user-defined simpleType that limits the length of

a string to two characters.

8. Select the XML tab of the XML Designer.

The XML Schema Designer displays the XML code for the simpleType

definition.

Complex Types

Complex types are user-defined types that contain elements, attributes, and group

declarations. The elements of a complex type can be other complex types, allowing

infinite nesting.

We’ve already seen unnamed complex types used to define the columns of an ADO.NET

DataTable. A DataTable uses a sequence group to specify that the elements contained

within the group must occur in a particular order. The W3C XML schema

recommendation supports two other types of element groups, choice and all, as shown

in Table 14-8.


Table 14-8: Element Group Types

sequence: Elements must occur in the order specified.

choice: Only one of the elements specified can occur.

all: Either all of the elements specified must occur, or none of them can occur.

Create a complexType Containing a Choice Group

1. Drag a complexType control from the XML Schema tab of the Toolbox

onto the design surface.

The XML Designer adds a complex type to the schema.

2. Change the name of the type to ChoiceGroup.

3. Click the first column of the first row of the type, and then expand the

drop-down list.


4. Select choice from the drop-down list.

The XML Designer adds a choice group to the type.

5. Add two elements, Value1 and Value2, to the choice group.

6. Select the XML tab of the XML Designer.

The XML Designer displays the XML code for the complex type.


Working with Attributes

Attributes are similar to elements, with some restrictions. Attributes cannot contain other

tags, they cannot be used to derive simple types, and they cannot be included in element

groups. They do, however, require slightly less storage than elements, and for that

reason, they can be useful if you’re working outside the context of ADO.NET objects.

Attribute Properties

Attributes expose the same extensions to the W3C recommendation as elements. The

W3C properties exposed by attributes are shown in Table 14-9. The attribute property

set is a subset of the properties exposed by the element. Because attributes cannot be

used to derive types, the properties that control derivation are not exposed.

Table 14-9: Attribute Properties

default: The default value of the attribute.

fixed: The predetermined, unchangeable value of the attribute.

form: The form of the attribute, either qualified or unqualified.

id: The ID of the attribute; must be unique within the document.

Name: The name (NCName) of the attribute.

Ref: The name of an attribute declared in the namespace.

Type: The data type of the attribute.

Use: Specifies how the attribute is used.

Attributes expose one property, use, that is not exposed by elements. The use property

determines how the attribute can be used when it is included in elements and complex


types. The use property can be assigned to one of three values: optional, prohibited, or

required.

The meanings of optional and required are self-evident. Prohibited is used to exclude the

attribute from user-defined types based on a complex type that includes the attribute.

Create an Attribute

1. Drag an Attribute control from the XML Schema tab of the Toolbox

onto the design surface.

The XML Designer adds an attribute to the schema.

2. Change the name of the attribute to companyName.

3. In the Properties window, set the fixed property to XML, Inc.

The attribute, which will always have the value XML, Inc., is added to the

schema.

4. Select the XML tab of the XML Designer.

Visual Studio displays the XML source code for the attribute.

Chapter 14 Quick Reference

To create an XML schema: Choose XML Schema in the Add New Item dialog box.

To create a Typed DataSet: Choose DataSet in the Add New Item dialog box.

To generate a Typed DataSet from an XML schema: Choose Generate DataSet on the Schema menu of the XML Designer.

To add DataTables from an existing data source: Drag the table, view, or stored procedure from the Server Explorer to the XML Designer.

To create DataTables: Add an element to the XML Designer, and create columns as nested elements.

To add keys to an XML schema: Select the DataTable and then choose New Key on the Schema menu, or drag a Key control from the XML Schema tab of the Toolbox onto the element.

To add relations to an XML schema: Select the DataTable, and then choose New Relation on the Schema menu, or drag a Relation control from the XML Schema tab of the Toolbox onto the element.

To create elements: Drag an element control from the XML Schema tab of the Toolbox onto the design surface.

To create simple types: Drag a simpleType control from the XML Schema tab of the Toolbox onto the design surface.

To create complex types: Drag a complexType control from the XML Schema tab of the Toolbox onto the design surface.

To create attributes: Drag an attribute control from the XML Schema tab of the Toolbox onto the design surface.

Chapter 15: Reading and Writing XML

Overview

In this chapter, you’ll learn how to:

§ Retrieve an XML Schema from a DataSet

§ Create a DataSet Schema using ReadXmlSchema

§ Infer the Schema of an XML Document

§ Load XML Data using ReadXml

§ Create an XML Schema using WriteXmlSchema

§ Write Data to an XML Document

§ Create a synchronized XML View of a DataSet

In the previous chapter, we looked at the XML Schema Designer, the Microsoft Visual

Studio .NET tool that supports the creation of XML schemas and Typed DataSets. In this

chapter, we’ll look at the DataSet methods that support reading and writing data from an

XML data stream.

The Microsoft .NET Framework provides extensive support for manipulating XML, most

of which is outside the scope of this book. In this chapter, we’ll examine only the

interface between XML and Microsoft ADO.NET DataSets.


Understanding ADO.NET and XML

The .NET Framework provides a complete set of classes for manipulating XML

documents and data. The XmlReader and XmlWriter objects, and the classes that

descend from them, provide the ability to read and optionally validate XML. The

XmlDocument and XmlSchema objects and their related classes represent the XML

itself, while the XslTransform and XPathNavigator classes support XSL Transformations

(XSLT) and apply XML Path Language (XPath) queries, respectively.

In addition to providing the ability to manipulate XML data, the XML standard is

fundamental to data transfer and serialization in the .NET Framework. For the most part,

this happens behind the scenes, but we’ve already seen that ADO.NET Typed DataSets

are represented using XML schemas.

Additionally, the ADO.NET DataSet class provides direct support for reading and writing

XML data and schemas, and the XmlDataDocument provides the ability to synchronize

XML data and a relational ADO.NET DataSet, allowing you to manipulate a single set of

data using both XML and relational tools. We’ll explore these techniques in this chapter.

Using the DataSet XML Methods

As we’ve seen, the .NET Framework exposes a set of classes that allow you to

manipulate XML data directly. However, if you need to use relational operations such as

sorting, filtering, or retrieving related rows, the DataSet provides an easier mechanism.

Furthermore, the XML classes don’t support data binding, so if you intend to display the

data to users, you must use the DataSet XML methods.

Fortunately, the choice between treating any given set of data as an XML hierarchy or

relational DataSet isn’t mutually exclusive. As we’ll see later in this chapter, the

XmlDataDocument allows you to manipulate a single set of data by using either or both

sets of tools.

The GetXml and GetXmlSchema Methods

Perhaps the most straightforward of the XML methods supported by the DataSet are

GetXml and GetXmlSchema, which simply return the XML data or XSD schema as a

string value.
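A minimal console sketch (the table and data are illustrative, not the book's dsMaster DataSet) shows both calls:

using System;
using System.Data;

class GetXmlDemo
{
    static void Main()
    {
        // A small, hand-built DataSet stands in for the exercise's dsMaster1.
        DataSet ds = new DataSet("dsMaster");
        DataTable categories = ds.Tables.Add("Categories");
        categories.Columns.Add("CategoryID", typeof(int));
        categories.Columns.Add("CategoryName", typeof(string));
        categories.Rows.Add(new object[] { 1, "Beverages" });

        // GetXmlSchema returns the XSD as a string; GetXml returns the data as XML.
        Console.WriteLine(ds.GetXmlSchema());
        Console.WriteLine(ds.GetXml());
    }
}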

Retrieve a DataSet Schema Using GetXmlSchema

Visual Basic .NET

1. Open the XML project from the Start page or the File menu.

2. In the Solution Explorer, double-click GetXml.vb.

Visual Studio displays the form in the form designer.


3. Double-click Show Schema.

Visual Studio opens the code editor and adds the Click event handler.

4. Add the following code to the handler:

5. Dim xmlStr As String

6.

7. xmlStr = Me.dsMaster1.GetXmlSchema()

Me.tbResult.Text = xmlStr

8. Press F5 to run the application.

Visual Studio displays the application window.

9. Click GetXml.

Visual Studio displays the GetXml form.


10. Click Show Schema.

The application displays the DataSet schema in the text box.

11. Close the GetXml form and the application.

Visual C# .NET

1. Open the XML project from the Start page or the File menu.

2. In the Solution Explorer, double-click GetXml.cs.

Visual Studio displays the form in the form designer.


3. Double-click Show Schema.

Visual Studio opens the code editor and adds the Click event handler.

4. Add the following code to the handler:

5. string xmlStr;

6.

7. xmlStr = this.dsMaster1.GetXmlSchema();

this.tbResult.Text = xmlStr;

8. Press F5 to run the application.

Visual Studio displays the application window.

9. Click GetXml.

Visual Studio displays the GetXml form.


10. Click Show Schema.

The application displays the DataSet schema in the text box.

11. Close the GetXml form and the application.

Retrieve a DataSet’s Data Using GetXml

Visual Basic .NET

1. In the code editor, select btnData in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the handler:

3. Dim xmlStr As String


4.

5. xmlStr = Me.dsMaster1.GetXml

Me.tbResult.Text = xmlStr

6. Press F5 to run the application.

Visual Studio displays the application window.

7. Click GetXml.

Visual Studio displays the GetXml form.

8. Click Show Data.

Visual Studio displays the XML data in the text box.

9. Close the GetXml form and the application.

10. Close the GetXml form designer and code editor window.

Visual C# .NET

1. In the form designer, double-click Show Data.

Visual Studio displays the code editor window and adds the Click event

handler to the code.

2. Add the following code to the handler:

3. string xmlStr;

4.

5. xmlStr = this.dsMaster1.GetXml();

this.tbResult.Text = xmlStr;

6. Press F5 to run the application.

Visual Studio displays the application window.

7. Click GetXml.

Visual Studio displays the GetXml form.

8. Click Show Data.

Visual Studio displays the XML data in the text box.


9. Close the GetXml form and the application.

10. Close the GetXml form designer and code editor window.

The ReadXmlSchema Method

The DataSet’s ReadXmlSchema method loads a DataSet schema definition either from

the XSD schema definition or from XML. ReadXmlSchema supports four versions, as

shown in Table 15-1. You can pass the method a stream, a string identifying a file name,

a TextReader, or an XmlReader object.

Table 15-1: ReadXmlSchema Methods

ReadXmlSchema(stream): Reads an XML schema from the specified stream.

ReadXmlSchema(string): Reads an XML schema from the file specified in the string parameter.

ReadXmlSchema(TextReader): Reads an XML schema from the specified TextReader.

ReadXmlSchema(XmlReader): Reads an XML schema from the specified XmlReader.

ReadXmlSchema does not load any data; it loads only tables, columns, and constraints

(keys and relations). If the DataSet already contains schema information, new tables,

columns, and constraints will be added to the existing schema, as necessary. If an object

defined in the schema being read conflicts with the existing DataSet schema, the

ReadXmlSchema method will throw an exception.

Note If the ReadXmlSchema method is passed XML that does not

contain inline schema information, the method will infer the

schema according to the rules discussed in the following section.
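As a sketch, ReadXmlSchema can also load schema information from an in-memory string through the TextReader overload; the inline XSD below is illustrative only:

using System;
using System.Data;
using System.IO;

class ReadSchemaDemo
{
    static void Main()
    {
        // A deliberately tiny schema: one table with a single column.
        string xsd =
            "<xs:schema id=\"demo\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">" +
            "<xs:element name=\"Categories\">" +
            "<xs:complexType><xs:sequence>" +
            "<xs:element name=\"CategoryID\" type=\"xs:int\" />" +
            "</xs:sequence></xs:complexType>" +
            "</xs:element>" +
            "</xs:schema>";

        DataSet ds = new DataSet();
        ds.ReadXmlSchema(new StringReader(xsd));  // loads tables and columns, no data

        Console.WriteLine(ds.Tables["Categories"].Columns.Count);  // 1
    }
}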

Create a DataSet Schema Using ReadXmlSchema

Visual Basic .NET

1. In the Solution Explorer, double-click XML.vb.

Visual Studio displays the form in the form designer.

2. Double-click Read Schema.

Visual Studio opens the code editor and adds a Click event handler.

3. Add the following code to the handler:

4. Dim newDS As New System.Data.DataSet()

5. newDS.ReadXmlSchema("masterSchema.xsd")

6.

7. Me.daCategories.Fill(newDS.Tables("Categories"))

8. Me.daProducts.Fill(newDS.Tables("Products"))

9. SetBindings(newDS)

The first two lines declare a new DataSet and configure it by using the

ReadXmlSchema method based on the XSD schema that is defined in the

masterSchema.xsd file, which is in the bin folder of the project directory.

The remaining three lines fill the new DataSet and then call the SetBindings

function, passing it the new DataSet object. SetBindings, which is in the Utility

Functions region of the code editor, binds the controls on the XML form to the

DataSet provided.

10. Press F5 to run the application.

11. Click Read Schema.


The application displays the data from the new DataSet in the form’s controls.

(Note that the navigation buttons will not work because they are specifically

bound to the dsMaster1 DataSet.)

12. Close the application.

Visual C# .NET

1. In the Solution Explorer, double-click XML.cs.

Visual Studio displays the form in the form designer.

2. Double-click Read Schema.

Visual Studio opens the code editor and adds a Click event handler.

3. Add the following code to the handler:

4. System.Data.DataSet newDS = new System.Data.DataSet();

5. newDS.ReadXmlSchema("masterSchema.xsd");

6.

7. this.daCategories.Fill(newDS.Tables["Categories"]);

8. this.daProducts.Fill(newDS.Tables["Products"]);

SetBindings(newDS);

The first two lines declare a new DataSet and configure it by using the

ReadXmlSchema method based on the XSD schema that is defined in the

masterSchema.xsd file, which is in the Debug folder, in the bin folder of the

project directory.


The remaining three lines fill the new DataSet and then call the SetBindings

function, passing it the new DataSet. SetBindings, which is in the Utility

Functions region of the code editor, binds the controls on the XML form to the

DataSet provided.

9. Press F5 to run the application.

10. Click Read Schema.

The application displays the data from the new DataSet in the form’s controls.

(Note that the navigation buttons will not work because they are specifically

bound to the dsMaster1 DataSet.)

11. Close the application.

The InferXmlSchema Method

The DataSet’s InferXmlSchema method derives a DataSet schema from the structure of

the XML data passed to it. As shown in Table 15-2, InferXmlSchema has the same input

sources as the ReadXmlSchema method we examined in the previous section.

Additionally, the InferXmlSchema method accepts an array of strings representing the

namespaces that should be ignored when generating the DataSet schema.

Table 15-2: InferXmlSchema Methods

InferXmlSchema(stream, namespaces()): Reads a schema from the specified stream, ignoring the namespaces identified in the namespaces string array.

InferXmlSchema(file, namespaces()): Reads a schema from the file specified in the file parameter, ignoring the namespaces identified in the namespaces string array.

InferXmlSchema(textReader, namespaces()): Reads a schema from the specified textReader, ignoring the namespaces identified in the namespaces string array.

InferXmlSchema(XmlReader, namespaces()): Reads a schema from the specified XmlReader, ignoring the namespaces identified in the namespaces string array.

InferXmlSchema follows a fixed set of rules when generating a DataSet schema:

§ If the root element in the XML has no attributes and no child elements

that would otherwise be inferred as columns, it is inferred as a DataSet.

Otherwise, the root element is inferred as a table.

§ Elements that have attributes are inferred as tables.

§ Elements that have child elements are inferred as tables.

§ Elements that repeat are inferred as a single table.

§ Attributes are inferred as columns.

§ Elements that have no attributes or child elements and do not repeat are

inferred as columns.

§ If elements that are inferred as tables are nested within other elements

also inferred as tables, a DataRelation is created between the two tables.

A new, primary key column named “TableName_Id” is added to both

tables and used by the DataRelation. A ForeignKeyConstraint is created

between the two tables by using the “TableName_Id” column as the

foreign key.

§ If elements that are inferred as tables contain text but have no child

elements, a new column named “TableName_Text” is created for the text

of each of the elements. If an element is inferred as a table and has text

but also has child elements, the text is ignored.

Note Only nested (hierarchical) data will result in the creation of a

DataRelation. By default, the XML that is created by the DataSet’s

WriteXml method doesn’t create nested data, so a round-trip won’t

result in the same DataSet schema. As we’ll see, however, this

can be controlled by setting the Nested property of the

DataRelation object.
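To make these rules concrete, the following sketch infers a schema from a small XML fragment; the element names are invented for this illustration and are not taken from the chapter's sample files. The repeating <Categories> element has an attribute and child elements, so it becomes a table; the nested <Products> elements become a second table; and, per the rules above, a generated key column and a DataRelation link the two.

using System;
using System.Data;
using System.IO;

class InferSchemaSketch
{
    static void Main()
    {
        string xml =
            "<Catalog>" +
            "  <Categories CategoryName=\"Beverages\">" +
            "    <Products ProductName=\"Chai\" />" +
            "    <Products ProductName=\"Chang\" />" +
            "  </Categories>" +
            "</Catalog>";

        DataSet ds = new DataSet();

        // No namespaces to ignore, so pass an empty array.
        ds.InferXmlSchema(new StringReader(xml), new string[0]);

        foreach (DataTable table in ds.Tables)
            Console.WriteLine("Table: " + table.TableName);
        foreach (DataRelation relation in ds.Relations)
            Console.WriteLine("Relation: " + relation.RelationName);
    }
}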


Infer the Schema of an XML Document

Visual Basic .NET

1. In the code editor, select btnInferSchema in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

Dim newDS As New System.Data.DataSet()
Dim nsStr() As String

newDS.InferXmlSchema("dataOnly.xml", nsStr)

Me.daCategories.Fill(newDS.Tables("Categories"))
Me.daProducts.Fill(newDS.Tables("Products"))
newDS.Relations.Add("CategoriesProducts", _
    newDS.Tables("Categories").Columns("CategoryID"), _
    newDS.Tables("Products").Columns("CategoryID"))
SetBindings(newDS)

The first two lines declare DataSet and String array variables, while the third

line passes them to the InferXmlSchema method. The remaining code adds a

new DataRelation to the new DataSet, fills it, and then calls the SetBindings

utility function that binds the XML form controls to the DataSet.

12. Press F5 to run the application.

13. Click Infer Schema.

The application displays the data in the form controls.

14. Close the application.

Visual C# .NET

1. In the form designer, double-click Infer Schema.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

System.Data.DataSet newDS = new System.Data.DataSet();
string[] nsStr = {};

newDS.InferXmlSchema("dataonly.xml", nsStr);

newDS.Relations.Add("CategoriesProducts",
    newDS.Tables["Categories"].Columns["CategoryID"],
    newDS.Tables["Products"].Columns["CategoryID"]);
this.daCategories.Fill(newDS.Tables["Categories"]);
this.daProducts.Fill(newDS.Tables["Products"]);
SetBindings(newDS);

The first two lines declare DataSet and String array variables, while the third

line passes them to the InferXmlSchema method. The remaining code adds a

new DataRelation to the new DataSet, fills it, and then calls the SetBindings

utility function that binds the XML form controls to the DataSet.

13. Press F5 to run the application.

14. Click Infer Schema.

The application displays the data in the form controls.

15. Close the application.

The ReadXml Method

The DataSet’s ReadXml method reads XML data into a DataSet. Optionally, it may also

create or modify the DataSet schema. As shown in Table 15-3, the ReadXml method

supports the same input sources as the other DataSet XML methods we’ve examined.

Table 15-3: ReadXml Methods

ReadXml(Stream): Reads an XML schema and data from the specified stream.

ReadXml(String): Reads an XML schema and data from the file specified in the string parameter.

ReadXml(TextReader): Reads an XML schema and data from the specified TextReader.

ReadXml(XmlReader): Reads an XML schema and data from the specified XmlReader.

ReadXml(Stream, XmlReadMode): Reads an XML schema, data, or both from the specified stream, as determined by the XmlReadMode.

ReadXml(String, XmlReadMode): Reads an XML schema, data, or both from the file specified in the string parameter, as determined by the XmlReadMode.

ReadXml(TextReader, XmlReadMode): Reads an XML schema, data, or both from the specified TextReader, as determined by the XmlReadMode.

ReadXml(XmlReader, XmlReadMode): Reads an XML schema, data, or both from the specified XmlReader, as determined by the XmlReadMode.

The ReadXml method exposes an optional XmlReadMode parameter that determines

how the XML is interpreted. The possible values for XmlReadMode are shown in Table

15-4.

Table 15-4: ReadXMLMode Values

Auto: Chooses a read mode based on the contents of the XML.

ReadSchema: Reads an inline schema and then loads the data, adding DataTables as necessary.

IgnoreSchema: Loads data into an existing DataSet, ignoring any schema information in the XML.

InferSchema: Infers a DataSet schema from the XML, ignoring any inline schema information.

DiffGram: Reads DiffGram information into an existing DataSet schema.

Fragment: Adds XML fragments that match the existing DataSet schema to the DataSet and ignores those that do not.

Unless the ReadXml method is passed an XmlReadMode parameter of DiffGram, it does

not merge the data that it reads with existing rows in the DataSet. If a row is read with

the same primary key as an existing row, the method will throw an exception.

A DiffGram is an XML format that encapsulates the current and original versions of an

element, along with any DataRow errors. The nominal structure of a DiffGram is shown

here:

<diffgr:diffgram
    xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
    xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <ElementName>
  </ElementName>

  <diffgr:before>
  </diffgr:before>

  <diffgr:errors>
  </diffgr:errors>

</diffgr:diffgram>

In the real DiffGram, the first section (shown as <ElementName> </ElementName> in

the example) will have the name of the complexType defining the DataRow. The section

contains the current version of the contents of the DataRow. The <diffgr:before> section

contains the original version, while the <diffgr:errors> section contains error information

for the row.

In order for DiffGram to be passed as the XmlReadMode parameter, the data must be in

DiffGram format. If you need to merge XML that is written in standard XML format with

existing data, create a new DataSet and then call the DataSet.Merge method to merge

the two sets of data.
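A minimal sketch of that approach is shown here; the method name and file path are placeholders, and the incoming XML is assumed to describe the same tables as the existing DataSet.

using System.Data;

class MergeXmlSketch
{
    // Merges rows from a plain (non-DiffGram) XML file into an already
    // populated DataSet without the duplicate-key exception that a direct
    // ReadXml call would raise.
    static void MergeXmlIntoDataSet(DataSet existing, string xmlPath)
    {
        DataSet incoming = existing.Clone();               // copies the schema only
        incoming.ReadXml(xmlPath, XmlReadMode.IgnoreSchema);
        existing.Merge(incoming);                          // matching keys are updated,
                                                           // new keys are appended
    }
}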

Load XML Data Using ReadXml

Visual Basic .NET

1. In the code editor, select btnReadData in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

Dim newDS As New System.Data.DataSet()
Dim nsStr() As String

newDS.ReadXml("data.xml", XmlReadMode.ReadSchema)
SetBindings(newDS)


The data.xml file contains an inline schema definition, so by passing the

ReadSchema XmlReadMode parameter to the ReadXml method, the code

instructs the DataSet to first create the DataSet schema and then load the

data.

7. Press F5 to run the application.

8. Click Read Data.

The application displays the data retrieved from the file.

9. Close the application.

Visual C# .NET

1. In the form designer, double-click Read Data.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

System.Data.DataSet newDS = new System.Data.DataSet();
string[] nsStr = {};

newDS.ReadXml("data.xml", XmlReadMode.ReadSchema);
SetBindings(newDS);

The data.xml file contains an inline schema definition, so by passing the

ReadSchema XmlReadMode parameter to the ReadXml method, the code

instructs the DataSet to first create the DataSet schema and then load the

data.

7. Press F5 to run the application.

8. Click Read Data.

The application displays the data retrieved from the file.


9. Close the application.

The WriteXmlSchema Method

As might be expected, the WriteXmlSchema method writes the schema of the DataSet,

including tables, columns, and constraints, to the specified output. The versions of the

method, which accept the same output parameters as the other XML methods, are

shown in Table 15-5.

Table 15-5: WriteXmlSchema Methods

WriteXmlSchema(stream): Writes an XML schema to the specified stream.

WriteXmlSchema(string): Writes an XML schema to the file specified in the string parameter.

WriteXmlSchema(TextWriter): Writes an XML schema to the specified TextWriter.

WriteXmlSchema(XmlWriter): Writes an XML schema to the specified XmlWriter.

Create an XML Schema Using WriteXmlSchema

Visual Basic .NET

1. In the code editor, select btnWriteSchema in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.


2. Add the following lines to the event handler:

Me.dsMaster1.WriteXmlSchema("testSchema.xsd")
MessageBox.Show("Finished", "WriteXmlSchema")

Because no path is passed to the method, the file will be written to the bin

subdirectory of the project directory.

4. Press F5 to run the application.

5. Click Write Schema.

The application displays a message box after the file has been written.

6. Close the message box, and then close the application.

7. Open Microsoft Windows Explorer, navigate to the XML/bin project

directory, right-click the testSchema.xsd file, and then select Open

with Notepad.

Windows displays the schema file.

8. Close Microsoft Notepad, and return to Visual Studio.

Visual C# .NET

1. In the form designer, double-click Write Schema.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

this.dsMaster1.WriteXmlSchema("testSchema.xsd");
MessageBox.Show("Finished", "WriteXmlSchema");


Because no path is passed to the method, the file will be written to the bin

subdirectory of the project directory.

4. Press F5 to run the application.

5. Click Write Schema.

The application displays a message box after the file has been written.

6. Close the message box, and then close the application.

7. Open Microsoft Windows Explorer, navigate to the XML/bin/Debug

project directory, right-click the testSchema.xsd file, and then select

Open with Notepad.

Windows displays the schema file.

8. Close Microsoft Notepad, and return to Visual Studio.

The WriteXml Method

Like the ReadXml method, the DataSet’s WriteXml method writes XML data and,

optionally, DataSet schema information, to a specified output, as shown in Table 15-6.

As we’ll see in the following section, the structure of the XML resulting from the WriteXml

method is controlled by DataSet property settings.

Table 15-6: WriteXml Methods

WriteXml(Stream): Writes XML data to the specified stream.

WriteXml(String): Writes XML data to the file specified in the string parameter.

WriteXml(TextWriter): Writes XML data to the specified TextWriter.

WriteXml(XmlWriter): Writes XML data to the specified XmlWriter.

WriteXml(Stream, XmlWriteMode): Writes an XML schema, data, or both to the specified stream, as determined by the XmlWriteMode.

WriteXml(String, XmlWriteMode): Writes an XML schema, data, or both to the file specified in the string parameter, as determined by the XmlWriteMode.

WriteXml(TextWriter, XmlWriteMode): Writes an XML schema, data, or both to the specified TextWriter, as determined by the XmlWriteMode.

WriteXml(XmlWriter, XmlWriteMode): Writes an XML schema, data, or both to the specified XmlWriter, as determined by the XmlWriteMode.

The valid XmlWriteMode parameters are shown in Table 15-7. The DiffGram parameter

causes the output to be written in DiffGram format. If no XmlWriteMode parameter is

specified, IgnoreSchema is assumed.

Table 15-7: WriteXMLMode Values

IgnoreSchema: Writes the data without a schema.

WriteSchema: Writes the data with an inline schema.

DiffGram: Writes the entire DataSet in DiffGram format.

Write Data to a File in XML Format

Visual Basic .NET

1. In the code editor, select btnWriteData in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

Me.daCategories.Fill(Me.dsMaster1.Categories)
Me.daProducts.Fill(Me.dsMaster1.Products)

Me.dsMaster1.WriteXml("newData.xml", XmlWriteMode.IgnoreSchema)
MessageBox.Show("Finished", "WriteXml")

Because no path is passed to the method, the file will be written to the bin

subdirectory of the project directory.

7. Press F5 to run the application.

8. Click Write Data.

The application displays a message box after the file has been written.


9. Close the message box, and then close the application.

10. Open Windows Explorer, navigate to the XML/bin project directory,

and double-click the newData.xml file.

The XML file opens in Microsoft Internet Explorer.

11. Close Internet Explorer, and return to Visual Studio.

Visual C# .NET

1. In the form designer, double-click Write Data.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

this.daCategories.Fill(this.dsMaster1.Categories);
this.daProducts.Fill(this.dsMaster1.Products);

this.dsMaster1.WriteXml("newData.xml", XmlWriteMode.IgnoreSchema);
MessageBox.Show("Finished", "WriteXml");

Because no path is passed to the method, the file will be written to the bin

subdirectory of the project directory.

7. Press F5 to run the application.

8. Click Write Data.

The application displays a message box after the file has been written.


9. Close the message box, and then close the application.

10. Open Windows Explorer, navigate to the XML/bin/Debug project

directory, and double-click the newData.xml file.

The XML file opens in Microsoft Internet Explorer.

11. Close Internet Explorer, and return to Visual Studio.

Controlling How the XML Is Written

By default, the WriteXml method generates XML that is formatted according to the

nominal structure we examined in Chapter 14, with DataTables structured as

complexTypes and DataColumns as elements within them.

This isn’t necessarily what you want the output to be. If, for example, you want to read

the data back into a DataSet, ADO.NET won’t create relationships correctly unless the

schema is present, which is an unnecessary overhead in many situations, or the related

data is nested hierarchically in the XML.

In other situations, you may need to control whether individual columns are written as

elements, attributes, or simple text, or even prevent some columns from being written at

all. This might be the case, for example, if you’re interchanging data with another

application.


Using the Nested Property of the DataRelation

By convention, XML data is usually represented hierarchically—related rows are nested

inside their parent rows.

The Nested property of DataRelation causes the XML to be written so that the child rows

are nested within the parent rows.

Write Related Data Hierarchically

Visual Basic .NET

1. In the code editor, select btnWriteNested in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

Me.daCategories.Fill(Me.dsMaster1.Categories)
Me.daProducts.Fill(Me.dsMaster1.Products)

Me.dsMaster1.Relations("CategoriesProducts").Nested = True
Me.dsMaster1.WriteXml("nestedData.xml", XmlWriteMode.IgnoreSchema)
MessageBox.Show("Finished", "WriteXml Nested")

The code sets the Nested property to True before writing the DataSet to the nestedData.xml file.

8. Press F5 to run the application.

9. Click Write Nested.

The application displays a message box after the file has been written.

10. Close the message box, and then close the application.

11. Open Windows Explorer, navigate to the XML/bin project directory,

and double-click the nestedData.xml file.

The XML file opens in Internet Explorer.


12. Close Internet Explorer, and return to Visual Studio.

Visual C# .NET

1. In the form designer, double-click Write Nested.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

this.daCategories.Fill(this.dsMaster1.Categories);
this.daProducts.Fill(this.dsMaster1.Products);

this.dsMaster1.Relations["CategoriesProducts"].Nested = true;
this.dsMaster1.WriteXml("nestedData.xml", XmlWriteMode.IgnoreSchema);
MessageBox.Show("Finished", "WriteXml Nested");

The code sets the Nested property to true before writing the DataSet to the nestedData.xml file.

8. Press F5 to run the application.

9. Click Write Nested.

The application displays a message box after the file has been written.

10. Close the message box, and then close the application.

11. Open Windows Explorer, navigate to the XML/bin/Debug project

directory, and double-click the nestedData.xml file.

The XML file opens in Internet Explorer.


12. Close Internet Explorer, and return to Visual Studio.

Using the ColumnMapping Property of the DataColumn

The DataColumn’s ColumnMapping property controls how the column will be written by

the WriteXml method. The possible values for the ColumnMapping property are shown in

Table 15-8.

Element, the default value, writes the column as a nested element within the

complexType representing the DataTable, while Attribute writes the column as one of its

attributes. These two values can be freely mixed within any given DataTable. The

Hidden value prevents the column from being written at all.

SimpleContent, which writes the column as a simple text value, cannot be combined with

columns that are written as elements or attributes, nor can it be used if a DataRelation referencing the table has its Nested property set to true.

Table 15-8: Column MappingType Values

Element: The column is written as an XML element.

Attribute: The column is written as an XML attribute.

SimpleContent: The contents of the column are written as text.

Hidden: The column will not be included in the XML output.
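As a brief illustration of mixing these values, the following sketch (which assumes the chapter's Categories table and its column names) writes the key column as an attribute and omits the Description column from the output entirely.

using System.Data;

class ColumnMappingSketch
{
    static void WriteWithoutDescription(DataSet ds)
    {
        DataTable categories = ds.Tables["Categories"];

        // CategoryID becomes an attribute; Description is not written at all.
        categories.Columns["CategoryID"].ColumnMapping = MappingType.Attribute;
        categories.Columns["Description"].ColumnMapping = MappingType.Hidden;

        ds.WriteXml("categoriesTrimmed.xml", XmlWriteMode.IgnoreSchema);
    }
}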

Write Columns as Attributes

Visual Basic .NET

1. In the code editor, select btnAttributes in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.


2. Add the following lines to the event handler:

Me.daCategories.Fill(Me.dsMaster1.Categories)

With Me.dsMaster1.Categories
    .Columns("CategoryID").ColumnMapping = MappingType.Attribute
    .Columns("CategoryName").ColumnMapping = MappingType.Attribute
    .Columns("Description").ColumnMapping = MappingType.Attribute
End With
Me.dsMaster1.WriteXml("attributes.xml", XmlWriteMode.IgnoreSchema)
MessageBox.Show("Finished", "Write Attributes")

11. Press F5 to run the application.

12. Click Attributes.

The application displays a message box after the file has been written.

13. Close the message box, and then close the application.

14. Open Windows Explorer, navigate to the XML/bin project directory,

and double-click the attributes.xml file.

The XML file opens in Internet Explorer.

15. Close Internet Explorer, and return to Visual Studio.


Visual C# .NET

1. In the form designer, double-click Attributes.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

System.Data.DataTable cat = this.dsMaster1.Categories;
this.daCategories.Fill(cat);

cat.Columns["CategoryID"].ColumnMapping = MappingType.Attribute;
cat.Columns["CategoryName"].ColumnMapping = MappingType.Attribute;
cat.Columns["Description"].ColumnMapping = MappingType.Attribute;

this.dsMaster1.WriteXml("attributes.xml", XmlWriteMode.IgnoreSchema);
MessageBox.Show("Finished", "Write Attributes");

11. Press F5 to run the application.

12. Click Attributes.

The application displays a message box after the file has been written.

13. Close the message box, and then close the application.

14. Open Windows Explorer, navigate to the XML/bin/Debug project directory,

and double-click the attributes.xml file.

The XML file opens in Internet Explorer.


15. Close Internet Explorer, and return to Visual Studio.

The XmlDataDocument Object

Although the relational data model is efficient, there are times when it is convenient to

manipulate a set of data by using the tools provided by XML—the Extensible Stylesheet

Language (XSL), XSLT, and XPath.

The .NET Framework’s XmlDataDocument makes that possible. The XmlDataDocument

allows XML-structured data to be manipulated as a DataSet. It doesn’t create a new set

of data, but rather it creates a DataSet that references all or part of the XML data.

Because there’s only one set of data, changes made in one view will automatically be

reflected in the other view, and of course, memory resources are conserved because

only one copy of the data is being maintained.

Depending on the initial source of your data, you can create an XmlDataDocument

based on the schema and contents of a DataSet, or you can create a DataSet based on

the contents of an XmlDataDocument. In either case, changes made to the data stored

in one view will be reflected in the other view.

To create an XmlDataDocument based on an existing DataSet, pass the DataSet to the

XmlDataDocument constructor:

myXDD = New XmlDataDocument(myDS)

If the DataSet schema has not been established prior to creating the XmlDataDocument,

both schemas must be established manually—schema changes made to one object will

not be propagated to the other object.

Alternatively, to begin with an XML document and create a DataSet, you can use the

default XmlDataDocument constructor and then reference its DataSet property:

myXDD = New XmlDataDocument()

myDS = myXDD.DataSet

If you use this method, you must create the DataSet schema manually by adding objects

to the DataSet’s Tables collection and the DataTable’s Columns collection. In order for

the data in the XmlDataDocument to be available through the DataSet, the DataTable

and DataColumn names must match those in the XmlDataDocument. The matching is

case-sensitive.

The second method, while it requires slightly more code, provides a mechanism for

creating a partial relational view of the XML data. There is no requirement to duplicate

the entire XML schema in the DataSet. Any DataTables or DataColumns that are not in

the DataSet will simply be ignored during DataSet operations.


Data can be loaded into either document at any time, before or after synchronization.

Any data changes made to one object, including adding, deleting, or changing values,

will automatically be reflected in the other object.
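The following stand-alone sketch illustrates that partial-view approach. The file name and the table and column names are assumptions based on the chapter's sample data; they must match the element names in the XML exactly, including case.

using System;
using System.Data;
using System.Xml;

class PartialViewSketch
{
    static void Main()
    {
        XmlDataDocument doc = new XmlDataDocument();
        DataSet ds = doc.DataSet;

        // Define only the slice of the XML we want to see relationally.
        DataTable categories = ds.Tables.Add("Categories");
        categories.Columns.Add("CategoryName", typeof(string));

        // Elements that are not mapped in the DataSet schema are ignored
        // by the relational view but remain available through the document.
        doc.Load("dataOnly.xml");

        foreach (DataRow row in categories.Rows)
            Console.WriteLine(row["CategoryName"]);
    }
}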

Create a Synchronized XML View of a DataSet

Visual Basic .NET

1. In the code editor, select btnDocument in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

Dim myXDD As System.Xml.XmlDataDocument

myXDD = New System.Xml.XmlDataDocument(Me.dsMaster1)
myXDD.Load("dataOnly.xml")

SetBindings(Me.dsMaster1)

The first line declares the XmlDataDocument variable, while the second line

synchronizes it with the dsMaster1 DataSet. The third line loads data into the

XmlDataDocument.

The final line binds the form controls to dsMaster1. Because the DataSet has

been synchronized with the myXDD XmlDataDocument, the data loaded into

myXDD will be available in dsMaster1.

8. Press F5 to run the application.

9. Click Documents.

The application displays the data in the form.

10. Close the application.

Visual C# .NET

1. In the form designer, double-click Document.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

System.Xml.XmlDataDocument myXDD;

myXDD = new System.Xml.XmlDataDocument(this.dsMaster1);
myXDD.Load("dataOnly.xml");

SetBindings(this.dsMaster1);

The first line declares the XmlDataDocument variable, while the second line

synchronizes it with the dsMaster1 DataSet. The third line loads data into the

XmlDataDocument.

The final line binds the form controls to dsMaster1. Because the DataSet has

been synchronized with the myXDD XmlDataDocument, the data loaded into

myXDD will be available in dsMaster1.

8. Press F5 to run the application.

9. Click Documents.

The application displays the data in the form.

10. Close the application.

Chapter 15 Quick Reference

To retrieve an XML schema from a DataSet, use the DataSet's GetXmlSchema method:
XmlSchemaString = myDataSet.GetXmlSchema()

To retrieve data from a DataSet in XML format, use the DataSet's GetXml method:
XmlDataString = myDataSet.GetXml()

To create a DataSet schema from an XML schema, use the DataSet's ReadXmlSchema method:
myDataSet.ReadXmlSchema("schema.xsd")

To infer the schema of an XML document, use the DataSet's InferXmlSchema method:
myDataSet.InferXmlSchema("data.xml", string[])

To load XML data into a DataSet, use the DataSet's ReadXml method:
myDataSet.ReadXml("data.xml")

To create an XML schema from a DataSet, use the DataSet's WriteXmlSchema method:
myDataSet.WriteXmlSchema("schema.xsd")

To write data to an XML document, use the DataSet's WriteXml method:
myDataSet.WriteXml("data.xml")

To create a synchronized XML view of a DataSet, create an instance of an XmlDataDocument that references the DataSet:
Dim myXDD As System.Xml.XmlDataDocument
myXDD = New System.Xml.XmlDataDocument(myDataSet)

Chapter 16: Using ADO in the .NET Framework

Overview

In this chapter, you’ll learn how to:

§ Establish a reference to the ADO and ADOX COM libraries

§ Create an ADO connection

§ Retrieve data from an ADO Recordset

§ Update an ADO Recordset

§ Create a database using ADOX

§ Add a table to a database using ADOX

In the previous two chapters, we examined using XML data with Microsoft ADO.NET

objects. In this chapter, we’ll look at the interface to another type of data, legacy data

objects created by using previous versions of ADO.

We’ll also examine the ADOX library, which provides the ability to create database

objects under programmatic control. This functionality is not available in ADO.NET,

although you can execute DDL statements such as CREATE TABLE on servers that

support them.

Understanding COM Interoperability

Maintaining interoperability with COM components was one of the design goals of the

Microsoft .NET Framework, and this achievement extends to previous versions of ADO.

By using the COM Interop functions provided by the .NET Framework, you can gain

access to all the objects, methods, and events that are exposed by any COM object

simply by establishing a reference to it. This includes previous versions of ADO and

COM objects that you’ve developed using them.

After the reference has been established, the COM objects behave just as though they

were .NET Framework classes. What happens behind the scenes, of course, is more

complicated. When a reference to any COM object, including ADO or ADOX, is declared,


the .NET Framework creates an interop assembly that handles communication between

the .NET Framework and COM.

The interop assembly handles a number of tasks, but the most important is data type

marshaling. Table 16-1 shows the type conversion performed by the interop assembly

for standard COM value types.

Table 16-1: COM Data Type Marshaling

bool: Int32
char, small: SByte
short: Int16
long, int: Int32
hyper: Int64
unsigned char, byte: Byte
wchar_t, unsigned short: UInt16
unsigned long, unsigned int: UInt32
unsigned hyper: UInt64
float: Single
double: Double
VARIANT_BOOL: Boolean
void *: IntPtr
HRESULT: Int16 or IntPtr
SCODE: Int32
BSTR: String
LPSTR: String
LPWSTR: String
VARIANT: Object
DECIMAL: Decimal
DATE: DateTime
GUID: Guid
CURRENCY: Decimal
IUnknown *: Object
IDispatch *: Object
SAFEARRAY(type): type[]

Microsoft ADO.Net – Step by Step 390

Using ADO in the .NET Framework

In addition to the generic COM interoperability and data type marshaling provided by the

.NET Framework for all COM objects, the .NET Framework provides specific support for

the ADO and ADOX libraries, and COM objects built using them.

This additional support includes data marshaling for core ADO data types. The .NET

Framework equivalents for core ADO types are shown in Table 16-2. Of course, after a

reference to ADO is established, complex types such as Recordset and ADO Connection

become available through the ADO component.

Table 16-2: ADO Data Type Marshaling

adEmpty: null
adBoolean: Int16
adTinyInt: SByte
adSmallInt: Int16
adInteger: Int32
adBigInt: Int64
adUnsignedTinyInt: promoted to Int16
adUnsignedSmallInt: promoted to Int32
adUnsignedInt: promoted to Int64
adUnsignedBigInt: promoted to Decimal
adSingle: Single
adDouble: Double
adCurrency: Decimal
adDecimal: Decimal
adNumeric: Decimal
adDate: DateTime
adDBDate: DateTime
adDBTime: DateTime
adDBTimeStamp: DateTime
adFileTime: DateTime
adGUID: Guid
adError: ExternalException
adIUnknown: object
adIDispatch: object
adVariant: object
adPropVariant: object
adBinary: byte[]
adChar: string
adWChar: string
adBSTR: string
adChapter: not supported
adUserDefined: not supported
adVarNumeric: not supported

Establishing a Reference to ADO

The first step in using a previous version of ADO, or a COM component that references a

previous version, is to set a reference to the component. There are several methods for

exposing the ADO component, but the most convenient is to simply add the reference

within Microsoft Visual Studio .NET.

Add References to the ADO and ADOX Libraries

1. In Visual Studio, open the ADOInterop project from the Start page or

the File menu.

2. In the Solution Explorer, double-click ADOInterop.vb (or

ADOInterop.cs if you’re using C#).

Visual Studio displays the form in the form designer.

3. On the Project menu, select Add Reference.

Visual Studio opens the Add Reference dialog box.


4. On the COM tab, select the component named Microsoft ActiveX Data

Objects 2.1 Library, and then click Select.

5. Select the component named Microsoft ADO Ext. 2.7 for DDL and

Security, and then click Select.

6. Click OK.

Visual Studio closes the dialog box and adds the references to the project.

7. In the Solution Explorer, expand the references node.


Visual Studio displays the new references.

Creating ADO Objects

After the references to the ADO components have been established, ADO objects can

be created and their properties set just like any object exposed by the .NET Framework

class library.

Like ADO.NET, ADO uses a Connection object to represent a unique session with a data

source. The most important property of an ADO connection, just like an ADO.NET

connection, is the ConnectionString, which establishes the Data Provider, the database

information, and, if appropriate, the user information.

Create an ADO Connection

Visual Basic .NET

1. Press F7 to open the code editor.

2. Add the following procedure, specifying the complete path for the

dsStr text value:

Private Function create_connection() As ADODB.Connection
    Dim dsStr As String
    Dim dsCn As String
    Dim cn As New ADODB.Connection()

    dsStr = "<<Specify the path to the Access nwind sample db here>>"
    dsCn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dsStr & ";"
    cn.ConnectionString = dsCn

    Return cn

End Function

Visual C# .NET

1. Press F7 to open the code editor.

2. Add the following procedure, specifying the complete path for the

dsStr text value:

private ADODB.Connection create_connection()
{
    string dsStr;
    string dsCn;

    ADODB.Connection cn = new ADODB.Connection();
    dsStr = "<<Specify the path to the Access nwind sample db here>>";
    dsCn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dsStr + ";";
    cn.ConnectionString = dsCn;

    return cn;
}

This function simply creates an ADO connection and returns it to the caller.

We’ll use the function to simplify creating connections in later exercises.

(ConnectionStrings can be tedious to type.)

In addition to support for ADO data types, the OleDbDataAdapter provides direct support

for ADO Recordsets by exposing the Fill method that accepts an ADO Recordset as a

parameter. There are two versions of the method, as shown in Table 16-3.

Table 16-3: OleDbDataAdapter Fill Methods

Fill(DataTable, Recordset): Adds or refreshes rows in the DataTable to match those in the Recordset.

Fill(DataSet, Recordset, DataTable): Adds or refreshes rows in the DataTable in the specified DataSet to match those in the Recordset.

If the DataTable passed to the Fill method doesn’t exist in the DataSet, it is created

based on the schema of the ADO Recordset. Unless primary key information exists, the


rows in the ADO Recordset will simply be added to the DataTable. If primary key

information does exist, matching rows in the ADO Recordset will be merged with those in

the DataTable.

Retrieve Data from an ADO Recordset

Visual Basic .NET

1. In the code editor, select btnOpen in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

Dim rs As New ADODB.Recordset()
Dim cnADO As ADODB.Connection
Dim daTemp As New OleDb.OleDbDataAdapter()

cnADO = create_connection()
cnADO.Open()

rs.Open("Select * From CategoriesByName", cnADO)
daTemp.Fill(Me.dsCategories1.Categories, rs)
cnADO.Close()

SetBindings(Me.dsCategories1)

The first three lines declare an ADO Recordset, an ADO Connection, and an

OleDbDataAdapter. The next two lines call the create_connection function

that we created in the previous exercise to create the ADO Connection object,

and then open the connection.

The next three lines open the ADO Recordset, load its rows into the DataSet through the temporary DataAdapter, and then close the ADO Connection, while the final line calls a

function (in the Utility Functions region of the code editor) that binds the

form’s text boxes to the specified DataSet.

14. Press F5 to run the application.

15. Click Open ADO.

The application loads the data from ADO and displays it in the form’s text

boxes.

16. Close the application.


Visual C# .NET

1. In the form designer, double-click Open ADO.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

ADODB.Recordset rs = new ADODB.Recordset();
ADODB.Connection cnADO;

System.Data.OleDb.OleDbDataAdapter daTemp =
    new System.Data.OleDb.OleDbDataAdapter();
cnADO = create_connection();

cnADO.Open(cnADO.ConnectionString, "", "", -1);
rs.Open("Select * From CategoriesByName",
    cnADO, ADODB.CursorTypeEnum.adOpenForwardOnly,
    ADODB.LockTypeEnum.adLockOptimistic, 1);
daTemp.Fill(this.dsCategories1.Categories, rs);

cnADO.Close();
SetBindings(this.dsCategories1);

The first three lines declare an ADO Recordset, an ADO Connection, and an

OleDbDataAdapter. The next two lines call the create_connection function

that we created in the previous exercise to create the ADO Connection object,

and then open the connection.

The next three lines open the ADO Recordset, load its rows into the DataSet through the temporary DataAdapter, and then close the ADO Connection, while the final line calls a

function (in the Utility Functions region of the code editor) that binds the

form’s text boxes to the specified DataSet.

18. Press F5 to run the application.

19. Click Open ADO.

The application loads the data from ADO and displays it in the form’s text

boxes.

20. Close the application.

The OleDbDataAdapter’s Fill method provides a convenient mechanism for loading data

from an ADO Recordset into a .NET Framework DataTable, but unfortunately, the

communication is one-way. The .NET Framework doesn’t provide a direct method for

updating an ADO Recordset based on ADO.NET data.


Fortunately, it isn’t difficult to update an ADO data source from within the .NET

Framework—simply copy the data values from the appropriate source and use the

intrinsic ADO functions to do the update.
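As a sketch of that copy-the-values approach, assuming the ADODB COM reference added earlier in this chapter and a DataTable whose column names match the Recordset's field names, a helper like the following moves a single DataRow into an open, updatable Recordset:

using System;
using System.Data;

class RecordsetCopySketch
{
    static void CopyRowToRecordset(DataRow row, ADODB.Recordset rs)
    {
        rs.AddNew(Type.Missing, Type.Missing);

        // Match ADO.NET columns to ADO fields by name.
        foreach (DataColumn column in row.Table.Columns)
            rs.Fields[column.ColumnName].Value = row[column];

        rs.Update(Type.Missing, Type.Missing);
    }
}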

Update an ADO Recordset

Visual Basic .NET

1. In the code editor, select btnUpdate in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

Dim rsADO As New ADODB.Recordset()
Dim cnADO As ADODB.Connection

cnADO = create_connection()
cnADO.Open()
rsADO.ActiveConnection = cnADO
rsADO.Open("Select * From CategoriesByName", cnADO, _
    ADODB.CursorTypeEnum.adOpenDynamic, _
    ADODB.LockTypeEnum.adLockOptimistic)

rsADO.AddNew()
rsADO.Fields("CategoryName").Value = "Test"
rsADO.Fields("Description").Value = "Description"
rsADO.Update()

rsADO.Close()
cnADO.Close()
MessageBox.Show("Finished", "Update")

As always, the first few lines declare some local values. The next five lines

create a connection and an ADO Recordset. The next four lines use ADO’s

AddNew and Update methods to create a new row and set its values. Finally,

the Recordset and ADO Connection are closed, and a message box is

displayed.

20. Press F5 to run the application.

21. Click Update ADO.

The application adds the row to the database, and then displays a message box telling you that the new row has been added.

22. Close the message box.

23. Click Open ADO to load the data into the form, and then click the

Last (“>|”) button to display the last row.

The application displays the new row.


24. Close the application.

25. If you have Microsoft Access, open the nwind database and confirm

that the row has been added.

Visual C# .NET

1. In the form designer, double-click Update ADO.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

ADODB.Recordset rsADO = new ADODB.Recordset();
ADODB.Connection cnADO;

cnADO = create_connection();
cnADO.Open(cnADO.ConnectionString, "", "", -1);

rsADO.ActiveConnection = cnADO;
rsADO.Open("Select * From CategoriesByName", cnADO,
    ADODB.CursorTypeEnum.adOpenDynamic,
    ADODB.LockTypeEnum.adLockOptimistic, -1);

rsADO.AddNew(Type.Missing, Type.Missing);
rsADO.Fields[1].Value = "Test";
rsADO.Fields[2].Value = "Description";
rsADO.Update(Type.Missing, Type.Missing);

rsADO.Close();
cnADO.Close();
MessageBox.Show("Finished", "Update");


As always, the first few lines declare some local values. The next five lines

create a connection and an ADO recordset. The next four lines use ADO’s

AddNew and Update methods to create a new row and set its values. Finally,

the recordset and ADO connection are closed, and a message box is

displayed.

21. Press F5 to run the application.

22. Click Update ADO.

The application adds the row to the database, and then displays a message box telling you that the new row has been added.

23. Close the message box.

24. Click Open ADO to load the data into the form, and then click the

Last (“>|”) button to display the last row.

The application displays the new row.

25. Close the application.

26. If you have Access, open the nwind database and confirm that the

row has been added.

Using ADOX in the .NET Framework

ADOX, more formally the “Microsoft ADO Extensions for DDL and Security,” exposes an

object model that allows data source objects to be created and manipulated.

The ADOX object model is shown in the following figure. Not all data sources support all

of the objects in the model; this is determined by the specific OleDb Data Provider.


The top-level object, Catalog, equates to a specific data source. This will almost always

be a database, but specific OleDb Data Providers might expose different objects. The

Groups and Users collections control access security for those data sources that

implement it.

The Tables object represents the tables within the database. Each table contains a

Columns collection, which represents individual fields in the table; an Indexes collection,

which represents physical indexes; and a Keys collection, which is used to define

unique, primary, and foreign keys.

The Procedures collection represents stored procedures on the data source, while the

Views collection represents Views or Queries. This model doesn’t always match the

object model of the data source. For example, Microsoft Jet (the underlying data source

for Access) represents both Views and Procedures as Query objects. When mapped to

an ADOX Catalog, any query that updates or inserts rows, along with any query that

contains parameters, is mapped to a Procedure object. Queries that consist solely of

SELECT statements are mapped to Views.
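The following sketch (the database path is a placeholder) walks these collections for a Jet database and prints what ADOX reports; queries that Access displays in a single list come back split between the Views and Procedures collections, as described above.

using System;

class CatalogSketch
{
    static void Main()
    {
        ADODB.Connection cn = new ADODB.Connection();
        cn.Open("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=nwind.mdb", "", "", -1);

        ADOX.Catalog catalog = new ADOX.Catalog();
        catalog.ActiveConnection = cn;

        foreach (ADOX.Table table in catalog.Tables)
            if (table.Type == "TABLE")                         // skip system tables
                Console.WriteLine("Table: " + table.Name);

        foreach (ADOX.View view in catalog.Views)
            Console.WriteLine("View: " + view.Name);           // SELECT-only queries

        foreach (ADOX.Procedure procedure in catalog.Procedures)
            Console.WriteLine("Procedure: " + procedure.Name); // parameter and action queries

        cn.Close();
    }
}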

Creating Database Objects Using ADOX

As we’ve seen, ADOX provides a mechanism for creating data source objects

programmatically. ADO.NET doesn’t support this functionality. You can, of course,

execute a CREATE <object> SQL statement using an ADO.NET DataCommand, but

data definition syntax varies wildly between data sources, so it will often be more

convenient to use ADOX and let the OleDb Data Provider handle the operation.

The Catalog object supports a Create method that creates a new database, while the

Tables and Columns collections support Append methods that are used to create new

schema objects.

Create a Database Using ADOX

Visual Basic .NET

1. In the code editor, select btnMakeDB in the Control Name combo box,

and then select Click in the Method Name combo box.


Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler, specifying the path to the

Sample DBs directory on your system where indicated:

Dim dsStr, dsCN As String
Dim cnADO As New ADODB.Connection()
Dim mdb As New ADOX.Catalog()

dsStr = "<<specify the path to the Sample DBs directory>>" _
    + "\test.mdb"
dsCN = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dsStr & ";"
cnADO.ConnectionString = dsCN

mdb.Create(dsCN)

mdb.ActiveConnection.Close()
MessageBox.Show("Finished", "Make DB")

15. Press F5 to run the application, and then click Make DB.

The application creates a Jet database named Test in the Sample DBs

directory and then displays a Finished message box.

16. Close the dialog box, and then close the application.

17. Verify that the new database has been added using Microsoft

Windows Explorer.


Visual C# .NET

1. In the form designer, double-click Make DB.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler, specifying the path to the

Sample DBs directory on your system where indicated:

string dsStr, dsCN;
ADODB.Connection cnADO = new ADODB.Connection();
ADOX.Catalog mdb = new ADOX.Catalog();

dsStr = "<<specify the path to the Sample DBs directory>>"
    + "\\test.mdb";
dsCN = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dsStr + ";";
cnADO.ConnectionString = dsCN;

mdb.Create(dsCN);

MessageBox.Show("Finished", "Make DB");

15. Press F5 to run the application, and then click Make DB.

The application creates a Jet database named Test in the Sample DBs

directory and then displays a Finished message box.


16. Close the dialog box, and then close the application.

17. Verify that the new database has been added using Microsoft

Windows Explorer.

Add a Table to a Database Using ADOX

Visual Basic .NET

1. In the code editor, select btnMakeTable in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

Dim cnADO As ADODB.Connection
Dim mdb As New ADOX.Catalog()
Dim dt As New ADOX.Table()

cnADO = create_connection()
cnADO.Open()
mdb.ActiveConnection = cnADO

With dt
    .Name = "New Table"
    .Columns.Append("TableID", ADOX.DataTypeEnum.adWChar, 5)
    .Columns.Append("Value", ADOX.DataTypeEnum.adWChar, 20)
    .Keys.Append("PK_NewTable", ADOX.KeyTypeEnum.adKeyPrimary, _
        "TableID")
End With
mdb.Tables.Append(dt)

mdb.ActiveConnection.Close()
MessageBox.Show("Finished", "Make Table")

21. Press F5 to run the application, and then click Make Table.

The application adds the table to the nwind database and displays a message

box telling you that the new table has been added.

22. Close the message box, and then close the application.

23. If you have Access, open the nwind database and confirm that the

new table has been added.

Visual C# .NET

1. In the form designer, double-click Make Table.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

3. ADODB.Connection cnADO;

4. ADOX.Catalog mdb = new ADOX.Catalog();

5. ADOX.Table dt = new ADOX.Table();

6.

7. cnADO = create_connection();

8. cnADO.Open(cnADO.ConnectionString, “”, “”, -1);

9. mdb.ActiveConnection = cnADO;

10.

11. dt.Name = “New Table”;

12. dt.Columns.Append(“TableID”,

ADOX.DataTypeEnum.adWChar, 5);

Microsoft ADO.Net – Step by Step 405

13. dt.Columns.Append(“Value”,

ADOX.DataTypeEnum.adWChar, 20);

14. dt.Keys.Append(“PK_NewTable”,

ADOX.KeyTypeEnum.adKeyPrimary, “TableID”);

15. mdb. Tables.Append(dt);

16.

17. MessageBox.Show(“Finished”, “Make Table”);

18. Press F5 to run the application, and then click Make Table.

The application adds the table to the nwind database and displays a message

box telling you that the new table has been added.

19. Close the message box, and then close the application.

20. If you have Access, open the nwind database and confirm that the

new table has been added.

Chapter 16 Quick Reference

To establish a reference to an ADO or ADOX library, choose Add Reference on the Project menu, select the library from the COM tab of the Add Reference dialog box, click Select, and then click OK.

To create an ADO object, reference the ADO COM library, and then use the usual .NET Framework object creation commands.

To load data from an ADO Recordset into an ADO.NET DataSet, use the DataAdapter's Fill method:
myDataAdapter.Fill(DataTable, ADORecordset)

To update an ADO Recordset, open the ADO Connection and ADO Recordset, and then use the AddNew or Update methods.

To create a database using ADOX, use the ADOX Catalog object's Create method:
adoxCatalog.Create(connectionString)

To add a table to a database using ADOX, use the Append method of the ADOX Catalog object's Tables collection:
adoxCatalog.Tables.Append(adoxTable)

List of Tables

Chapter 2: Creating Connections

Table 2-1: Connection Constructors

Table 2-2: OleDbConnection Properties

Table 2-3: SqlConnection Properties

Table 2-4: Connection Methods

Table 2-5: Connection States

Chapter 3: Data Commands and the DataReader

Table 3-1: Command Constructors

Table 3-2: Data Command Properties

Table 3-3: CommandType Values

Table 3-4: UpdatedRowSource Values

Table 3-5: Parameters Collection Methods

Table 3-6: Command Methods

Table 3-7: CommandBehavior Values

Table 3-8: DataReader Properties

Table 3-9: DataReader Methods

Table 3-10: GetType Methods

Chapter 4: The DataAdapter

Table 4-1: DataAdapter Properties

Table 4-2: MissingMappingAction Values

Table 4-3: MissingSchemaAction Values

Table 4-4: DbDataAdapter Fill Methods

Table 4-5: OleDbDataAdapter Fill Methods

Table 4-6: DbDataAdapter Update Methods

Table 4-7: RowUpdatingEventArgs Properties

Chapter 5: Transaction Processing in ADO.NET

Table 5-1: Connection BeginTransaction Methods

Table 5-2: Additional SQL BeginTransaction Methods

Table 5-3: Isolation Levels

Table 5-4: Transaction BeginTransaction Methods

Chapter 6: The DataSet

Table 6-1: DataSet Constructors

Table 6-2: DataSet Properties

Table 6-3: Primary DataSet Methods

Chapter 7: The DataTable

Table 7-1: DataTable Constructors

Table 7-2: DataSet Add Table Methods

Table 7-3: DataTable Properties

Table 7-4: DataColumn Constructors

Table 7-5: DataColumn Properties

Table 7-6: DataRow Properties

Table 7-7: Rows.Add Methods

Table 7-8: DataRowState Values

Table 7-9: Constraint Properties

Table 7-10: ForeignKeyConstraint Properties

Table 7-11: Action Rules


Table 7-12: UniqueConstraint Properties

Table 7-13: DataTable Methods

Table 7-14: DataRow Methods

Table 7-15: DataTable Events

Chapter 8: The DataView

Table 8-1: DataRowView Properties

Table 8-2: DataView Constructors

Table 8-3: DataView Properties

Table 8-4: Aggregate Functions

Table 8-5: Comparison Operators

Table 8-6: Arithmetic Operators

Table 8-7: Special Functions

Table 8-8: DataViewRowState Values

Table 8-9: DataView Methods

Chapter 9: Editing and Updating Data

Table 9-1: DataRowStates

Table 9-2: DataRowVersions

Table 9-3: Remove Methods

Table 9-4: DataRow Item Properties

Table 9-5: DbDataAdapter Update Methods

Table 9-6: UpdateRowSource Values

Chapter 10: ADO.NET Data-Binding in Windows Forms

Table 10-1: BindingContext Properties

Table 10-2: CurrencyManager Properties

Table 10-3: CurrencyManager Methods

Table 10-4: CurrencyManager Events

Table 10-5: Binding Properties

Table 10-6: BindingMemberInfo Properties

Table 10-7: Binding Events

Chapter 11: Using ADO.NET in Windows Forms

Table 11-1: ConvertEventArgs Properties

Chapter 12: Data-Binding in Web Forms

Table 12-1: Eval Methods

Chapter 13: Using ADO.NET in Web Forms

Table 13-1: ItemCommand Event Arguments

Table 13-2: DataGrid Column Types

Table 13-3: DataGrid Events

Table 13-4: DataGrid Paging Methods

Table 13-5: Validation Controls

Chapter 14: Using the XML Designer

Table 14-1: Microsoft Schema Extension Properties

Table 14-2: XML Schema Properties

Table 14-3: Referential Integrity Rules

Table 14-4: XML Schema Element Properties

Table 14-5: Microsoft Schema Extension Element Properties

Table 14-6: Simple Type Derivation Methods

Table 14-7: Data Type Facets

Table 14-8: Element Group Types

Table 14-9: Attribute Properties

Chapter 15: Reading and Writing XML

Table 15-1: ReadXmlSchema Methods

Table 15-2: InferXmlSchema Methods

Table 15-3: ReadXml Methods

Table 15-4: ReadXMLMode Values

Table 15-5: WriteXmlSchema Methods

Table 15-6: WriteXml Methods

Table 15-7: WriteXMLMode Values


Table 15-8: Column MappingType Values

Chapter 16: Using ADO in the .NET Framework

Table 16-1: COM Data Type Marshaling

Table 16-2: ADO Data Type Marshaling

Table 16-3: OleDbDataAdapter Fill Methods

List of Sidebars

Chapter 2: Creating Connections

Database References

Using Dynamic Properties

Connection Pooling

Chapter 8: The DataView

DataViewManagers

Chapter 9: Editing and Updating Data

Concurrency

Chapter 10: ADO.NET Data-Binding in Windows Forms

Data Sources

Chapter 12: Data-Binding in Web Forms

Data Sources


Dynamic Data Center Guidance for Hosting Providers

© 2009 Microsoft Corporation. All rights reserved. The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication and is subject to change at any time without notice to you. This document and its contents are provided AS IS without warranty of any kind, and should not be interpreted as an offer or commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Microsoft, Active Directory, Hyper-V, Silverlight, SQL Server, Windows, Windows PowerShell, and Windows Server are trademarks of the Microsoft group of companies.

All other trademarks are property of their respective owners.

The descriptions of other companies’ products in this document, if any, are provided only as a convenience to you. Any such references should not be considered an endorsement or support by Microsoft. Microsoft cannot guarantee their accuracy, and the products may change over time. Also, the descriptions are intended as brief highlights to aid understanding, rather than as thorough coverage. For authoritative descriptions of these products, please consult their respective manufacturers.

Microsoft will not knowingly provide advice that conflicts with local, regional, or international laws; however, it is your responsibility to confirm that your implementation of any advice is in accordance with all applicable laws.

 

 

Delivering Business-Critical Solutions with SharePoint 2010



Disclaimer

The information contained in this document represents the current plans of Microsoft Corporation on the issues presented at the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. Schedules and features contained in this document are subject to change.

Unless otherwise noted, the companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the expressed written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give any license or rights to these patents, trademarks, copyrights, or other intellectual property.

© 2011 Microsoft Corporation. All rights reserved.

Microsoft, the Microsoft logo, Access, Excel, Outlook, SharePoint, Visio, and other product names are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.

 

 

Table of Contents

Who Should Read This White Paper?    3

Challenge: Siloed Information and Processes Limit Business Performance and Consume IT Resources    4

Surface LOB Data in SharePoint 2010    5

Business Connectivity Services    5

Implement, Extend, and Improve Business Processes by Building Solutions on SharePoint 2010    6

Business Solutions in SharePoint 2010    6

IT-Managed Solutions    6

Advanced User and Information Worker Solutions    7

Increase Productivity with Enterprise Search and Business Intelligence    9

Use the SharePoint Platform to Speed ROI and Decrease Business Risk    10

Conclusion    11

Resources    12

 

Who Should Read This White Paper?

Business users at every level of your organization should have access to important data and be connected to processes that enable them to support operations. However, critical business data is often stored in disparate systems, and ad hoc processes block efficiency. Users call on IT to help them reach and reconcile this business data, which can divert IT resources away from strategic work that positions IT as a business partner rather than a cost center.

This white paper is intended for Chief Information Officers, Chief Technical Officers, infrastructure managers and information system managers who want to deliver business-critical solutions while driving business value and reducing business risk. These solutions can be implemented across the organization and empower business users to get more value out of line-of-business (LOB) systems, thus extending the reach of important business data and improving business processes. This can be accomplished by connecting LOB applications to Microsoft® SharePoint® 2010 to organize and facilitate broad access to previously siloed information.

This white paper explains how to:

  • Increase access to critical backend business data by surfacing it in SharePoint 2010.
  • Enhance the effectiveness of business processes by building on SharePoint 2010 and using its platform capabilities.
  • Deliver fast return on investment (ROI) by lowering business risk, decreasing training costs and enhancing compliance.

Challenge: Siloed Information and Processes Limit Business Performance and Consume IT Resources

Most business-based IT applications are built for and deployed within vertical business functions, such as product lifecycle management (PLM) for design and engineering, customer relationship management (CRM) for sales and service, and enterprise resource planning (ERP) for finance and human resources.

Your organization is familiar with—or has even deployed—a number of these solutions, such as PeopleSoft, JD Edwards, Oracle Financials, or SAP. These applications support structured decision making and processes within the business function; nevertheless, silos exist between the applications and organizational functions and processes. This environment can make it difficult for business users to drive collaborative decision making that crosses business units and spans multiple business functions.

Users who need access to the data that resides in these systems make frequent requests to IT to extract the information and organize it. These requests are compounded by users’ demands for anywhere, anytime access on a variety of devices, like laptops, smartphones, and tablets. This leaves IT to solve configuration, security, and compliance concerns, while stretching an already slim IT budget.

These issues become less critical when data is surfaced through a unified platform. SharePoint 2010 can connect a broad range of users to business data that currently resides in siloed systems and is accessible only to specialized users and IT professionals. It also can empower every user—from information workers to power users and professional developers—to build solutions based on this business data, solutions that streamline processes and result in better, faster decisions by the organization.

Surface LOB Data in SharePoint 2010

SharePoint 2010 offers a variety of ways for your organization to surface information buried in siloed LOB systems, so you can quickly begin to improve processes by building solutions that span departmental and even cross-organizational boundaries. For the purposes of this paper, we highlight Business Connectivity Services (BCS), the SharePoint 2010 technology that provides the core line-of-business connectivity capabilities. However, there are other options for connecting your LOB applications to SharePoint, including web services built with Windows Communication Foundation (WCF).

Business Connectivity Services

Business Connectivity Services in SharePoint 2010 enables connectivity to external data sources, such as databases and LOB systems (Figure 1). When your LOB systems are connected to SharePoint, users can interact with business data from within the familiar Microsoft Office and SharePoint user interface, so they do not need to learn many complex applications to get their jobs done. This also enables IT to use a unified platform across a range of LOB systems to simplify administration and support.


Figure 1: BCS architecture diagram

Specifically, Business Connectivity Services can help your organization:

  • Bring external data into SharePoint and Office, helping users to read, edit, and write LOB data in familiar tools such as Microsoft Outlook®, Excel®, and Word. Example: An organization brings inventory data from its ERP system into SharePoint to give sales the ability to update the information in real time based on order changes.
  • Address users’ collaboration needs by extending SharePoint capabilities and the Office user experience to include business data and processes. Example: A manufacturing plant foreman searches SharePoint to identify his peers in other plants so that he can reach out and discuss a machinery issue.
  • Create fast, incremental user-driven solutions like workflows and templates that position IT as a strategic asset within the organization. Example: The service department leverages a pre-built workflow for problem resolution that reduces response time and increases customer satisfaction.
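For IT professionals who script their SharePoint configuration, external content types can also be deployed with Windows PowerShell. The following sketch is illustrative only: the BDC model file (CustomerOrders.bdcm) and site URL are hypothetical placeholders. It imports the model into the Business Data Connectivity service so that the external content types it defines become available to external lists and Office clients.

  # Load the SharePoint 2010 cmdlets if the script is not run in the SharePoint 2010 Management Shell.
  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

  # Import a BDC model (.bdcm) that describes an external system, its entities, and its operations.
  # The file path and site URL below are placeholders for your own environment.
  Import-SPBusinessDataCatalogModel -Path "C:\Models\CustomerOrders.bdcm" `
      -ServiceContext "http://intranet.contoso.com" -ModelsIncluded -Force

After the import, the external content types defined in the model can be used to create external lists in the browser or in SharePoint Designer.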

Implement, Extend, and Improve Business Processes by Building Solutions on SharePoint 2010

SharePoint 2010 allows IT to focus on executing high-priority projects that deliver strategic business advantages, while maintaining a stable infrastructure. This is because SharePoint provides the right environment for IT to meet business demands by enabling business-critical processes within and across organizational boundaries. In turn, users can become more empowered, translating into increased efficiencies for IT and improved productivity for your organization.

Certainly, user empowerment is key to keeping your organization agile and productive. You can increase organizational agility by using the SharePoint platform to help users access, visualize, and consume the business-critical data currently locked in LOB applications. Plus, IT can maintain control through centralized management and security tools, such as data storage management, backup, versioning, and records management.

It is important to remember that to achieve this level of business benefit, your organization must have a deployment plan that prioritizes business needs. By focusing on business needs, you can help to ensure that SharePoint 2010 is broadly adopted across the organization, which ultimately means that the right information can be delivered to the right people at the right time.

Business Solutions in SharePoint 2010

After connecting external systems to SharePoint 2010, you can begin to build solutions to improve the processes that are crucial to organizational success. SharePoint 2010 offers solution models to help your organization develop, improve, and extend business processes:

  • IT-managed solutions
  • Advanced user and information worker solutions

IT-Managed Solutions

When implementing or extending business-critical processes based on business data from operational systems, IT most often leads the way, ensuring that the right methodologies, security models, and governance approach are applied. Nevertheless, these IT-led projects tend to be both complex and costly, making it hard for organizations to decide to take on these challenges.

With SharePoint 2010, IT does not need to develop web applications from the ground up. Instead, it can use platform services to quickly create robust custom solutions. The typical application development lifecycle is a time-consuming and costly endeavor: each application needs its own security model, workflow engine, repository for storing information, and more. SharePoint provides all of these capabilities out of the box. By building applications on top of SharePoint, your organization can get started faster and deliver value to the business more quickly.

The SharePoint partner ecosystem gives organizations access to an extensive range of solutions from independent software vendors (ISVs) as an alternative to writing custom code, thereby providing sophisticated solutions in a prebuilt package. In addition to the partner ecosystem, a community of systems integrators (SIs) can help users to plan and deploy these types of solutions.

Advanced User and Information Worker Solutions

Advanced users have a deeper understanding of the tools and technologies that IT professionals often use to develop and deploy solutions (for example, Microsoft SharePoint Designer, Microsoft Access® Services, and Microsoft Visio® Services). SharePoint integrates with design tools to give advanced users greater flexibility in building solutions, while still promoting quality assurance and allowing IT professionals to maintain control over finished products. Examples of advanced user solutions include creating a business connection from SharePoint to an LOB system and creating custom workflows to automate tedious business processes (Figure 2). To support the creation of these types of solutions, IT needs to identify advanced users and train them on self-service best practices, and establish governance to define how advanced user solutions are built and deployed.


Figure 2: Example of a workflow for an advanced user solution

 

Scenario: Advanced User Solution

Frank Zhang, a customer service representative for an engineer-to-order company, uses a SharePoint solution for order entry and change management. Business Connectivity Services provides integration with customer data, product catalogs, engineering specifications, on-hand stock availability, and pricing and discount information dynamically linked to various LOB sources, including master data repositories, engineering, and ERP.

Before implementing this process through SharePoint, Frank needed to access and assemble data from various systems to complete the order placement process. This often required exchanging multiple emails and Excel workbooks among various departments, such as Engineering, Manufacturing, and Finance. With simplified and integrated access to all needed data in his SharePoint-based workplace, Frank now typically can manage order completion on his own, with fewer errors and delays. When needed, automated workflows take care of the process across various cross-functional teams, with shared information maintained in a single workspace (such as through Excel Services), helping to eliminate the need for inefficient ad hoc communications.

Information workers can take advantage of out-of-the-box capabilities in SharePoint 2010 that increase productivity. For instance, they can leverage workflows and customizable views of their critical business data created by IT or advanced users. SharePoint 2010 also can create forms automatically based on templates within SharePoint and other Microsoft Office applications (Figure 3).


Figure 3: Example of a form for an information worker solution

Further, information workers can create lists and document libraries that allow them to collect information, collaborate on documents, and share information easily. IT is tasked with publishing templates for the most common solutions and with teaching users best practices for creating lists and collecting information. For additional information about the advantages of user-created solutions, refer to the resources at the end of this paper.

Scenario: Information Worker Solution

Nina Vietzen, a customer service representative, used Microsoft SharePoint and Word to create her own solution for tracking customer inquiries and associated documentation. Custom templates in Word allowed Nina to start with a standard document, while a centralized SharePoint framework provided document versioning, document metadata, and backup and restore.

Before Nina implemented this process through SharePoint, users had to create new Word documents for each customer inquiry and store them in a file share without version control. Now, versioning, metadata and search, and central backups provide Nina and her colleagues with a time-saving solution that keeps documents safe.

Increase Productivity with Enterprise Search and Business Intelligence

SharePoint 2010 includes multiple capabilities with built-in security and manageability that IT can deploy to help improve business user productivity based on accessing and visualizing the business data. Two of these key capabilities are Search and Insights.

SharePoint Search enables cross-platform search to help business users consume and manage important business data. SharePoint 2010 Search provides an interactive, visual search experience. Visual cues help people find information quickly, while refiners let them drill into the results and discover insights.

Example: An account manager receives a customer request to adjust a custom order. Before responding to her customer, she must determine whether any of her organization’s warehouses have the items in stock to amend the order. Her ERP system is connected to SharePoint 2010, so she opens up her team portal, searches for the part, and finds that it is available in two warehouses.

SharePoint 2010 Insights provides interactive dashboards and scorecards that can help people to define and measure success: key metrics can be matched to specific strategies and then shared, tracked, and discussed. Users can create meaningful visualizations that convey the right information the first time, aggregating content from multiple sources and displaying it in a web browser in an understandable and collaborative environment. Moreover, rich interactivity allows users to analyze up-to-the-minute information and work with data quickly and easily to identify key opportunities and trends. Figure 4 shows a user’s dashboard in SharePoint 2010 Insights.


Figure 4: Dashboard in SharePoint 2010 Insights

Use the SharePoint Platform to Speed ROI and Decrease Business Risk

Using SharePoint 2010 to surface business data from your LOB systems and build solutions can increase the ROI of your legacy systems, speed solutions’ time-to-market, and empower users to help themselves—all of which frees IT resources to focus on more strategic initiatives.

Business units can reduce training costs because SharePoint 2010 offers the familiar Microsoft Office experience that enables people to quickly and easily adopt SharePoint (as opposed to training users on a variety of more complex LOB applications).

SharePoint can speed time-to-market of otherwise time-consuming and resource-intensive solutions to streamline business-critical processes. In addition, powerful Search and BI capabilities provide self-service functionality, which boosts productivity, reduces costs, and increases user satisfaction.

Finally, SharePoint can help to reduce your organization’s overall risk by increasing the visibility of business-critical data. The ability to access accurate, real-time business data has a major impact on your organization. In his 2009 white paper, “Business Intelligence: A Guide for Midsize Companies,” MAS Strategies’ Founder and Principal Analyst Michael Schiff said, “All employees have the responsibility to make the best decisions possible, based upon the data available to them at that time. If their ability to analyze this data and transform it into useful information is improved, the overall quality of their decisions can be improved as well.”

When you surface all relevant data to the people who need it when they need it, you enable them to make better decisions faster. This can reduce mistakes that result from misinformation and decrease your organization’s business risk.

SharePoint also reduces risk by enhancing security, privacy, and compliance through a flexible authentication model. This authentication model can help your organization to maximize its SharePoint 2010 deployment while maintaining highly secure control over corporate assets to increase compliance.

Conclusion

This white paper has discussed extending the reach of your business-critical data across departmental and organizational boundaries to improve business-critical solutions. It also has shown the benefits of surfacing and visualizing this data in SharePoint 2010:

  • Surface LOB Data in SharePoint 2010: Identify business-critical data and the users who need it, and extend the reach of your data by connecting SharePoint to your LOB applications. SharePoint 2010 provides many ways to achieve this state, faster and easier than in previous versions and without complex, expensive custom development.
  • Implement, Extend, and Improve Business Processes: Find and visualize the information you need in SharePoint 2010. Take advantage of out-of-the-box platform capabilities like collaboration, social computing, and content management to enable the right people to access the right information at the right time. IT can design and administer solutions quickly so that users can build their own templates and workflows to connect business data to their processes.
  • Gain Additional Productivity with SharePoint 2010: SharePoint provides several capabilities, including Search and Insights, that can help organizations to improve workforce productivity and visualize their business data in real-time. These capabilities have built-in security and manageability to help ensure safe and easy use.
  • Speed ROI and Decrease Risk: Connecting SharePoint 2010 to your LOB applications can increase the ROI of these systems and decrease business risk by surfacing important data across the organization to users who need it, when they need it. Plus, out-of-the-box capabilities in SharePoint can speed the time-to-market of previously labor-intensive solutions. SharePoint also can reduce IT administrator and end user training costs by enabling users to access information through a familiar interface.

 

Resources

Learn more about the SharePoint capabilities outlined in this white paper by visiting the following:

SharePoint Deployment on Windows Azure Virtual Machines

DISCLAIMER

This document is provided “as-is.” Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. 

Some examples are for illustration only and are fictitious. No real association is intended or inferred.

This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.

© 2012 Microsoft Corporation. All rights reserved.

 

Table of Contents

 

Executive Summary    4

Who Should Read This Paper?    4

Why Read This Paper?    4

Shift to Cloud Computing    5

Delivery Models for Cloud Services    6

Windows Azure Virtual Machines    7

SharePoint on Windows Azure Virtual Machines    7

Shift in IT Focus    8

Faster Deployment    8

Scalability    8

Metered Usage    8

Flexibility    9

Provisioning Process    9

Deploying SharePoint 2010 on Windows Azure    10

Creating and Uploading a Virtual Hard Disk    15

Usage Scenarios    16

Scenario 1: Simple SharePoint Development and Test Environment    16

Scenario 2: Public-facing SharePoint Farm with Customization    18

Scenario 3: Scaled-out Farm for Additional BI Services    20

Scenario 4: Completely Customized SharePoint-based Website    22

Conclusion    25

Additional Resources    25

 

 

Executive Summary

Microsoft SharePoint Server 2010 provides rich deployment flexibility, which can help organizations determine the right deployment scenarios to align with their business needs and objectives. Hosted and managed in the cloud, the Windows Azure Virtual Machines offering provides complete, reliable, and available infrastructure to support various on-demand application and database workloads, such as Microsoft SQL Server and SharePoint deployments.

While Windows Azure Virtual Machines support multiple workloads, this paper focuses on SharePoint deployments. Windows Azure Virtual Machines enable organizations to create and manage their SharePoint infrastructure quickly, provisioning hosts on demand and accessing them from nearly anywhere. The offering allows full control over and management of the processors, memory, and other resources of SharePoint virtual machines (VMs).

Windows Azure Virtual Machines mitigate the need for hardware, so organizations can turn attention from handling high upfront cost and complexity to building and managing infrastructure at scale. This means that they can innovate, experiment, and iterate in hours—as opposed to days and weeks with traditional deployments.

Who Should Read This Paper?

This paper is intended for IT professionals. Furthermore, technical decision makers, such as architects and system administrators, can use this information and the provided scenarios to plan and design a virtualized SharePoint infrastructure on Windows Azure.

Why Read This Paper?

This paper explains how organizations can set up and deploy SharePoint within Windows Azure Virtual Machines. It also discusses why this type of deployment can be beneficial to organizations of many sizes.

 

Shift to Cloud Computing

According to Gartner, cloud computing is defined as a “style of computing where massively scalable IT-enabled capabilities are delivered ‘as a service’ to external customers using Internet technologies.” The significant words in this definition are scalable, service, and Internet. In short, cloud computing can be defined as IT services that are deployed and delivered over the Internet and are scalable on demand.

Undeniably, cloud computing represents a major shift happening in IT today. Yesterday, the conversation was about consolidation and cost. Today, it’s about the new class of benefits that cloud computing can deliver. It’s all about transforming the way IT serves organizations by harnessing a new breed of power. Cloud computing is fundamentally changing the world of IT, impacting every role—from service providers and system architects to developers and end users.

Research shows that agility, focus, and economics are three top drivers for cloud adoption:

  • Agility: Cloud computing can speed an organization’s ability to capitalize on new opportunities and respond to changes in business demands.
  • Focus: Cloud computing enables IT departments to cut infrastructure costs dramatically. Infrastructure is abstracted and resources are pooled, so IT runs more like a utility than a collection of complicated services and systems. Plus, IT now can be transitioned to more innovative and strategic roles.
  • Economics: Cloud computing reduces the cost of delivering IT and increases the utilization and efficiency of the data center. Delivery costs go down because with cloud computing, applications and resources become self-service, and use of those resources becomes measurable in new and precise ways. Hardware utilization also increases because infrastructure resources (storage, compute, and network) are now pooled and abstracted.

Delivery Models for Cloud Services

In simple terms, cloud computing is the abstraction of IT services. These services can range from basic infrastructure to complete applications. End users request and consume abstracted services without the need to manage (or even completely know about) what constitutes those services. Today, the industry recognizes three delivery models for cloud services, each providing a distinct trade-off between control/flexibility and total cost:

  • Infrastructure as a Service (IaaS): Virtualized infrastructure that hosts virtual machines, typically running existing applications.
  • Platform as a Service (PaaS): Cloud application infrastructure that provides an on-demand application-hosting environment.
  • Software as a Service (SaaS): Cloud services model where an application is delivered over the Internet and customers pay on a per-use basis (for example, Microsoft Office 365 or Microsoft CRM Online).

Figure 1 depicts the cloud services taxonomy and how it maps to the components in an IT infrastructure. With an on-premises model, the customer is responsible for managing the entire stack—ranging from network connectivity to applications. With IaaS, the lower levels of the stack are managed by a vendor, while the customer is responsible for managing the operating system through applications. With PaaS, a platform vendor provides and manages everything from network connectivity through runtime. The customer only needs to manage applications and data. (The Windows Azure offering best fits in this model.) Finally, with SaaS, a vendor provides the applications and abstracts all services from all underlying components.

Figure 1: Cloud services taxonomy


Windows Azure Virtual Machines

Windows Azure Virtual Machines introduce functionality that allows full control and management of VMs, along with extensive virtual networking. This offering can provide organizations with robust benefits, such as:

  • Management: Centrally manage VMs in the cloud with full control to configure and maintain the infrastructure.
  • Application mobility: Move virtual hard disks (VHDs) back and forth between on-premises and cloud-based environments. There is no need to rebuild applications to run in the cloud.
  • Access to Microsoft server applications: Run the same on-premises applications and infrastructure in the cloud, including Microsoft SQL Server, SharePoint Server, Windows Server, and Active Directory.

Windows Azure Virtual Machines is an easy, open and flexible, and powerful platform that allows organizations to deploy and run Windows Server and Linux VMs in minutes:

  • Easy: With Windows Azure Virtual Machines, it is easy and simple to build, migrate, deploy, and manage VMs in the cloud. Organizations can migrate workloads to Windows Azure without having to change existing code, or they can set up new VMs in Windows Azure in only a few clicks. The offering also provides assistance for new cloud application development by integrating the IaaS and PaaS functionalities of Windows Azure.
  • Open and flexible: Windows Azure is an open platform that gives organizations flexibility. They can start from a prebuilt image in the image library, or they can create and use customized and on-premises VHDs and upload them to the image library. Community and commercial versions of Linux also are available.
  • Powerful: Windows Azure is an enterprise-ready cloud platform for running applications such as SQL Server, SharePoint Server, or Active Directory in the cloud. Organizations can create hybrid on-premises and cloud solutions with VPN connectivity between the Windows Azure data center and their own networks.

SharePoint on Windows Azure Virtual Machines

SharePoint 2010 flexibly supports most of the workloads in a Windows Azure Virtual Machines deployment. Windows Azure Virtual Machines are an optimal fit for FIS (SharePoint Server for Internet Sites) and development scenarios. Likewise, core SharePoint workloads are also supported. If an organization wants to manage and control its own SharePoint 2010 implementation while capitalizing on options for virtualization in the cloud, Windows Azure Virtual Machines are ideal for deployment.

The Windows Azure Virtual Machines offering is hosted and managed in the cloud. It provides deployment flexibility and reduces cost by mitigating capital expenditures due to hardware procurement. With increased infrastructure agility, organizations can deploy SharePoint Server in hours—as opposed to days or weeks. Windows Azure Virtual Machines also enables organizations to deploy SharePoint workloads in the cloud using a “pay-as-you-go” model.
As SharePoint workloads grow, an organization can rapidly expand infrastructure; then, when computing needs decline, it can return the resources that are no longer needed—thereby paying only for what is used.

Shift in IT Focus

Many organizations contract out the common components of their IT infrastructure and management, such as hardware, operating systems, security, data storage, and backup—while maintaining control of mission-critical applications, such as SharePoint Server. By delegating all non-mission-critical service layers of their IT platforms to a virtual provider, organizations can shift their IT focus to core, mission-critical SharePoint services and deliver business value with SharePoint projects, instead of spending more time on setting up infrastructure.

Faster Deployment

Supporting and deploying a large SharePoint infrastructure can hamper IT’s ability to move rapidly to support business requirements. The time that is required to build, test, and prepare SharePoint servers and farms and deploy them into a production environment can take weeks or even months, depending on the processes and constraints of the organization. Windows Azure Virtual Machines allow organizations to quickly deploy their SharePoint workloads without capital expenditures for hardware. In this way, organizations can capitalize on infrastructure agility to deploy in hours instead of days or weeks.

Scalability

Without the need to deploy, test, and prepare physical SharePoint servers and farms, organizations can expand and contract compute capacity on demand, at a moment’s notice. As SharePoint workload requirements grow, an organization can rapidly expand its infrastructure in the cloud. Likewise, when computing needs decrease, the organization can diminish resources, paying only for what it uses. Windows Azure Virtual Machines reduces upfront expenses and long-term commitments, enabling organizations to build and manage SharePoint infrastructures at scale. Again, this means that these organizations can innovate, experiment, and iterate in hours—as opposed to days and weeks with traditional deployments.

Metered Usage

Windows Azure Virtual Machines provide computing power, memory, and storage for SharePoint scenarios, with pricing typically based on resource consumption. Organizations pay only for what they use, and the service provides all of the capacity needed to run the SharePoint infrastructure. For more information on pricing and billing, go to Windows Azure Pricing Details. Note that there are nominal charges for storage and for data transferred out of the Windows Azure cloud to an on-premises network; Windows Azure does not charge for uploading data.

Flexibility

Windows Azure Virtual Machines provide developers with the flexibility to pick their desired language or runtime environment, with official support for .NET, Node.js, Java, and PHP. Developers also can choose their tools, with support for Microsoft Visual Studio, WebMatrix, Eclipse, and text editors. Further, Microsoft delivers a low-cost, low-risk path to the cloud and offers cost-effective, easy provisioning and deployment for cloud reporting needs—providing access to business intelligence (BI) across devices and locations. Finally, with the Windows Azure offering, users not only can move VHDs to the cloud, but also can copy a VHD back down and run it locally or through another cloud provider, as long as they have the appropriate license.

Provisioning Process

This subsection discusses the basic provisioning process in Windows Azure. The image library in Windows Azure provides the list of available preconfigured VMs. Users can publish SharePoint Server, SQL Server, Windows Server, and other ISO/VHDs to the image library. To simplify the creation of VMs, base images are created and published to the library. Authorized users can use these images to generate the desired VM. For more information, go to Create a Virtual Machine Running Windows Server 2008 R2 on the Windows Azure site. Figure 2 shows the basic steps for creating a VM using the Windows Azure Management Portal:

Figure 2: Overview of steps for creating a VM
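The portal steps summarized in Figure 2 can also be scripted with the Windows Azure PowerShell module, which is convenient when many VMs must be created repeatedly. The sketch below is a minimal illustration; the subscription, storage account, service, VM, and image names are placeholders, and exact cmdlet parameters can vary between versions of the module.

  # Assumes the Windows Azure PowerShell module is installed and the subscription's
  # publish settings have already been imported (Import-AzurePublishSettingsFile).
  Import-Module Azure

  # Point the module at the subscription and the storage account that will hold the VHDs.
  Set-AzureSubscription -SubscriptionName "Contoso-SharePoint" -CurrentStorageAccount "contosostorage"
  Select-AzureSubscription -SubscriptionName "Contoso-SharePoint"

  # Create a standalone Windows Server VM from a platform image in the image library.
  New-AzureQuickVM -Windows `
      -ServiceName "contoso-sp" `
      -Name "SP2010-APP1" `
      -ImageName "<Windows Server 2008 R2 SP1 image name from Get-AzureVMImage>" `
      -Password "<strong password>" `
      -InstanceSize "Large" `
      -Location "West US"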


Users also can upload a sysprepped image on the Windows Azure Management Portal. For more information, go to Creating and Uploading a Virtual Hard Disk. Figure 3 shows the basic steps for uploading an image to create a VM:

Figure 3: Overview of steps for uploading an image

Deploying SharePoint 2010 on Windows Azure

You can deploy SharePoint 2010 on Windows Azure by following these steps:

  1. Log on to the Windows Azure (Preview) Management Portal through your account.
  2. Create a VM with a base operating system: On the Windows Azure Management Portal, click +NEW, then click VIRTUAL MACHINE, and then click FROM GALLERY.

  3. The VM OS Selection dialog box appears. Click Platform Images, and then select the Windows Server 2008 R2 SP1 platform image.

 

  4. The VM Configuration dialog box appears. Provide the following information:
  • Enter a VIRTUAL MACHINE NAME.
    • This machine name should be globally unique.
  • Leave the NEW USER NAME box as Administrator.
  • In the NEW PASSWORD box, type a strong password.
  • In the CONFIRM PASSWORD box, retype the password.
  • Select the appropriate SIZE.
    • For a production environment (SharePoint application server and database), the Large size (4 cores, 7 GB of memory) is recommended.

  5. The VM Mode dialog box appears. Provide the following information:
  • Select Standalone Virtual Machine.
  • In the DNS NAME box, provide the first portion of a DNS name of your choice.
    • This portion will complete a name in the format MyService1.cloudapp.net.
  • In the STORAGE ACCOUNT box, choose one of the following:
    • Select a storage account where the VHD file is stored.
    • Choose to have a storage account automatically created.
      • Only one storage account per region is automatically created. All other VMs created with this setting are located in this storage account.
      • You are limited to 20 storage accounts.
      • For more information, go to Create a Storage Account in Windows Azure.

 

  • In the REGION/AFFINITY GROUP/VIRTUAL NETWORK box, select the region where the virtual image will be hosted.

  6. The VM Options dialog box appears. Provide the following information:
  • In the AVAILABILITY SET box, select (none).
  • Read and accept the legal terms.
  • Click the checkmark to create the VM.

 

  7. The VM Instances page appears. Verify that your VM was created successfully.

  8. Complete VM setup:
  • Open the VM using Remote Desktop.
  • On the Windows Azure Management Portal, select your VM, and then select the DASHBOARD page.
  • Click Connect.

  9. Build the SQL Server VM using any of the following options:
  • Create a SQL Server 2012 VM by following steps 1 to 7 above—except in step 3, use the SQL Server 2012 image instead of the Windows Server 2008 R2 SP1 image. For more information, go to Provisioning a SQL Server Virtual Machine on Windows Azure.
    • When you choose this option, the provisioning process keeps a copy of SQL Server 2012 setup files in the C:\SQLServer_11.0_Full directory path so that you can customize the installation. For example, you can convert the evaluation installation of SQL Server 2012 to a licensed version by using your license key.
  • Use the SQL Server System Preparation (SysPrep) tool to install SQL Server on the VM with base operating system (as shown above in steps 1 to 7). For more information, go to Install SQL Server 2012 Using SysPrep.
  • Use the Command Prompt to install SQL Server. For more information, go to Install SQL Server 2012 from the Command Prompt.
  • Use supported SQL Server media and your license key to install SQL Server on the VM with base operating system (as shown above in steps 1 to 7).
  10. Build the SharePoint farm using the following substeps:
  • Substep 1: Configure the Windows Azure subscription using script files.
  • Substep 2: Provision SharePoint servers by creating another VM with the base operating system (as shown above in steps 1 to 7), and then install SharePoint Server 2010 on that VM (for example, from installation media or from a prebuilt image that includes SharePoint).
  • Substep 3: Configure SharePoint. After each SharePoint VM is in the ready state, configure SharePoint Server on each server by using one of the following options:
    • Configure SharePoint from the GUI.
    • Configure SharePoint using Windows PowerShell. For more information, go to Install SharePoint Server 2010 by Using Windows PowerShell.
      • You also can use the CodePlex Project’s AutoSPInstaller, which consists of Windows PowerShell scripts, an XML input file, and a standard Microsoft Windows batch file. AutoSPInstaller provides a framework for a SharePoint 2010 installation script based on Windows PowerShell. For more information, go to CodePlex: AutoSPInstaller.

  11. After the configuration script completes, connect to the VM from the VM Dashboard.
  12. Verify the SharePoint configuration: Log on to the SharePoint server, and then use Central Administration to verify the configuration.
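As an illustration of the Windows PowerShell option for configuring SharePoint Server mentioned above, the following sketch creates a new farm on the first SharePoint VM. The database names, SQL Server VM name, account, and passphrase are placeholders, and AutoSPInstaller or the SharePoint Products Configuration Wizard can be used instead.

  # Run on the first SharePoint VM, in the SharePoint 2010 Management Shell or after loading the snap-in.
  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

  # Placeholder values; replace with your SQL Server VM, database names, and farm account.
  $passphrase = ConvertTo-SecureString "<farm passphrase>" -AsPlainText -Force
  $farmAccount = Get-Credential "CONTOSO\spfarm"

  # Create the farm configuration database on the SQL Server VM.
  New-SPConfigurationDatabase -DatabaseName "SP2010_Config" -DatabaseServer "SQL2012-VM" `
      -AdministrationContentDatabaseName "SP2010_Admin_Content" `
      -Passphrase $passphrase -FarmCredentials $farmAccount

  # Provision the remaining farm components and Central Administration.
  Install-SPHelpCollection -All
  Initialize-SPResourceSecurity
  Install-SPService
  Install-SPFeature -AllExistingFeatures
  New-SPCentralAdministration -Port 2010 -WindowsAuthProvider "NTLM"
  Install-SPApplicationContent

Additional SharePoint VMs join the same farm with Connect-SPConfigurationDatabase, followed by the same provisioning cmdlets.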

Creating and Uploading a Virtual Hard Disk

You also can create your own images and upload them to Windows Azure as a VHD file. To create and upload a VHD file on Windows Azure, follow these steps:

  1. Create the Hyper-V-enabled image: Use Hyper-V Manager to create the Hyper-V-enabled VHD. For more information, go to Create Virtual Hard Disks.
  2. Create a storage account in Windows Azure: A storage account in Windows Azure is required to upload a VHD file that can be used for creating a VM. This account can be created using the Windows Azure Management Portal. For more information, go to Create a Storage Account in Windows Azure.
  3. Prepare the image to be uploaded: Before the image can be uploaded to Windows Azure, it must be generalized using the SysPrep command. For more information, go to How to Use SysPrep: An Introduction.
  4. Upload the image to Windows Azure: To upload an image contained in a VHD file, you must create and install a management certificate. Obtain the thumbprint of the certificate and the subscription ID. Set the connection and upload the VHD file using the CSUpload command-line tool. For more information, go to Upload the Image to Windows Azure.
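As an alternative to the CSUpload command-line tool, the upload and image registration can also be scripted with the Windows Azure PowerShell module. The following sketch is illustrative; the storage account URL, local path, and image name are placeholders.

  Import-Module Azure

  # Copy the generalized (sysprepped) VHD into blob storage in your storage account.
  Add-AzureVhd -LocalFilePath "C:\VHDs\SP2010-Base.vhd" `
      -Destination "https://contosostorage.blob.core.windows.net/vhds/SP2010-Base.vhd"

  # Register the uploaded VHD as a reusable operating system image in the image library.
  Add-AzureVMImage -ImageName "SP2010-Base" `
      -MediaLocation "https://contosostorage.blob.core.windows.net/vhds/SP2010-Base.vhd" `
      -OS Windows -Label "SharePoint 2010 base image"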

 

Usage Scenarios

This section discusses some leading customer scenarios for SharePoint deployments using Windows Azure Virtual Machines. Each scenario is divided into two parts—a brief description about the scenario followed by steps for getting started.

Scenario 1: Simple SharePoint Development and Test Environment

Description

Organizations are looking for more agile ways to create SharePoint applications and set up SharePoint environments for onshore/offshore development and testing. Fundamentally, they want to shorten the time required to set up SharePoint application development projects, and decrease cost by increasing the use of their test environments. For example, an organization might want to perform on-demand load testing on SharePoint Server and execute user acceptance testing (UAT) with more concurrent users in different geographic locations. Similarly, integrating onshore/offshore teams is an increasingly important business need for many of today’s organizations.

This scenario explains how organizations can use preconfigured SharePoint farms for development and test workloads. A SharePoint deployment topology looks and feels exactly as it would in an on-premises virtualized deployment. Existing IT skills translate 1:1 to a Windows Azure Virtual Machines deployment, with the major benefit being an almost complete cost shift from capital expenditures to operational expenditures—no upfront physical server purchase is required. Organizations can eliminate the capital cost for server hardware and achieve flexibility by greatly reducing the provisioning time required to create, set up, or extend a SharePoint farm for a testing and development environment. IT can dynamically add and remove capacity to support the changing needs of testing and development. Plus, IT can focus more on delivering business value with SharePoint projects and less on managing infrastructure.

To fully utilize load-testing machines, organizations can configure SharePoint virtualized development and test machines on Windows Azure with operating system support for Windows Server 2008 R2. This enables development teams to create and test applications and easily migrate to on-premises or cloud production environments without code changes. The same frameworks and toolsets can be used on premises and in the cloud, allowing distributed team access to the same environment. Users also can access on-premises data and applications by establishing a direct VPN connection.

Getting Started

Figure 4 shows a SharePoint development and testing environment in a Windows Azure VM. To build this deployment, start by using the same on-premises SharePoint development and testing environment used to develop applications. Then, upload and deploy the applications to the Windows Azure VM for testing and development. If your organization decides to move the application back on-premises, it can do so without having to modify the application.

 

Figure 4: SharePoint development and testing environment in Windows Azure Virtual Machines


Setting Up the Scenario Environment

To implement a SharePoint development and testing environment on Windows Azure, follow these steps:

  1. Provision: First, provision a VPN connection between on-premises and Windows Azure using Windows Azure Virtual Network. (Because Active Directory is not being used here, a VPN tunnel is needed.) For more information, go to Windows Azure Virtual Network (Design Considerations and Secure Connection Scenarios). Then, use the Management Portal to provision a new VM using a stock image from the image library.
  • You can upload the on-premises SharePoint development and testing VMs to your Windows Azure storage account and reference those VMs through the image library for building the required environment.
  • You can use the SQL Server 2012 image instead of the Windows Server 2008 R2 SP1 image. For more information, go to Provisioning a SQL Server Virtual Machine on Windows Azure.
  2. Install: Install SharePoint Server, Visual Studio, and SQL Server on the VMs using a Remote Desktop connection.
  3. Develop deployment packages and scripts for applications and databases: If you plan to use an available VM from the image library, the desired on-premises applications and databases can be deployed on Windows Azure Virtual Machines:
  • Create deployment packages for the existing on-premises applications and databases using SQL Server Data Tools and Visual Studio.
  • Use these packages to deploy the applications and databases on Windows Azure Virtual Machines.
  4. Deploy SharePoint applications and databases:
  • Configure security on the Management Portal endpoint and set an inbound port in the VM’s Windows Firewall.
  • Deploy SharePoint applications and databases to Windows Azure Virtual Machines using the deployment packages and scripts created in step 3.
  • Test deployed applications and databases.
  5. Manage VMs:
  • Monitor the VMs using the Management Portal.
  • Monitor the applications using Visual Studio and SQL Server Management Studio.
  • You also can monitor and manage the VMs using on-premises management software, like Microsoft System Center – Operations Manager.
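The endpoint and Windows Firewall settings called out in the deployment steps above can also be scripted. The following sketch is a minimal illustration with placeholder service, VM, and port values; the endpoint is added from a management workstation, while the firewall rule is run inside the VM over a Remote Desktop session.

  # From the management workstation: open TCP port 80 on the cloud service for the test web application.
  Import-Module Azure
  Get-AzureVM -ServiceName "contoso-sp-dev" -Name "SP2010-DEV1" |
      Add-AzureEndpoint -Name "SPWeb" -Protocol tcp -LocalPort 80 -PublicPort 80 |
      Update-AzureVM

  # Inside the VM (Windows Server 2008 R2): allow the same port through Windows Firewall.
  netsh advfirewall firewall add rule name="SharePoint Web" dir=in action=allow protocol=TCP localport=80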

Scenario 2: Public-facing SharePoint Farm with Customization

Description

Organizations want to create an Internet presence that is hosted in the cloud and is easily scalable based on need and demand. They also want to create partner extranet websites for collaboration and implement an easy process for distributed authoring and approval of website content. Finally, to handle increasing loads, these organizations want to provide capacity on demand to their websites.

In this scenario, SharePoint Server is used as the basis for hosting a public-facing website. It enables organizations to rapidly deploy, customize, and host their business websites on a secure, scalable cloud infrastructure. With SharePoint public-facing websites on Windows Azure, organizations can scale as traffic grows and pay only for what they use. Common tools, similar to those used on premises, can be used for content authoring, workflow, and approval with SharePoint on Windows Azure.

Further, using Windows Azure Virtual Machines, organizations can easily configure staging and production environments running on VMs. SharePoint public-facing VMs created in Windows Azure can be backed up to virtual storage. In addition, for disaster recovery purposes, the Continuous Geo-Replication feature allows organizations to automatically back up VMs operating in one data center to another data center miles away. (For more information on geo-replication, go to Introducing Geo-replication for Windows Azure Storage).

VMs in the Windows Azure infrastructure are validated and supported for working with other Microsoft products, such as SQL Server and SharePoint Server. Windows Azure and SharePoint Server are better together: both are part of the Microsoft family and are thoroughly integrated, supported, and tested together to provide an optimal experience. Together, they offer a single point of support for both the SharePoint application and the Windows Azure infrastructure.

Getting Started

In this scenario, more front-end web servers for SharePoint Server must be added to support extra traffic. These servers require enhanced security and Active Directory Domain Services domain controllers to support user authentication and authorization. Figure 5 shows the layout for this scenario.

Figure 5: Public-facing SharePoint farm with customization


Setting Up the Scenario Environment

To implement a public-facing SharePoint farm on Windows Azure, follow these steps:

  1. Deploy Active Directory: The fundamental requirements for deploying Active Directory on Windows Azure Virtual Machines are similar—but not identical—to deploying it on VMs (and, to some extent, physical machines) on-premises. For more information about the differences, as well as guidelines and other considerations, go to Guidelines for Deploying Active Directory on Windows Azure Virtual Machines. To deploy Active Directory in Windows Azure:
  2. Provision a VM: Use the Management Portal to provision a new VM from a stock image in the image library.
  3. Deploy a SharePoint farm:
  • Use the newly provisioned VM to install SharePoint and generate a reusable image. For more information about installing SharePoint Server, go to Install and Configure SharePoint Server 2010 by Using Windows PowerShell or CodePlex: AutoSPInstaller.
  • Configure the SharePoint VM to create and connect to the SharePoint farm.
  • Use the Management Portal to configure the load balancing.
    • Configure the VM endpoints, select the option to load balance traffic on an existing endpoint, and then specify the name of the load-balanced VM.
    • Add another front-end web VM to the existing SharePoint farm for extra traffic.
  4. Manage VMs:

  • Monitor the VMs using the Management Portal.
  • Monitor the SharePoint farm using Central Administration.
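The load-balancing configuration described above can also be applied from the command line. The sketch below is illustrative, with placeholder service and VM names; it places two front-end web VMs behind a single load-balanced endpoint on port 80 with an HTTP health probe.

  Import-Module Azure

  # Add both front-end web VMs to the same load-balanced set on the cloud service.
  foreach ($vmName in "SP-WFE1", "SP-WFE2") {
      Get-AzureVM -ServiceName "contoso-sp-web" -Name $vmName |
          Add-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 `
              -LBSetName "spweb" -ProbePort 80 -ProbeProtocol http -ProbePath "/" |
          Update-AzureVM
  }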

Scenario 3: Scaled-out Farm for Additional BI Services

Description

Business intelligence is essential to gaining key insights and making rapid, sound decisions. As organizations transition from an on-premises approach, they do not want to make changes to the BI environment while deploying existing BI applications to the cloud. They want to host reports from SQL Server Analysis Services (SSAS) or SQL Server Reporting Services (SSRS) in a highly durable and available environment, while keeping full control of the BI application—all without spending much time and budget on maintenance.

This scenario describes how organizations can use Windows Azure Virtual Machines to host mission-critical BI applications. Organizations can deploy SharePoint farms in Windows Azure Virtual Machines and scale out the application server VM’s BI components, like SSRS or Excel Services. By scaling resource-intensive components in the cloud, they can better and more easily support specialized workloads. Note that SQL Server in Windows Azure Virtual Machines performs well, as it is easy to scale SQL Server instances, ranging from small to extra-large installations. This provides elasticity, enabling organizations to dynamically provision (expand) or deprovision (shrink) BI instances based on immediate workload requirements.

Migrating existing BI applications to Windows Azure provides better scaling. With the power of SSAS, SSRS, and SharePoint Server, organizations can create powerful BI and reporting applications and dashboards that scale up or down. These applications and dashboards also can be more securely integrated with on-premises data and applications. Windows Azure ensures data center compliance with support for ISO 27001. For more information, go to the Windows Azure Trust Center.

Getting Started

To scale out the deployment of BI components, a new application server with services such as PowerPivot, Power View, Excel Services, or PerformancePoint Services must be installed. Or, SQL Server BI instances like SSAS or SSRS must be added to the existing farm to support additional query processing. The server can be added as a new Windows Azure VM with SharePoint 2010 Server or SQL Server installed. Then, the BI components can be installed, deployed, and configured on that server (Figure 6).

Figure 6: Scaled-out SharePoint farm for additional BI services

Setting Up the Scenario Environment

To scale out a BI environment on Windows Azure, follow these steps:

  1. Provision:
  • Provision a VPN connection between on premises and Windows Azure using Windows Azure Virtual Network. For more information, go to Windows Azure Virtual Network (Design Considerations and Secure Connection Scenarios).
  • Use the Management Portal to provision a new VM from a stock image in the image library.
    • You can upload SharePoint Server or SQL Server BI workload images to the image library, and any authorized user can pick those BI component VMs to build the scaled-out environment.
  2. Install: If your organization does not have prebuilt images of SharePoint Server or SQL Server BI components, install SharePoint Server and SQL Server on the VMs using a Remote Desktop connection.
  3. Add the BI VM:
  • Configure security on the Management Portal endpoint and set an inbound port in the VM’s Windows Firewall.
  • Add the newly created BI VM to the existing SharePoint or SQL Server farm.
  4. Manage VMs:
  • Monitor the VMs using the Management Portal.
  • Monitor the SharePoint farm using Central Administration.
  • Monitor and manage the VMs using on-premises management software like Microsoft System Center – Operations Manager.
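Joining the newly created BI application server to an existing SharePoint farm can be scripted with Windows PowerShell on that VM. The following sketch uses placeholder database, server, and passphrase values and assumes SharePoint Server is already installed on the VM.

  # Run on the new BI application server VM.
  Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

  # Placeholder values; use the farm's existing configuration database and passphrase.
  $passphrase = ConvertTo-SecureString "<farm passphrase>" -AsPlainText -Force

  # Join the server to the existing farm, then provision services and features on it.
  Connect-SPConfigurationDatabase -DatabaseName "SP2010_Config" -DatabaseServer "SQL2012-VM" -Passphrase $passphrase
  Initialize-SPResourceSecurity
  Install-SPService
  Install-SPFeature -AllExistingFeatures
  Install-SPApplicationContent

After the server joins the farm, the desired BI service applications (for example, Excel Services or PerformancePoint Services) can be started on it from Central Administration or with Windows PowerShell.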

Scenario 4: Completely Customized SharePoint-based Website

Description

Increasingly, organizations want to create fully customized SharePoint websites in the cloud. They need a highly durable and available environment that offers full control to maintain complex applications running in the cloud, but they do not want to spend a large amount of time and budget.

In this scenario, an organization can deploy its entire SharePoint farm in the cloud and dynamically scale all components to get additional capacity, or it can extend its on-premises deployment to the cloud to increase capacity and improve performance, when needed. The scenario focuses on organizations that want the full “SharePoint experience” for application development and enterprise content management. The more complex sites also can include enhanced reporting, Power View, PerformancePoint, PowerPivot, in-depth charts, and most other SharePoint site capabilities for end-to-end, full functionality.

Organizations can use Windows Azure Virtual Machines to host customized applications and associated components on a cost-effective and highly secure cloud infrastructure. They also can use on-premises Microsoft System Center as a common management tool for on-premises and cloud applications.

Getting Started

To implement a completely customized SharePoint website on Windows Azure, an organization must deploy an Active Directory domain in the cloud and provision new VMs into this domain. Then, a VM running SQL Server 2012 must be created and configured as part of a SharePoint farm. Finally, the SharePoint farm must be created, load balanced, and connected to Active Directory and SQL Server (Figure 7).

Figure 7: Completely customized SharePoint-based website


Setting Up the Scenario Environment

The following steps show how to create a customized SharePoint farm environment from prebuilt images available in the image library. Note, however, that you also can upload SharePoint farm VMs to the image library, and authorized users can choose those VMs to build the required SharePoint farm on Windows Azure.

  1. Deploy Active Directory: The fundamental requirements for deploying Active Directory on Windows Azure Virtual Machines are similar, but not identical, to deploying it on VMs (and, to some extent, physical machines) on premises. For more information about the differences, as well as guidelines and other considerations, go to Guidelines for Deploying Active Directory on Windows Azure Virtual Machines.
  2. Deploy SQL Server:
  • Use the Management Portal to provision a new VM from a stock image in the image library.
  • Configure SQL Server on the VM. For more information, go to Install SQL Server Using SysPrep.
  • Join the VM to the newly created Active Directory domain (a minimal domain-join sketch follows this list).
  3. Deploy a multiserver SharePoint farm.
  4. Manage the SharePoint farm through System Center:
  • Use the Operations Manager agent and new Windows Azure Integration Pack to connect your on-premises System Center to Windows Azure Virtual Machines.
  • Use on-premises App Controller and Orchestrator for management functions.
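As an illustration of the domain-join step in step 2, the following sketch uses the built-in Add-Computer and Restart-Computer cmdlets from inside the SQL Server VM. The domain name is a placeholder for your own environment.

    # Minimal sketch: join the SQL Server VM to the Active Directory domain created in step 1.
    # Run this inside the VM over a Remote Desktop (or PowerShell remoting) session, after
    # pointing the VM's DNS at the domain controller through the virtual network settings.
    $domainCred = Get-Credential   # an account allowed to join machines to the domain

    Add-Computer -DomainName "corp.contoso.com" -Credential $domainCred   # placeholder domain
    Restart-Computer -Force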

 

Conclusion

Cloud computing is transforming the way IT serves organizations. This is because cloud computing can harness a new class of benefits, including dramatically decreased cost coupled with increased IT focus, agility, and flexibility. Windows Azure is leading the way in cloud computing by delivering an easy, open, flexible, and powerful virtual infrastructure. Windows Azure Virtual Machines reduce the need for physical hardware, so organizations can reduce cost and complexity by building infrastructure at scale, with full control and streamlined management.

Windows Azure Virtual Machines support a full continuum of SharePoint deployments. They are fully supported and tested to provide an optimal experience with other Microsoft applications. As such, organizations can easily set up and deploy SharePoint Server within Windows Azure, either to provision infrastructure for a new SharePoint deployment or to expand an existing one. As business workloads grow, organizations can rapidly expand their SharePoint infrastructure. Likewise, if workload needs decline, organizations can contract resources on demand, paying only for what they use. Windows Azure Virtual Machines deliver an exceptional infrastructure for a wide range of business requirements, as shown in the four SharePoint-based scenarios discussed in this paper.

Successful deployment of SharePoint Server on Windows Azure Virtual Machines requires solid planning, especially considering the range of critical farm architecture and deployment options. The insights and best practices outlined in this paper can help to guide decisions for implementing an informed SharePoint deployment.

Additional Resources


Predicts 2013: Retailers’ Mobile and Social Commerce Strategies Will Yield Minimal Revenue

30 November 2012 ID:G00231876

Analyst(s): Miriam Burt, Gale Daikoku, John Davison, Robert Hetu


Tier 1 multichannel retailers will not gain significant benefits from mobile and social commerce strategies if they fail to understand how mobile and social customer interaction points can enhance and optimize cross-channel, customer shopping processes.



Overview

Key Findings

  • Retailers will struggle to move significant numbers of consumers from cash and cards to Near Field Communication (NFC)-based mobile payments.
  • Retailers’ efforts to pursue location-based personalization offers will yield a very small rate of redemption.
  • Retailers will face a new threat to their profit margins and revenue-sharing models from emerging social shopping retailers.

Recommendations

For CIOs:

  • Ensure that you provide your customers with functionality on their mobile phones that they prefer to use, such as finding a store location or looking up stock availability, rather than NFC-based mobile payments.
  • Invest in multichannel analytical resources to help you increase revenue by defining more relevant coupon offers for all customers, while delivering personalized offers to a carefully chosen selection of customers.
  • Map your existing product catalog to the key demographics for social media to take advantage of the current low cost of entry to use the Facebook commerce (F-commerce) as a testing ground for social selling.

Analysis

What You Need to Know

Tier 1 multichannel retailers are still struggling to provide the everyday “business as usual” multichannel experience that their customers desire. Organizational and technology silos continue to hamper the delivery of a consistent and contiguous cross-channel customer shopping experience. Moreover, this is exacerbated by retailers investing in hyped-up mobile and social commerce solutions, rather than focusing on delivering the customer basics. For example, in store, some of the key customer basics are stock availability, an informed and available staff, and fast check-out.

Gartner predicts that retailers will struggle to move significant numbers of consumers from cash and cards, especially if they implement market-hyped, NFC-based, mobile wallet payment solutions. Our research confirms that customers’ preferences to use their mobile phones to find a store location, compare prices, look up stock availability and receive promotions are far ahead of their preferences to use their mobile phones to order and pay.

We predict retailers’ efforts to pursue context-aware personalized offers, such as location-based offers through mobile phones, will yield a very small rate of redemption. Our research shows that consumers favor paper coupons. In the near term, paper coupons will remain the dominant form of retail offer over electronic coupon redemption.

We predict that retailers will face a new threat to their profit margins and revenue-sharing models from emerging social shopping retailers. Our research shows that, with 93% share in the U.S., Facebook could become a virtual retailer by connecting manufacturers and distributors of consumer goods directly to the consumer (see Note 1). This shift could dramatically alter the profit margin and revenue-sharing models within the retailer and supplier networks, making it even more difficult for retailers to remain competitive.


Strategic Planning Assumptions

Strategic Planning Assumption: By 2014, less than 2% of consumers globally will adopt NFC-based mobile payments.

Analysis by: Miriam Burt and John Davison

Key Findings:

A Gartner consumer survey in 3Q11 in 10 countries showed that, on average:

  • Almost two-thirds of the consumers (62%) surveyed indicated that they did not use their mobile phones to conduct any type of financial transaction using mobile payment services.
  • Sixty percent of consumers indicated that concerns about the security of personal and payment data were the biggest barriers to using their mobile phones to make mobile payments. This is a seven-percentage-point increase from the equivalent survey in 2010 (53%).
  • Seventy-nine percent of consumers indicated that the store is the main channel through which they were willing to make a purchase when conducting a cross-channel shopping event.

Market Implications:

This topic has been very hot in the past 12 months in all the major Tier 1 retail markets, with tremendous hype and publicity regarding solutions from a multitude of vendors of hardware, software, card payment services and, particularly, the NFC-based solutions, including those from Orange and Barclaycard, as well as the NFC-based mobile wallets from Isis and Google. Non-NFC-based solutions, such as the Starbucks mobile phone payment solution via its stored-value loyalty card and mobile bar codes, and PayPal’s mobile payment solution for Home Depot stores, have also been in the headlines.

About one-third of customers are using their mobile phones for financial transactions, although these are largely confined to functionality such as topping up prepaid mobile plans or purchasing digital products, not physical products. Moreover, this pertains to consumer usage of all types of mobile phones, not just to the usage of NFC-based mobile phones.

As part of a cross-channel shopping process, NFC-based mobile payments could address the need for speed of throughput and convenience during check-out in a store. This is important in some retail segments (such as grocery and convenience stores), but less important in other segments (such as luxury fashion).

If payment transaction fees using mobile devices are lower than those for traditional credit and debit cards, then there are clear savings for the retailer. However, in a 3Q11 retailer survey, Tier 1 retailers indicated that they expect the mobile channel, on average, to generate just under 2% of revenue through 2016, compared to 85% through the store and 12% through e-commerce. Hence, they do not see a robust business case for upgrading point of sale (POS) terminals to accept NFC-based mobile contactless payments once factors such as the cost of NFC-based POS terminal readers and merchant interchange fees are taken into account.

Moreover, the speed of adoption of mobile payments will be dictated by consumers, so NFC-based payment solutions must demonstrate how they can support a secure, hassle-free, convenient and fast check-out — the latter being a key in-store customer service basic.

Current retailer trials of NFC-based stickers for promotions, the growing use of mobile coupons, the increasing use of mobile bar codes at the POS, and contactless payments using prepaid services for transportation applications (such as ticketing) may speed up the general adoption of NFC technology for mobile devices. For the most part, these are currently done through cards that customers touch on contactless readers, and they do not involve NFC-enabled mobile phones in the payment process.

Benefits from NFC-based mobile payment transactions will only be gained if consumers are convinced that NFC mobile payments are secure, convenient and fast. They also need a compelling reason to make the switch. For example, retailers could give them incentives to choose this type of payment over others, such as tying loyalty programs into NFC-based mobile payments. In addition, a single set of standards needs to be agreed on by the banks, payment processing companies and retailers for NFC payments to succeed. We have not yet seen this consistency emerge.

Recommendations:

For CIOs:

  • Don’t let the projected rate of smartphone adoption or the hype around NFC-based contactless mobile payments drive investments in this solution.
  • Investigate how NFC can be used for nonpayment processes. For example, customers can use NFC stickers to access promotions or as a replacement for quick response (QR) code scanning.
  • Where appropriate, invest in secure, mobile POS applications in stores to enable store associates to provide the key customer basics of a fast and hassle-free check-out experience.
  • Ensure that you provide customers with functionality on their mobile phones that they prefer to use, rather than NFC-based mobile payments. This should include the capability to use their mobile phones to find a store location, compare prices, look up stock availability and receive promotions.
  • Trial mobile payments through lower-risk, stored-value payment solutions, preferably in conjunction with a loyalty solution.

Related Research:

“Hype Cycle for Retail Technologies, 2012”

“Distinguish How Consumers Want to Shop on Their Mobile Devices for Best Investment Decisions”

Strategic Planning Assumption: By 2015, less than 1% of all redeemed coupons will be location-aware offers sent by Tier 1 retailers.

Analysis by: Gale Daikoku and Robert Hetu

Key Findings:

  • Many Tier 1 retailers are pursuing personalization strategies that include the delivery of real-time offers on a customer’s mobile device while they are shopping in stores.
  • Paper coupons, including those provided directly to the customer, are the dominant format preferred by consumers.

Market Implications:

Retailers see personalization as a competitive necessity for building meaningful relationships that foster loyalty, yet many have work to do to support this level of engagement with customers. Gartner notes that nearly every Tier 1 retailer we speak with has made gaining a single view of its customers a business priority, because personalization depends on good knowledge and segmentation of cross-channel customers. In fact, in the next few years, we expect that just over two-thirds (70%) of leading Tier 1 retailers will have improved the quality of their customer offers or the way they develop them, whether promotions, coupons or personalized offers. However, many retailers will be challenged to deliver and execute location-aware offers to customers who are shopping in their stores.

Coupons are the most common form of retail offers. According to industry sources, even though coupon distribution is down overall due to more limited funding by manufacturers, paper coupons, many of which are delivered via free standing inserts (FSIs) that are mailed to a customer’s home, are expected to remain the dominant form of retail offer for some time. Gartner research shows that the vast majority of consumers prefer to use some form of paper coupon — in particular, those sent directly to the customer, delivered as a separate sheet with a sales receipt or from the retailer’s in-store mailer.

Electronic coupon distribution is an alternate, cost-efficient way to reach customers with offers. There are two types of e-coupons that are rated fairly high for potential use: those emailed to customers that can be printed out on paper and coupons that can be saved to loyalty accounts. However, customers are still getting used to these newer forms of couponing, and redemption will remain constrained in the near term, due to process and technology challenges in stores.

As mobile technology improves, customers will get more comfortable with using mobile devices as part of the shopping process. However, for retailers, communicating the perfect multichannel offer at the right moment in the shopping process, although theoretically attractive, is difficult to execute in real time (for example, sending location-based, context-aware, personalized mobile coupons in real time while customers are shopping in their stores).

Apart from customer readiness, there are customer privacy challenges around sending mobile coupons to customers’ personal mobile devices. In Gartner’s consumer survey, 39% of respondents said they had smartphones. However, when asked if they were willing to use a mobile app to receive coupons or rewards in stores, almost one-third (29%) indicated that they were not willing to use a smartphone app. Furthermore, 40% said that they were not willing to register their phones so that they could be tracked to receive offers while they were shopping in the store.

Recommendations:

For CIOs:

  • Maintain the capability and processes that support the delivery and redemption of paper coupons in stores.
  • Invest in the multichannel analytical resources to help you target a small, but lucrative, set of customers who are favorable to receiving personalized mobile offers.

Related Research:

“Hype Cycle for Retail Technologies, 2012”

“Consumer Survey Shows What’s Ahead for Retail Coupon Management”

“Personalization and Context-Aware Technology’s Impact on Multichannel Customer Loyalty”

“Marketing Service Provider Capabilities in Retail”

Strategic Planning Assumption: By 2015, a new social shopping retailer will emerge, accounting for 2% of U.S. Tier 1 retail sales.

Analysis by: Robert Hetu

Key Findings:

  • U.S. consumers in the 18- to 24-year-old demographic have the highest preference for using social media while shopping (48%), with expected continued growth in this activity as they grow older.
  • Facebook’s overwhelming share (93%) of social media provides it with an opportunity to generate a direct retail revenue stream via a virtual store approach.

Market Implications:

According to a Gartner consumer survey conducted in 3Q11, young adults aged 18- to 24-years-old conduct the highest levels of shopping-related social activity, including the propensity to look for special offers and check product reviews on a social networking site. The depth of personal information willingly supplied on social networks provides unmatched visibility into the lives, interests and personal networks of consumers.

Retailers primarily view the customer through their interactions via shopping activity. As they seek to expand their knowledgebases, they have been experimenting with social media through various partnerships. Most Tier 1 retailers have built a relationship with Facebook as a convenient avenue to access social networks via Facebook commerce (or F-commerce), but have found little revenue.

As experienced by retailers when they were part of the Amazon platform, the extent of information flowing between partner organizations (for example, sales and inventory data at a SKU level, as well as customer transaction data) can be used by a partner as it builds its own retail strategy. Complicating this further is that the social environment is dominated by a single player that has 93% share in the U.S. As a result, the partnership with Facebook, or with other social players that currently exist or may rise as rivals of Facebook, could provide the competitive content required to quickly enable a virtual social retailer, connecting manufacturers and distributors of consumer goods directly to the consumer. This shift could dramatically alter the profit margin and revenue-sharing models within the retailer and supplier networks, making it even more difficult for retailers to remain competitive.

Amazon grew to be a top U.S. retailer by taking advantage of a specific weakness in the retail environment — namely, the slow adoption of technology-enabled shopping. It is continuing to exploit this weakness with its forays into mobile shopping and social enablement. Amazon now includes the setup of a profile with personal information, including a picture and preferences, with the ability to share previously purchased items with others to help facilitate social shopping. Customers can also create lists of products and purchase guides to support various activities (for example, photography). As a result, Amazon is far ahead of multichannel, Tier 1 retailers in its ability to meet this new threat posed by social networks, such as Facebook.

Recommendations:

For CIOs:

  • Map your existing product catalog to the key demographics for social media to take advantage of the low cost of entry to use the F-commerce platform as a testing ground for social selling.
  • Connect e-commerce and m-commerce sites with Facebook using social plug-ins and custom applications to ensure a consistent flow for the cross-channel customer.
  • Approach F-commerce with the awareness that Facebook will use it as a revenue-generating model, and plan alternative approaches for social commerce for when engaging in F-commerce becomes disadvantageous.

Related Research:

“Use Facebook to Test Social Commerce Strategy”

“Information Innovation Powers Customer-Centric Merchandising”

“Hype Cycle for Retail Technologies, 2012”


A Look Back

In response to your requests, we are taking a look back at some key predictions from previous years. We have intentionally selected predictions from opposite ends of the scale — one where we were wholly or largely on target, as well as one we missed.

On Target: 2012 Prediction — By 2013, the U.K. retail market will be the world’s most advanced multichannel market.

Analysis by: Miriam Burt, Van L. Baker, Gale Daikoku, John Davison, Robert Hetu

Previously Published: “Predicts 2012: Retailers Turn to Personalized Offers Through Mobile and Social but Will Struggle With Multichannel Execution”

In 2Q11 and 3Q11, Gartner surveyed leading retailers in the U.S., Canada, the U.K., France, Germany, Brazil, Russia, India, China and Japan. The survey asked retailers to estimate the percentage of revenue coming from each of their selling channels. In this survey, U.K. retailer estimates for the percentage of e-commerce sales were higher than in any other country surveyed, with 13.22% of sales compared to the 10-country average of 9.22% of sales for this channel. The same finding was made in other channels — mail order and catalog, call center, and mobile — where the U.K. had a higher estimated percentage than the 10-country average.

The corollary of this is that the U.K. retailers surveyed anticipated a smaller percentage of their revenue through 2014 coming from brick-and-mortar stores (79.28%), compared to the 10-country average of anticipated revenue in 2014 (88.56%). Thus, the U.K. is on target to be the world’s most advanced multichannel market through 2013 and beyond.

Missed: 2012 Prediction — By year-end 2012, 90% of Tier 1 and Tier 2 retailers will have an active presence on social media sites.

Analysis by: Miriam Burt, Van L. Baker, Gale Daikoku, John Davison, Robert Hetu

Previously Published: “Predicts 2012: Retailers Turn to Personalized Offers Through Mobile and Social but Will Struggle With Multichannel Execution”

In defining active presence, the intent was to incorporate some commerce activity within the social networking activity. Facebook commerce was, at the time, being pursued by many Tier 1 retailers. Over time, they learned that selling products on social media was not a straightforward process and retreated. Presently, most Tier 1 retailers have a presence on social media. However, they have not successfully monetized through F-commerce or other sales activities. Many even failed to be responsive to customer comments, making it a one-way communication vehicle.


SharePoint Lifecycle Management Solution with Project Server 2010 – Setup Guide


 

 

SharePoint Lifecycle Management Solution with Project Server 2010 – Setup Guide

 

This document is provided “as-is”. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it.

Some examples depicted herein are provided for illustration only and are fictitious.  No real association or connection is intended or should be inferred.

This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.

 

© 2011 Microsoft Corporation.

 

Whitepaper: SharePoint Lifecycle Management Solution with Project Server 2010 – Setup Guide

Authors:    Scott Jamison and Mark Candelora, Jornata LLC; Christophe Fiessinger, Senior Technical Product Manager, Microsoft

Published: May 2011

Applies to: Microsoft SharePoint Server 2010 and Microsoft Project Server 2010

Summary: This white paper provides instructions for configuring the Microsoft SharePoint Server 2010 lifecycle management solution that runs atop Microsoft Project Server 2010. (37 printed pages)

 

Contents

Contents    3

About this white paper    5

Option 1: Step-by-Step Configuring your SharePoint Lifecycle Management Project Application    6

Create custom fields    6

Lookup tables    6

Create Custom Fields    7

Business drivers    8

Create Drivers    8

Prioritize Drivers    9

Create Project Detail Pages (PDP)    10

Create workflow stages    10

Install & configure the Workflow Visualization Web Part    12

Install the web part    12

Configure the web part    13

Install & configure Dynamic Workflow WSP    14

Install the Dynamic Workflow solution    14

Configure the Dynamic Workflow solution    14

Create generic resources    19

Install Project site template    19

Install Custom Site Template    20

Create EPT    20

Option 2: Restoring From Database Backup Files    21

Restore & Attach Databases    21

Attach Content DB    22

Deploy Dynamic Workflow and Workflow Visualization solutions    22

Provision a new Project Web Application    24

PWA Settings    25

Installing the Solution in the Information Worker Demonstration and Evaluation Virtual Machine    26

Using the Solution    28

Creating a Proposal    28

Gathering Information    28

Getting Approval    31

Project Selection    31

Project Execution    34

Post Mortem    34

Where to go from here    35

Business demonstration    35

Adapt for your organization    35

Making it real    35

Learn about Microsoft Project Server 2010    35

About the Authors    36

Artifacts    36

List of Figures    37

References    38

 

 

About this white paper

Microsoft SharePoint Server 2010 provides a vast number of capabilities that empower both business users and IT to create solutions quickly. For this reason, many organizations consider implementing SharePoint as a central platform for a wide array of business solutions.

 

For those organizations that have put SharePoint in place to handle the wide array of business needs (or are planning to do so), it’s likely that they’ll need a good way to track, manage, and prioritize those business requests. The white paper entitled SharePoint Lifecycle Management Solution with Project Server 2010 provides a suggested solution based on Microsoft Project Server 2010. The white paper is available for download at http://go.microsoft.com/fwlink/?LinkID=218030.

 

This white paper (the one you are reading) is an adjunct paper that walks through the steps necessary to begin using Microsoft Project Server 2010 to help prioritize and schedule the SharePoint projects that have been requested by a business user or group. This document assumes you will be working in a farm that has a working installation of SharePoint 2010 with Microsoft Project Server 2010 fully configured (please refer to the Project Server 2010 Tech Center: http://technet.microsoft.com/en-us/projectserver/ee263909).

 

There are two ways to begin using this Solution:

  1. A manual, step-by-step creation of each Microsoft Project Server 2010 PWA instance, or
  2. By restoring the database backups provided in the download.

 

With the step-by-step process outlined in the section entitled Option 1, you will learn what each entity does and have the opportunity to customize each one as it is created. In the section entitled Option 2, you’ll instead use database backups, which enables you to get started right away. The backups also come pre-populated with sample projects that can be used to view portfolios or create reports. The backups are recommended only as a demonstration environment for learning, experimenting, or presenting to others. To implement a solution in a production environment, the entities should be created and finalized in a development or test environment, thoroughly tested, and then moved to production using the DM Import/Export tools (found in the Microsoft Project 2010 Solution Starters download).

 

This solution requires artifacts from a separate download. Before starting this instruction guide, please download the required artifacts from http://go.microsoft.com/fwlink/?LinkID=218030 and the deployment download from http://archive.msdn.microsoft.com/P2010SolutionStarter. Please see the Artifacts section in this document for specific details on the files.

 

 

Option 1: Step-by-Step Configuring your SharePoint Lifecycle Management Project Application

Once your Project Web App (PWA) instance has been properly configured, open your browser to the PWA home page (by default, this is http://<server>/pwa).

Create custom fields

Custom fields provide the ability to collect information on different project-related objects as required by the business.

  1. In the quick links, under Settings, select Server Settings. Under Enterprise Data, select Enterprise Custom Fields and Lookup Tables.

Lookup tables

Lookup tables provide a specific list of values for fields that require a limited number of options. These lookup tables will provide the options for the fields we create.

  1. Scroll to the second section called Lookup Tables for Custom Fields and click New Lookup Table
  2. Create the following tables with the specified values.

Roles

Name: Role_LT

Type: Text

Code Mask: leave default

Lookup Table values (all at Level 1): Developer, Administrator, QA Engineer, Architect, DBA, Business Liaison, Analyst

Display order: By row number

Project Types

Name: ProjectType_LT

Type: Text

Code Mask: leave default

Lookup Table values (all at Level 1): Feature activation, Search content source, Business workflow process, Executive BI dashboard, Business entity (BCS) connection, SharePoint custom software development, Other

Display order: By row number

Risk Levels

Name: Risk_LT

Type: Text

Code Mask: leave default

Lookup Table values (all at Level 1): Very High Risk, High Risk, Medium Risk, Low Risk

Display order: By row number

 

 

One To Ten

Name: OneToTen_LT

Type: Number

Lookup Table values (Value: Description):

1: The project went terribly, possibly because budgeted time, resource, or other estimates were incorrect, or because of changing scope, technology, or implementation issues.
2-4: (no description)
5: The project was OK; some aspects of the project went smoothly, but others did not.
6-9: (no description)
10: The project was a complete success; all aspects of the project went according to plan, and the project finished on time and under budget.

Display order: By row number

Create Custom Fields

Project Custom Fields

    Project custom fields are used to collect information on projects for reporting or workflow purposes. For more information on how to manage Enterprise Custom Fields and Lookup Tables (Project Server 2010) please go to http://technet.microsoft.com/en-us/library/gg709725.aspx .

  • Under the section called Enterprise Custom Fields, click New Field

    Create the following fields with the specified values. Attributes not listed in the table below can be left at their default values.

 

Name: Estimated Cost
Description: How much it will cost for additional resources (for example, additional hardware, software, licenses, etc.).
Entity: Project
Type: Cost
Behavior: Controlled by workflow

Name: Estimated Effort
Description: An estimate of the FTE days needed to complete this project, including all planning, execution, testing, and so on.
Entity: Project
Type: Duration
Behavior: Controlled by workflow

Name: Additional Technical Description
Description: Any additional technical information to take note of during the planning stages of this project.
Entity: Project
Type: Text
Behavior: Controlled by workflow

Name: Risk
Description: How likely it is that this project will remain within its time/scope/budget estimates.
Entity: Project
Type: Text
Custom Attributes: Lookup Table: Risk_LT
Behavior: Controlled by workflow

Name: SharePoint Project Type
Description: The type of project this is.
Entity: Project
Type: Text
Custom Attributes: Lookup Table: ProjectType_LT
Behavior: Controlled by workflow

Name: Post Mortem Notes
Description: Notes gathered from the post mortem discussions. What did we do well? What could have been improved? Any changes to documentation / forms / process?
Entity: Project
Type: Text
Behavior: Controlled by workflow

Name: Overall Project Health Rating
Description: A 1-10 rating of the project’s planning and execution.
Entity: Project
Type: Number
Custom Attributes: Lookup Table: OneToTen_LT
Behavior: Controlled by workflow

 

    Resource custom fields

    Resource custom fields are used to add information about resources for reporting or workflow purposes.

  • Under the section called Enterprise Custom Fields, click New Field.

 

    Create the following field with the specified values. Attributes not listed below can be left at their default values.

Name: Role
Entity: Resource
Type: Text
Custom Attributes: Lookup Table: Role_LT
Behavior: Required

 

Business drivers

Business drivers are key reasons why your business implements software. Business drivers are used to assess project strategic value and to assure that project selection supports the organizational strategy. These drivers are priorities that allow the business to be more effective. For more information please refer to Portfolio Analysis with Microsoft Project Server 2010 (white paper)

 

Create Drivers

  1. From your PWA home page, under Strategy, select Driver Library from the quick links menu.
  2. Click the New button in the ribbon.

Create the following drivers with the specified values. Attributes not listed in the table below can be left at their default values.

 

For each driver, use the Project Impact Statement table below for its impact statements.

Name: Cost Avoidance
Description: Is there clear cost avoidance as a result of implementing this solution?

Name: Employee Retention (note: this driver comes with the base Project Server installation)
Description: How will our employee retention improve as a result of implementing this solution?

Name: Productivity Gains
Description: Improvements in a group or department’s effectiveness that would allow them to get work done faster, or with better quality.

Name: Quality Improvements
Description: Will we attain a measurable quality improvement as a result of implementing this solution?

Name: Risk
Description: Will we reduce risk as a result of implementing this solution?

Name: ROI
Description: Will we obtain a clear return on investment (ROI) as a result of implementing this solution?

Name: Time-to-Market
Description: How much faster can we get a product to market as a result of implementing this solution?

 

Use these Project Impact Statements for each of the above drivers. Be sure to replace “<driver>” with the name of the driver.

None: No measurable impact.
Low: Little impact on <driver>; potentially a 0-3% increase in <driver>.
Moderate: Some impact on <driver>; potentially a 3-10% increase in <driver>.
Strong: Strong impact on <driver>; potentially a 10-20% increase in <driver>.
Extreme: Very large impact on <driver>; potentially more than a 20% increase in <driver>.

 

Prioritize Drivers

Driver prioritization will help automatically rank projects based on their strategic rating. Your project portfolio will be sorted based on the priorities you set here.

  1. Select Driver Prioritization in the quick links menu. Click New in the ribbon.
  2. Enter a Name and Description for your driver prioritization.
  3. Select your prioritization type – Calculated or Manual.
    1. Calculated – you’ll walk through a wizard that will compare the priority of each driver against all the others.
    2. Manual – you will manually enter the percentage value of each priority. If the amounts do not add up to 100% when Save is clicked, the numbers will be adjusted to equal 100%.
  4. Select all of your drivers then click Next: Prioritize Drivers.
  5. After walking through the wizard or entering percentages, review your priority levels and click Save.

    *Note – If you use the Calculated wizard, you cannot edit the priority percentages.

Create Project Detail Pages (PDP)

Project detail pages collect information from the user at specific points in the project lifecycle. Each stage must have a project detail page associated. Please refer to Workflow and Project Detail Pages for more information.

  • In the quick links, select Server Settings. Under the Workflow and Project Detail Pages section, select Project Detail Pages.
  • In the Documents tab in the ribbon, click New Document
  • Always select “Full Page, Vertical” as Layout Template.

Create the following project detail pages with the specified values. Attributes not listed in the table below can be left at their default values.

Name: Initial SharePoint Proposal
Web Part: Project Fields
WP Configuration: Fields: Project Name, Description

Name: Initial SharePoint Requirements
Web Part: Project Fields
WP Configuration: Fields: Estimated Cost, Estimated Effort, Additional Technical Description, Risk, Project Type

Name: Initial SharePoint Requirements Strategic Impact
Web Part: Project Strategic Impact
WP Configuration: (none)

Name: SharePoint Schedule
Web Part: Project Details
WP Configuration: (none)

Name: SharePoint Post Mortem
Web Part: Project Fields
WP Configuration: Fields: Post Mortem Notes, Overall Project Health Rating

 

Create workflow stages

Stages represent specific points in a project lifecycle that a project must move through in order to complete. Each stage has specific requirements that must be met before a project can move on to the next stage.

  1. From the Server Settings page, select Workflow Stages under Workflow and Project Detail Pages.
  2. Click New Workflow Stage.

Create the following workflow stages with the specified values. Attributes not listed in the table below can be left at their default values.

Name: 01 – SP Initial Proposal
Description: A new proposal was submitted.
Description for Submit: Upon submission, this proposal will be forwarded to an administrator who will gather information on requirements, business drivers, and usage to create estimates for budget, timeline, resources, etc.
Workflow Phase: Create
Visible Project Detail Pages: Initial SharePoint Proposal
Required Custom Fields: (none)
Strategic Impact Behavior: Read Only
Project CheckIn Required: No

Name: 02 – SP Initial Requirements
Description: The SharePoint Project Administrator will gather the project details to create a baseline estimate for the proposal.
Description for Submit: Upon submission, this proposal will be slated for review.
Workflow Phase: Create
Visible Project Detail Pages: Initial SharePoint Requirements; Initial SharePoint Requirements Strategic Impact; SharePoint Schedule
Required Custom Fields: Estimated Cost; Estimated Effort; Risk; SharePoint Project Type
Strategic Impact Behavior: Required
Project CheckIn Required: Yes

Name: 03 – SP Project Selection
Description: The proposal is being reviewed for selection. If selected, it will move into the planning stages, where resources will be assigned and task estimates created.
Description for Submit: Upon submission, the proposal will become a project, and its baseline estimates will become real tasks to which resources can be assigned.
Workflow Phase: Select
Visible Project Detail Pages: Initial SharePoint Requirements; Initial SharePoint Requirements Strategic Impact; SharePoint Schedule
Required Custom Fields: (none)
Strategic Impact Behavior: Read Write
Project CheckIn Required: No

Name: 04 – SP Execution
Description: Work is being done on this project; see the project plan for more specifics on its progress.
Description for Submit: Upon submission, this project will move into the post-mortem stage, where the project execution is reviewed and potential improvements are identified.
Workflow Phase: Manage
Visible Project Detail Pages: SharePoint Schedule
Required Custom Fields: (none)
Strategic Impact Behavior: Read Only
Project CheckIn Required: Yes

Name: 05 – SP Post Mortem Assessment
Description: (none)
Description for Submit: (none)
Workflow Phase: Finished
Visible Project Detail Pages: SharePoint Post Mortem; Initial SharePoint Requirements; Initial SharePoint Requirements Strategic Impact; SharePoint Schedule
Required Custom Fields: Overall Project Health Rating
Strategic Impact Behavior: Read Only
Project CheckIn Required: No

Install & configure the Workflow Visualization Web Part

The Workflow Visualization Web Part (available as part of the Microsoft Project 2010 Solution Starters) provides a graphical display of the project workflow, highlighting the stage that the proposal is currently in. Deploying this Web Part is optional; it is not required to implement this solution, but it helps visualize workflow execution.

Install the web part

  1. Under the MS Project Solution Starters Deployment download, open the Workflow Visualization Webpart folder.

Figure 1 – Deploying the Dynamic Workflow solution

  2. Open and read the ReadMe.docx then edit the DeployPowerShell.cmd script. Edit the SiteUrl parameter so it equals the URL of your new PWA web app.
  3. Deploy the solution by running the DeployPowerShell.cmd script (a sketch of what such a deployment script typically does follows these steps).
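If you want to see what a deployment script of this kind typically does, or to deploy the .wsp manually, the following SharePoint 2010 Management Shell sketch adds and deploys a farm solution and activates its feature. The solution file name, feature identity, and URLs are placeholders; the DeployPowerShell.cmd script provided in the download remains the supported path.

    # Minimal sketch of a typical farm-solution deployment (placeholder names throughout).
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $webAppUrl = "http://<server>"                        # web application that hosts PWA
    $pwaUrl    = "http://<server>/pwa"                    # same value you set for SiteUrl in DeployPowerShell.cmd
    $wspPath   = "C:\Deploy\WorkflowVisualization.wsp"    # placeholder path and file name

    # Add the solution to the farm solution store and deploy it to the PWA web application
    # (add -GACDeployment if the solution deploys assemblies to the GAC).
    Add-SPSolution -LiteralPath $wspPath
    Install-SPSolution -Identity "WorkflowVisualization.wsp" -WebApplication $webAppUrl

    # Activate the web part's site collection feature; the feature name here is a placeholder,
    # and the same step is covered in the UI under "Configure the web part" below.
    Enable-SPFeature -Identity "WorkflowVisualization" -Url $pwaUrl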

Configure the web part

  1. In the PWA site collection, go to Site Actions -> Site Settings, then under Site Collection Administration, select Site collection features
  2. Find and activate the Workflow Visualization feature.
  3. Go to Server Settings -> Workflow Stages. Open each workflow stage and expand the System Identification Data section at the bottom. Write down the name of the stage with the GUID that appears for each stage.
  4. Go to Site Actions -> More Options... In the Create dialog filter by Library on the left, then choose Asset Library in the middle section then click More Options. Enter StagesLibrary for the name, then check No for both Navigation and Item Version History, then click Create at the bottom of the window.
  5. In your newly created list, in the Documents tab of the ribbon go to Upload Document -> Upload Multiple Documents
  6. In the file system, find the WF Images folder, then drag and drop all of the images into the upload window.
  7. Once the files have finished uploading, click the Create Column option in the Library tab of the ribbon.
  8. In the Create Column dialog, enter StageUID for the name, leave all other options at their defaults, and then click OK.
  9. Under the Library tab in the ribbon, select Datasheet View. For each image, enter into the StageUID field the GUID of the corresponding stage that you recorded in step 3. The arrow image will have a blank StageUID.
  10. Once this is complete, go back to Server Settings -> Project Detail Pages. Open the ProposalStageStatus page, then click Site Actions -> Edit Page.
  11. In the Full Page zone, click Add a Web Part. In the section that opens, select Project Server Add-Ons for the category. The Workflow Visualization web part will be the only option displayed. Click the add button to add the web part to the page.
  12. In the upper-left corner of the Workflow Visualization web part, click the down arrow, then click Edit Web Part. Enter the following information into the properties:

Document Library Name: StagesLibrary
Image for the arrow: Arrow.png
Arrow Image Width: 70
Stage Image Width: 120
Stage Image Padding: 0
Rich Highlighting: Checked

 

  13. Click OK to close the properties dialog, and then click Stop Editing in the Page ribbon.

Install & configure Dynamic Workflow WSP

The Microsoft Project Server 2010 Dynamic Workflow (available as part of the Microsoft Project 2010 Solution Starters) will guide your project through the required lifecycle stages, prompting for approvals and portfolio commitments along the way. The Dynamic Workflow solution starter provides an easy mechanism to build sequential workflows in PWA without using Visual Studio. The tool inserts a custom InfoPath form into the workflow definition process that gathers the custom workflow information. That information is then used to dynamically construct a workflow for the demand management process.

Install the Dynamic Workflow solution

  1. Under the MS Project Solution Starters Deployment download, unzip the dynamic workflow component.
  2. Open and read the ReadMe.docx then edit the DeployPowerShell.cmd script. Edit the SiteUrl parameter so it equals the URL of your new PWA web app.
  3. Deploy the solution by running the DeployPowerShell.cmd script.
  4. In the PWA site collection, go to Site Actions -> Site Settings, then under Site Collection Administration, select Site collection features
  5. Find and activate the DemandManagement DynamicWorkflow feature (a scripted alternative is sketched after this list).
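Steps 4 and 5 can also be performed from the SharePoint 2010 Management Shell. The sketch below is an assumption about how this feature might be activated by script; the display name filter is a placeholder, and the internal feature name should be confirmed with Get-SPFeature.

    # Minimal sketch: activate the Dynamic Workflow site collection feature by script.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $pwaUrl = "http://<server>/pwa"

    # Find the feature definition installed by the Dynamic Workflow solution
    # (the name filter is an assumption; adjust it to the real internal name).
    $feature = Get-SPFeature |
        Where-Object { $_.DisplayName -like "*DynamicWorkflow*" } |
        Select-Object -First 1

    # Activate it on the PWA site collection (equivalent to Site collection features in the UI).
    Enable-SPFeature -Identity $feature -Url $pwaUrl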

Configure the Dynamic Workflow solution

  1. Go back to the Site Settings page and under Site Administration, click Workflow settings
  2. Click Add a workflow and select DM DynamicWorkflow. Name it SharePoint Lifecycle Management, and leave the other defaults on the form.
  3. In each section of the form, fill out the details as shown below then click the Add Stage button at the top of the section to add a new stage.

Be sure not to click the Submit button until all stages have been completed.

 


Figure 2 – Stage 1: Initial Proposal

Figure 3 – Stage 2: Initial Requirements

Figure 4 – Stage 2: Approval

Figure 5 – Stage 3: Project Selection

Figure 6 – Stage 3: Approval

Figure 7 – Stage 4: Execution

Figure 8 – Stage 5: Post Mortem Assessment

The image below shows how the finished workflow configuration should look.

Figure 9 – Dynamic Workflow Stage Definition

  4. Once the workflow has been completely configured, click Submit.
  5. Go to Server Settings -> Change or Restart Workflows and enter the following information into the form:

Choose Enterprise Project Type: SharePoint Enhancement
Choose Projects: <all>
Choose new Enterprise Project Type or restart current workflow: Associate projects with a new Enterprise Project Type: SharePoint Enhancement
Choose Workflow Stage: Skip until the current workflow stage

 

  6. Once the form has been filled out, click OK at the bottom of the page.

Create generic resources

Generic resources allow you to create estimates and assign resources by skill in order to get baseline cost and timeline estimates without having to assign an actual team. Please refer to http://office.microsoft.com/en-us/project-server-help/create-resources-to-represent-capacity-HA101865477.aspx for more information.

  1. In the quick links menu, under Resources, click the Resource Center link.
  2. In the ribbon on the Resource Center page, click New Resource.

 

Attributes not listed in the table below can be left at their default values.

 

Create each of the following resources with Generic set to Yes and the Role field set as indicated:

SharePoint Developer (Role: Developer)
SharePoint Administrator (Role: Administrator)
SharePoint Analyst (Role: Analyst)
SharePoint Architect (Role: Architect)
Business Liaison (Role: Business Liaison)
QA Engineer (Role: QA Engineer)

 

 

Install Project site template

A Project site template provides a starting point for your projects. The template can contain predefined tasks, milestones, resource allocations and other items that can then be changed or customized for each project.

  1. Open MS Project Professional. Connect to your SharePoint Lifecycle Management PWA application.
  2. From MS Project Professional, open the project template from the SharePoint Lifecycle Solution Accelerator download.
  3. Under the Resource tab in the ribbon, click Add Resources -> Build Team from Enterprise…
  4. In the window that opens, select each of the generic resources that were created in the above step and add them to the project. It may help to set the Group by drop down to Generic then collapse the No group.
  5. Once all generic resources have been added, click the OK button to close the dialog.
  6. Add one or two resources for each task in the project template.
  7. Click File -> Save as… In the Save to Project Server dialog, type SharePoint Enhancement Template as the name. Select Template as the type. Leave the other defaults and select Save.
  8. In the Save as Template dialog box, select all the checkboxes and click Save.

Install Custom Site Template

The custom site template provides custom lists, pages, and other items that can provide a customized starting point for your project collaboration site.

  1. Open the Solution Gallery – go to Site Actions -> Site Settings, and under Galleries select Solutions.
  2. In the Solutions tab in the ribbon, select Upload Solution. Browse to the solution files, and find the SPEnhProj.wsp file under the Site Template folder. Click Activate when the popup appears.
  3. Open the Site collection features page (go to Site Actions -> Site Settings then select Site collection features). Find and enable the feature Web Template feature of exported web template SharePoint Enhancement Site Template (a scripted alternative is sketched below).
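These gallery steps can also be scripted with the sandboxed-solution cmdlets, as sketched below under the assumption that SPEnhProj.wsp is a sandboxed (user) solution destined for the site collection's Solution Gallery. The file path is a placeholder.

    # Minimal sketch: upload and activate the custom site template solution by script.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $pwaUrl  = "http://<server>/pwa"
    $wspPath = "C:\Deploy\Site Template\SPEnhProj.wsp"   # placeholder path

    # Add the .wsp to the site collection's Solution Gallery and activate it.
    Add-SPUserSolution -LiteralPath $wspPath -Site $pwaUrl
    Install-SPUserSolution -Identity "SPEnhProj.wsp" -Site $pwaUrl

    # The exported web template feature can then be enabled under Site collection features
    # (step 3 above), or with Enable-SPFeature once you know the feature's internal name.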

Create EPT

The EPT, or Enterprise Project Type, combines a workflow (consisting of stages and PDPs) with a project plan template and a custom site template. Each EPT can be used to model a specific work request type in your PWA instance. For more information, please refer to http://office.microsoft.com/en-us/project-server-help/enterprise-project-types-HA100955930.aspx

  1. Open the Project Server Settings page (from the PWA home page, select Server Settings). Under the Workflow and Project Detail Pages section, select Enterprise Project Types.
  2. Click New Enterprise Project type:

Name: SharePoint Enhancement
Description: A new enhancement for SharePoint 2010
Site Workflow Association: SharePoint Lifecycle Management
Default: No
Departments: <blank>
Image: <blank>
Order: Position at end
Project Plan Template: SharePoint Enhancement Template
Project Site Template: SPEnhProj

 

Option 2: Restoring From Database Backup Files

This method uses a five-database attach (Draft, Published, Reporting, Archive, and Content databases) to restore the SharePoint Lifecycle demo server. The backup files are backups of the five databases required for Microsoft Project Server to function: Archive, Draft, Published, Reporting, and the SharePoint Content database. For more detailed instructions on the five-database attach restore method, see Database-attach full upgrade to Project Server 2010. Please note that the sample databases were created using the February 2011 Project Server 2010 Cumulative Update, so you will need to apply this update to your test farm in order to perform the database attach procedure: http://support.microsoft.com/kb/2475879.

 

Restore & Attach Databases

On your database server, using SQL Server Management Studio, restore each database backup file using the name of the file as the database name.
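If you prefer to script the restores rather than use SQL Server Management Studio, a sketch along the following lines can be run from a machine with the SQL Server PowerShell snap-in (Invoke-Sqlcmd). The backup folder, server instance, and database names are placeholders; add WITH MOVE clauses if your data and log file paths differ from those recorded in the backups.

    # Minimal sketch: restore each Project Server backup file as a database of the same name.
    # Placeholder paths and names; adjust them to match the files in the download.
    $backupFolder = "C:\Backups"
    $databases    = "SP_PWAArchive", "SP_PWADraft", "SP_PWAPublished", "SP_PWAReporting", "SP_PWAContent"

    foreach ($db in $databases) {
        $bak   = Join-Path $backupFolder "$db.bak"
        $query = "RESTORE DATABASE [$db] FROM DISK = N'$bak' WITH RECOVERY"
        Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query $query
    }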


Attach Content DB

Attach the content database to the farm in central administration. For this restore option to work, the site collection restored from the content database must use the path http://<server>/PWA. If this path is not available, another web application must be selected to restore the content database to.

  1. Open the Central Administration console, and click Application Management. Under Databases click Manage content databases.
  2. In the Manage Content Databases page, click Add a content database.
  3. In the Add Content Database page, select the web application to add the content database to, then enter the SP_PWAContent for the Database Name.
  4. Leave the other settings with their defaults and click OK. (A scripted alternative to these steps is sketched below.)
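Steps 1 through 4 can also be scripted. The following SharePoint 2010 Management Shell sketch attaches the restored content database to the chosen web application; the database server name and web application URL are placeholders.

    # Minimal sketch: attach the restored content database to the web application
    # (equivalent to Application Management -> Manage content databases -> Add a content database).
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    Mount-SPContentDatabase -Name "SP_PWAContent" -DatabaseServer "SQLSERVER01" -WebApplication "http://<server>"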

     


 

Deploy Dynamic Workflow and Workflow Visualization solutions

Deploy the workflow:

  1. Under the MS Project Solution Starters Deployment download, unzip the dynamic workflow component.
  2. Open and read the ReadMe.docx then edit the DeployPowerShell.cmd script. Edit the SiteUrl parameter so it equals the URL of your new PWA web app.

  3. Deploy the solution by running the DeployPowerShell.cmd script.

Deploy the workflow visualization web part:

  1. Under the MS Project Solution Starters Deployment download, open the Workflow Visualization Webpart folder.

Figure 13 – Deploy Workflow Visualization webpart

  2. Open and read the ReadMe.docx then edit the DeployPowerShell.cmd script. Edit the SiteUrl parameter so it equals the URL of your new PWA web app.
  3. Deploy the solution by running the DeployPowerShell.cmd script.

 

Provision a new Project Web Application    

  1. If you have not already done so, create a new Project Server Service Application under Application Management -> Service Applications -> Manage Service Applications.
  2. In the Project Server Service Application Management page, click Project Server.
     Note: the configuration requires the path http://<server>/pwa. If this site already exists, it must be removed before your new PWA can be attached to the service application.
  3. Click Create Project Web App Site.
  4. Select the web application that your content database was restored to, and then, for each database name, enter the name of the corresponding database that was restored in the Restore & Attach Databases section. Be sure to leave the Project Web App path at its default value of PWA, since this is the path where the restored site collection is located.


PWA Settings

  1. Open your browser to your new PWA site. In the quick links bar, select Server Settings, then under Workflow and Project Detail Pages click Project Workflow Settings
  2. Change Workflow Proxy User account to the administrator account used to provision your PWA site and click Save.
  3. Go to Site Actions -> Site Settings and under Site Administration, select Workflow Settings.
  4. Click Add a workflow. Follow the steps detailed above in Configure the Dynamic Workflow solution to set up the workflow.
  5. Once the workflow has been configured, go back to the PWA site and go to Server Settings -> Enterprise Project Types (under Workflow and Project Detail Pages). Select the SharePoint Enhancement EPT item.
  6. Under Site Workflow Association, select the newly created workflow and click Save.


 

Once you have all of the databases restored and the additional configuration changes made, you should see all of the projects in various stages and will be able to modify the settings to suit your organization’s needs.

 

Installing the Solution in the Information Worker Demonstration and Evaluation Virtual Machine

If you would like to evaluate this SharePoint Lifecycle Management Solution with Project Server 2010 in an existing demonstration environment, you can do so by leveraging the publicly available demo virtual machine: 2010 Information Worker Demonstration and Evaluation Virtual Machine (RTM).

To deploy and configure this solution in the IW Demonstration & Evaluation VM, follow these steps:

  1. Download, deploy and configure the IW Demonstration & Evaluation VM
  2. Download and apply, at a minimum, the February 2011 Project Server 2010 Cumulative Update: http://support.microsoft.com/kb/2475879 (the PWA database schema has been patched with this latest CU)
  3. Restore all five databases within the VM’s SQL instance (you can mount the databases as an ISO file)
  4. Go to the Central Administration and add the Content database to the Intranet site

Figure 17 – Adding Content DB to IW Demo VM

  5. Deploy the Dynamic Workflow and Workflow Visualization WSPs to http://intranet.contoso.com (see the detailed step-by-step instructions earlier in this document)
  6. Provision a new PWA instance at http://intranet.contoso.com (make sure you use the same database names you used when you restored them, for example SPPWA_Published)

Figure 18 – Provision PWA instance in IW Demo VM

Figure 19 – PWA instance under Intranet.Contoso.Com

Additionally, please check out the Microsoft Project 2010 Demonstration and Evaluation Installation Pack to further explore the capabilities and scenarios of Microsoft Project and Project Server 2010 (it contains hands-on labs and demo scripts that showcase Microsoft Project 2010).

 

 

Using the Solution

Once the solution has been created, we can begin to create new project proposals.

Creating a Proposal

In your SharePoint Lifecycle PWA, go to your project center. In the projects ribbon, select New -> SharePoint Enhancement.


In the Initial Proposal form that displays, enter a name and description for the proposal. This information will assist the SharePoint Analyst in determining the requirements of the project and creating the initial scope.

Gathering Information

In this stage, the Business Analyst will gather information on the new project proposal, determine the technical requirements, create baseline estimates, and estimate the strategic impact of the project.

  1. Go to the Project Center. Find and click the new proposal.
  2. In the Proposal Stage Status page, you will see information regarding what needs to be done for the current stage. Click the Next button in the Project ribbon.
  3. The Initial Requirements form prompts the SharePoint Analyst to estimate the cost and effort for the project along with the risk associated with completing the project. The page also asks for additional technical information that might be required to complete the project and the SharePoint Project Type.


  4. Once the information has been entered, click Next in the Project ribbon. This will take you to the Scheduling page.
  5. A project template has been created with some baseline estimates. These estimates can be changed through the web interface or using MS Project Professional. In the Project tab in the ribbon, click Resource Plan.
  6. From the Plan tab, click Build Team. The Team Builder page will allow you to assign generic resources to your project to create your baseline estimates for the project. Select some generic resources for this project then click Save & Close.


  7. Once the resources have been selected, you will be able to create your resource plan. In the Plan ribbon, set Work Units to Full-time Equivalent, and Timescale to Months.
  8. Estimate the number of FTEs that will be needed for each resource type. Once that’s done, click Save and then close the Resource Plan page.

After finalizing the estimates for your project, click the Next button in the Project tab in the ribbon.

  9. The strategic impact page will help evaluate the project proposal against the strategic goals of the company.


  10. After all strategic impact rankings have been made, click the Commit button in the Project ribbon. This will start the approval process for the project proposal.

Getting Approval

As a part of the Initial Requirements stage, executive approval is required for the project to continue. Once the project requirements, baseline estimates, and strategic impact have been submitted, the SharePoint governance team must meet to discuss the project and its merit along with any potential implications it may have.

  1. In the quick links bar, click the Workflow Approvals link.
  2. The approvals page will contain numerous approval tasks that represent pending approvals. Find the approval task for the new project and edit the item.


  1. Click the Approve button in the upper-left corner. Once the project has been approved, it will move to the selection stage.

Project Selection

The project selection process ranks each project by its strategic value and then allows you to apply budget or resource constraints to determine which projects are feasible to implement.

  1. In the quick links bar, select Portfolio Analysis. Click New in the Analyses ribbon to create a new portfolio for analysis.
    1. Note that before doing this it is common to set up project dependencies to account for projects that are mutually exclusive, or that require other projects to be completed before they can begin. To do this, select Project Dependencies from the Analyses tab.
  2. In the new analysis creation page, enter a name and description for your portfolio analysis. Next select the projects you wish to include. Also, be sure the Primary Cost Constraint is set to Estimated Cost.
  3. To include resource analysis in your project selection process, check the Analyze time-phased project resource requirements checkbox. Be sure to set the following fields:
  • Resource role custom field – Role
  • Resource capacity impact – Committed and proposed assignments affect capacity
  1. Click Next: Prioritize Projects to see all of the projects’ strategic impact rankings. Here you can make modifications from what was originally entered in the Initial Requirements stage.


  1. Click Next: Review Priorities to review the relative priorities of all of the projects in your portfolio. No changes can be made in this screen.
  2. Click Next: Analyze Cost to see the estimated total cost of all projects in your portfolio. Enter a new budget amount under Cost Limits here to see which projects will be chosen based on your new budget constraint. Click Save As in the ribbon to save your current budget scenario.


  1. Click Next: Analyze Resources to see which projects can be selected based on resource constraints. Note that projects that were not selected due to budget constraints are not displayed in this part of the analysis. Under each project's New Start column, select a new start date to stagger the start of your projects throughout the year. You can also choose to hire more resources to bring more projects into your selected portfolio.
  2. Click the Requirements Details button in the Analysis ribbon to view the actual resource breakdown of how many hours are required for each type of resource for each project. In the top section, you can see how many FTEs of each resource are available for allocation for that time period. Cells highlighted in red display a deficit in the required resources for that time period. Click Save As in the ribbon to save your current scenario.

 


  1. Once you have identified the projects that will be selected, they must be approved by the portfolio manager. In the quick links bar, select Workflow Approvals. Find and edit the pending approval task for each of the selected projects, then approve them.

Project Execution

This stage is where project work is executed. This includes actual task planning, detailed requirements gathering, developing test plans, as well as any development, testing, documentation, etc. Much of the task planning and scheduling can be done by connecting MS Project Professional to your PWA instance. Project teams will also have access to the team site for this project for collaboration on documentation, discussions, etc.

  • Once the execution phase is completed, open the Project Stage Status page and click Submit in the Project ribbon to move to the next stage.

 

Post Mortem

The Post Mortem stage is where we gather the “lessons learned”. It is common to hold a formal meeting with the project team and stakeholders to gather what went well and what did not.

 

  1. Open the Project Stage Status page and click the Next button in the Project ribbon.
  2. Enter a 1-10 rating of the overall project health (a simple measure of the project’s execution), then enter the notes and action items from the post mortem meeting.
  3. Click Submit in the Project ribbon to complete the project proposal.
  4. Check in the project plan by clicking Close in the ribbon.

 

Where to go from here

Once you’ve completed your evaluation using our sample data and solution accelerator, you can use the example solution to accomplish several additional tasks. For example, use the 5db backup with sample data to demonstrate the solution, update the configuration parameters to accommodate your organization’s particular business needs, or even work with a consulting partner to design a full solution based on these concepts; the following outlines what steps you might take.

Business demonstration

Using the backups with sample data provided gives you a great starting point for a discussion around Demand Management and how this solution might be employed in your organization.

Adapt for your organization

Since the solution is based on simple configuration with no custom code, it is reasonably straightforward to configure and customize it to meet your organization’s business needs. The following items are some simple examples of what can be changed to drastically improve the relevance of this solution for your business:

  • Business Drivers
  • Project Plan Template
  • Enterprise Project Types
  • Workflow Phases and Stages
  • Project Detail Pages and Custom Fields

Making it real

In order to fully realize the benefit of this solution, you’ll want to engage a Microsoft Partner http://pinpoint.microsoft.com or Microsoft Consulting Services, who can help you with the following items:

  • SharePoint Governance & Maturity Model
  • Solution Customization
  • Business Process Engineering
  • End user training

Learn about Microsoft Project Server 2010

For more information on workflows in Project Server, visit the Project Server 2010 Demand Management site: http://technet.microsoft.com/en-US/projectserver/ff899331.aspx.

 

About the Authors

Scott Jamison is Chief Architect and CEO at Jornata (www.jornata.com), the worldwide experts in implementing collaborative solutions based on Microsoft SharePoint. Scott is an experienced leader with almost 20 years directing managers and technology professionals to deliver a wide range of business solutions for customers. Scott is co-author of Essential SharePoint 2010.

 

Mark Candelora is a Managing Consultant and Development Manager at Jornata. Mark is an experienced developer and architect with almost 10 years’ experience developing and delivering business-centric applications using various technologies including Microsoft SharePoint.

 

Christophe Fiessinger is Senior Technical Product Manager for Microsoft. As part of the Microsoft Office Division Product Marketing Group he focuses on the enterprise project and portfolio management solution. Be sure to check out the latest blog entries: http://blogs.msdn.com/chrisfie

 

 

Artifacts

The files below are required to complete the installation of this solution.

 

The following files are available for download at http://go.microsoft.com/fwlink/?LinkID=218030

  • 5db.zip – contains 5 database backup files (the files are used as part of Option 2: Restoring From Database Backup Files)
  • SPEnhProj.wsp – a SharePoint solution file containing the custom site template that will be used for projects in this solution
  • SharePoint Enhancement.mpt – a project template for use with the enterprise project type in this solution
  • WF Images.zip – contains the images that will be used for the Workflow Visualization web part.

 

The following files are available as part of the deployment download from the Microsoft Project Server 2010 Solution Starters at http://archive.msdn.microsoft.com/P2010SolutionStarter

  • DynamicWorkflow.zip – contains the deployment artifacts for the Dynamic Workflow solution
  • Workflow Visualization WebPart.zip – contains the deployment artifacts for the Workflow Visualization Web Part

 

 

List of Figures

Figure 1 – Deploying the Dynamic Workflow solution    12

Figure 2 – Stage 1: Initial Proposal    15

Figure 3 – Stage 2: Initial Requirements    16

Figure 4 – Stage 2: Approval    16

Figure 5 – Stage 3: Project Selection    17

Figure 6 – Stage 3: Approval    17

Figure 7 – Stage 4: Execution    18

Figure 8 – Stage 5: Post Mortem Assessment    18

Figure 9 – Dynamic Workflow Stage Definition    19

Figure 10 – Database Restore    22

Figure 11 – Add Content Database    23

Figure 12 – Deploy Dynamic Workflow Solution    24

Figure 13 – Deploy Workflow Visualization webpart    24

Figure 14 – New Project Web Application    25

Figure 15 – Site Workflow Association    26

Figure 16 – Site Workflow Association    26

Figure 17 – Adding Content DB to IW Demo VM    27

Figure 18 – Provision PWA instance in IW Demo VM    28

Figure 19 – PWA instance under Intranet.Contoso.Com    28

Figure 20 – Create a new SharePoint Enhancement    29

Figure 21 – Initial Requirements form asks the business analyst to enter baseline estimates for the proposal.    30

Figure 22 – Use generic resources to create a baseline estimate of tasks. This will allow you to substitute in actual resources once the execution phase begins.    30

Figure 23 – Creating a resource plan allows you to schedule your resource allocation when planning your project portfolio.    31

Figure 24 – The strategic impact page allows you to rank a project proposal against the company’s business drivers to estimate the gains realized by implementing the project.    31

Figure 25 – Approving the project allows the proposal to move to the selection stage.    32

Figure 26 – Review your projects’ strategic impact rankings.    33

Figure 27 – View your cost constraints for your project portfolio.    34

Figure 28 – View details of resource requirements for each project to see what you’re lacking for each resource/time period.    35

 

References

Microsoft Project 2010 Resources:

Product information

End-User Product Help

Interactive content – Videos & Sessions & Webcasts

Project Professional 2010 and Project 2010 Demo Image:

IT Professional related – TechNet

Developer related – MSDN

Got Questions? Search or ask in the official Microsoft Forums!

SharePoint 2010 Products:

Guide to the Data Development Platform for .NET Developers

MSDN Library

.NET Development

Articles and Overviews

Data Access and Storage

ADO.NET

XML for Analysis Specification

Microsoft Data Development Technologies At a Glance

Guide to the Data Development Platform for .NET Developers

Hello, Data

Microsoft Data Development Technologies: Past, Present, and Future

OData by Example

Testability and Entity Framework 4.0


Guide to the Data Development Platform for .NET Developers


SQL Server Technical Article

Published: November 2009

Applies to: SQL Server 2008

Summary: This whitepaper covers all facets of the .NET data development platform. This includes not only client-side and service-based APIs but also .NET APIs for programming at a server level inside the SQL Server 2008 database and for developing and testing a SQL Server database application. It also includes information on future directions of the .NET and SQL Server development platform.

Introduction

The large majority of applications use a database to store, query, and maintain their data. Almost all of them use a relational database that is designed using the principles of data normalization and queried with set-based queries using the SQL query language. Application programmers need to tie user activities, such as ordering products and browsing through lists of items for sale, to database activities like SELECT statements and stored procedure execution in a visually pleasing and responsive graphical user interface. They need data access and data binding to the user interface that is encapsulated in easy-to-use components that employ the same object-oriented concepts that are used in the rest of the application. To develop responsive, robust applications on time and on budget, programmers need to reduce the time from model to implementation, as well as to easily write automated test cases to ensure the application is robust and scalable.

But data usage doesn’t end there. One of the key challenges today is to make sense of the data in a timely manner. This means applications often add Business Intelligence to the OLTP (online transaction processing) data that is managed in a relational database for timely decision making within OLTP applications. The OLTP data can be imported, exported, combined with other data and transformed using ETL (extract, transform, and load) technologies, and analyzed using an OLAP (online analytical processing) system. Patterns and insights can be teased from the data using smart data mining algorithms. User interactions can be real-time, but reports serve an important role in data presentation and can be embellished graphically using charts, gauges, maps and other user-interface controls. Programmers enhancing applications with Business Intelligence should be able to leverage existing experience by using the same basic programming APIs and methodology as they use in application development.

Since the .NET Framework 1.0 was released in 2002, it has become programmers’ preferred framework for developing software, including data-driven applications, on Microsoft platforms. .NET 1.0 includes a set of classes known as ADO.NET to provide a substrate for database development. ADO.NET is based on a provider model. Database products (like SQL Server, Oracle, and DB2) hook into the model by delivering their own data provider. .NET 1.0 shipped with three data providers: bridge providers to existing ODBC drivers and OLE DB providers, and the SqlClient provider for the SQL Server database. Over time, not only has the SqlClient provider been improved to keep pace with innovations in the SQL Server database, but SQL Server has expanded the number of integration points between the database and its related features and the .NET Framework. Today .NET APIs are an integral part of the SQL Server product. In this whitepaper, I’ll describe the current state of the Microsoft Data Development Platform and illustrate the deep integration between .NET and SQL Server to show why SQL Server is the preferred database product for .NET developers. The .NET database platform has been enhanced since the advent of .NET to include functionality like object-relational mapping and a language-integrated query capability. I’ll also discuss the direction that platforms and integration will take in the future so you can plan your overall database development strategy.

Flexible Data Access Frameworks for Productive .NET Application Development

Data Access Frameworks are what developers normally consider to be “database programming”. .NET developers have a choice of using the low-level ADO.NET object model or the higher-level conceptual model defined by the ADO.NET Entity Framework. You can also program your client-level database access through a well-known REST-based services model, exposing the database as a service in the world of Software as a Service. Each of the client-level data access frameworks integrates seamlessly with multi-tier applications and with visual controls in the .NET platform.

ADO.NET – The Substrate

ADO.NET is the base database API and the basis for all of today’s .NET-based data access frameworks. All existing .NET applications in releases prior to .NET 3.5 use this API. Programmers should use ADO.NET when they want direct control of the SQL statements used to communicate with the underlying database and also to access specific database features not supported by the ADO.NET Entity Framework.

Microsoft ships three SQL Server-related ADO.NET data providers. System.Data.SqlClient is the provider for the SQL Server database engine, mentioned previously. Microsoft.Data.SqlCe is a provider for SQL Server Compact Edition that works on the desktop or on compact devices. There is also a specialized data provider for SQL Server Analysis Services and Data Mining functionality (the ADOMD.NET provider), which will be mentioned later.

ADO.NET uses the “connection-command-resultset” paradigm. Programmers open a Connection to the database and issue Commands consisting of stored procedures or SQL statements: these either perform insert, update, and delete operations, or are SELECT statements that return a DataReader (resultset). Additional classes encapsulate transactions, procedure parameters, and error handling. ADO.NET 1.0 and 1.1 used an interface-based design; that is, programmers used database-specific classes that implemented the well-known IDbConnection, IDbCommand, and IDataReader interfaces. ADO.NET 2.0 added base classes so database-specific classes derived from the generic DbConnection, DbCommand, and DbDataReader classes. The base classes or interfaces provide the generic functionality; database-specific functionality is encapsulated in provider-specific classes (e.g. SqlConnection, SqlCommand, and SqlDataReader). For more information, see “Generic Coding with ADO.NET 2.0 Base Classes and Factories” at http://msdn.microsoft.com/en-us/library/ms379620(VS.80).aspx.
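
As a minimal illustration, the following C# sketch shows the connection-command-resultset pattern with the SqlClient provider; the connection string, table, and column names are placeholders rather than part of any real schema.

    using System;
    using System.Data.SqlClient;

    class AdoNetSample
    {
        static void Main()
        {
            // Placeholder connection string; in practice it is read from a .config file.
            string connectionString =
                @"Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=true";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM dbo.Customers WHERE Country = @country",
                connection))
            {
                command.Parameters.AddWithValue("@country", "USA");
                connection.Open();

                // The DataReader streams the resultset forward-only.
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetString(1));
                }
            }
        }
    }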

Database transactions are accommodated in the model in two ways. Local transactions can be represented by a DbTransaction object which is specified as a property of the connection object. Alternatively, .NET developers can use the System.Transactions library. System.Transactions supports local and distributed transactions and passes and tracks transactions through an instance of the TransactionScope class.
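
A minimal sketch of the System.Transactions approach follows; the Accounts table and the connection string are hypothetical, and the TransactionScope remains a local transaction unless a second durable resource is enlisted.

    using System.Data.SqlClient;
    using System.Transactions;

    class TransactionSample
    {
        static void Transfer(string connectionString)
        {
            using (TransactionScope scope = new TransactionScope())
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open(); // the connection enlists in the ambient transaction

                new SqlCommand("UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1",
                    connection).ExecuteNonQuery();
                new SqlCommand("UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE Id = 2",
                    connection).ExecuteNonQuery();

                scope.Complete(); // commit; disposing the scope without Complete() rolls back
            }
        }
    }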

The specific server, database, and other connection parameters are specified in a connection string. The preferred method for specifying a connection string is to include it in the application’s .NET configuration (.config) file. There is a standard configuration file location used for ADO.NET connection strings.

Programmers use the .NET type system in their applications, but the database uses relational data types, so there needs to be a low-level .NET type system-to-database type system correspondence. Database types mostly follow the ISO-ANSI SQL standard type system, but database vendors can include database-specific types. In ADO.NET, common database types are represented using an enumeration for the common types, System.Data.DbTypes. Programmers rely on documented mapping of their database’s data type to DbTypes, but providers can also add database-specific types. SqlClient accommodates SQL Server-specific types by using a SqlDbTypes enumeration. A type correspondence mismatch occurs because most types in the .NET type system have nothing that distinguishes database NULL from an empty instance of a type. A set of SQL Server-specific types in the System.Data.SqlTypes namespace not only encapsulate the concept of database NULL by implementing an INullable interface, but are isomorphic with the data types in SQL Server. .NET 2.0 adds a generic nullable type that works with .NET value types (structures). Nullable types can be used as an alternative to System.Data.SqlTypes.
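
The short sketch below contrasts the two approaches to representing database NULL; it uses only the System.Data.SqlTypes namespace and the generic nullable type.

    using System;
    using System.Data.SqlTypes;

    class NullHandlingSample
    {
        static void Main()
        {
            // SqlInt32 carries NULL-ness with the value and mirrors the SQL Server int type.
            SqlInt32 sqlValue = SqlInt32.Null;
            Console.WriteLine(sqlValue.IsNull);   // True

            // A nullable value type is the general-purpose .NET 2.0 alternative.
            int? clrValue = null;
            Console.WriteLine(clrValue.HasValue); // False
        }
    }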

The SqlClient provider is the lowest-level .NET data access API and also the closest to SQL Server. It is the only client database API that supports all of the latest enhancements in SQL Server. Programmers must use native ADO.NET calls when they need access to these features. For example, only ADO.NET and SqlClient directly support the Filestream storage with streaming I/O and the table-valued parameters features of SQL Server 2008. SqlClient is also the only database API to directly support SQL Server data types implemented in .NET, such as SQL Server 2008’s spatial data types and hierarchyid type, as well as SQL Server UDTs (user-defined types) implemented in .NET in the server and on the client. The SQL Server and .NET teams work closely together; an example of this is the introduction of a new .NET primitive type, the System.DateTimeOffset data type, in .NET 3.5 that corresponds to SQL Server 2008’s datetimeoffset data type. This makes it straightforward for programmers to take advantage of the very latest SQL Server features.

ADO.NET also provides for a “disconnected update” scenario using a class called the DataSet that contains a collection of DataTable instances with collections of DataRow and DataColumn objects. The DataSet interacts with the database through a DataAdapter class, and includes relational database-like features such as indexing, primary keys, defaults, identity columns, and constraints, including referential constraints between DataTables. Programmers can use the DataSet as an in-memory object model for database data, although with the advent of the ADO.NET Entity Framework a model based on entities is preferred because you program against business objects rather than a relational-style object model.
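
The sketch below shows the disconnected pattern with a DataAdapter and a command builder; the query, table, and connection string are illustrative placeholders.

    using System.Data;
    using System.Data.SqlClient;

    class DataSetSample
    {
        static DataSet LoadAndUpdateCustomers(string connectionString)
        {
            // The adapter fills the DataSet; the command builder derives
            // INSERT/UPDATE/DELETE commands from the SELECT statement.
            var adapter = new SqlDataAdapter(
                "SELECT CustomerID, CompanyName FROM dbo.Customers", connectionString);
            var builder = new SqlCommandBuilder(adapter);

            var data = new DataSet();
            adapter.Fill(data, "Customers");

            // Disconnected edit...
            data.Tables["Customers"].Rows[0]["CompanyName"] = "Contoso Ltd.";

            // ...followed by resynchronization with the database.
            adapter.Update(data, "Customers");
            return data;
        }
    }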

LINQ – Native Data Querying integrated into .Net Languages

LINQ is a SQL-like, strongly-typed query language that is implemented using built-in programming language constructs in .NET. This provides the ability to query any implementation of a .NET IEnumerable<T> class using SQL-like syntax. LINQ defines a standard set of query operators that allow both set-based (e.g. projection, selection) and cursor-based (e.g. traversal) queries using programming language syntax. LINQ uses a provider model to allow access to domain-specific stores such as relational databases, XML, Windows Active Directory, collections of objects, and ADO.NET DataSets. In most cases, the providers work by translating LINQ queries to queries against the domain-specific store.

Using LINQ queries that produce SQL query statements gives programmers compile-time syntax checking and IntelliSense, reduces data access coding time compared to coding with raw SQL strings, and also provides protection from SQL injection. This is an improvement for programmers who don’t use stored procedures and code SQL strings directly in application code, because those SQL strings are not syntax-checked by the underlying programming language compiler.

Two implementations of LINQ over ADO.NET were introduced in .NET 3.5. LINQ to SQL is a SQL Server-specific implementation over a collection of tables represented as objects. It includes a mapping tool that maps tables to objects, stored procedure support, and implements almost all LINQ operators by translating them into T-SQL. Programmers can insert, update, and delete rows from database tables through objects using a DataContext class, which extends the basic LINQ functionality.  A similar LINQ provider also exists for the SQL Server Compact Edition. 
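
The sketch below shows the attribute-mapped flavor of LINQ to SQL; in practice the O/R designer or the SqlMetal tool generates the entity classes and a strongly typed DataContext, so the Customer class and the connection string here are hypothetical.

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "dbo.Customers")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)] public string CustomerID;
        [Column] public string CompanyName;
        [Column] public string Country;
    }

    class LinqToSqlSample
    {
        static void Main()
        {
            string connectionString =
                @"Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=true";

            using (var context = new DataContext(connectionString))
            {
                // The query is translated to T-SQL and executed when it is enumerated.
                var usCustomers = from c in context.GetTable<Customer>()
                                  where c.Country == "USA"
                                  orderby c.CompanyName
                                  select c;

                foreach (var customer in usCustomers)
                    Console.WriteLine(customer.CompanyName);
            }
        }
    }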

Programmers use the ADO.NET DataSet object because it represents its data as a familiar collection of tables that contain columns and rows. There is also a LINQ provider over the DataSet class that provides functionality not included in the original DataSet classes using standard LINQ query syntax. This implementation allows selection, projection, joins, and other LINQ operators against the DataSet’s collection of DataTables. Programmers that use the DataSet in existing applications can use the LINQ provider to extend the base functionality. Using the LINQ to DataSet enhancements does not change the paradigm: filling the DataSet with database data, and updating the database when changes are made to the DataSet, are still controlled by the DataAdapter class.
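
A LINQ to DataSet query might look like the following sketch; the Customers table and its columns are placeholders, and the AsEnumerable and Field<T> extension methods come from System.Data.DataSetExtensions.dll.

    using System;
    using System.Data;
    using System.Linq;

    class LinqToDataSetSample
    {
        static void Query(DataSet data)
        {
            var names = from row in data.Tables["Customers"].AsEnumerable()
                        where row.Field<string>("Country") == "USA"
                        select row.Field<string>("CompanyName");

            foreach (string name in names)
                Console.WriteLine(name);
        }
    }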

ADO.Net Entity Framework and the Entity Data Model Overcome the Object Relational Impedance Mismatch

Object-oriented programming concepts have taken hold in the development community, enjoying the same widespread popularity as relational normalization-based design and set-based programming have in the database community. Therefore, there is a need to bridge the gap between relational sets and collections of objects and to map operations on object instances to changes in the database. Programmers can bridge this gap, known as the object-relational impedance mismatch, by using the ADO.NET Entity Framework.

The ADO.NET Entity Framework should be considered the development API of choice for .NET SQL Server programmers going forward. The Entity Framework raises the abstraction level of data access from logical relational database-based access to conceptual model-based access. For more information on this, reference the whitepaper “Next-Generation Data Access: Making the Conceptual Level Real” at http://msdn.microsoft.com/en-us/library/aa730866(VS.80).aspx.

The Entity Framework consists of five main parts: the Entity Data Model (EDM), the EntityClient provider, the Entity SQL language, the ObjectServices libraries, and the LINQ to Entities provider.

The Entity Data Model (EDM) specifies the conceptual model that you program against and its mapping to underlying database primitives. With EDM, programmers specify the classes and relationships that will be used in the object model (conceptual schema), how tuples and relations are represented in the database (physical schema), and how to map between the two (mapping schema). EDM directly supports table-per-type mapping as well as modeling inheritance using either table-per-hierarchy or table-per-concrete-type. But mapping supports more than just database tables. Mapping entities to stored procedure resultsets and projections as a ComplexType, database views, and on-the-fly projection mapping as anonymous types are also supported. The EDM is specified by using a Visual Studio designer against a set of XML schemas. This metadata can be represented as an XML file (edmx) which can be compiled into a program resource.

Entity Framework is layered over ADO.NET and most ADO.NET data providers have been enhanced to work with it. Programmers can work directly with an Entity Data Model using ADO.NET by using the EntityClient provider. To use EntityClient, you specify both the Entity Data Model and the underlying data provider (e.g. SqlClient) in the ADO.NET connection string.

The entity framework defines a language called Entity SQL that extends traditional SQL with some additional object-oriented query constructs.  Entity SQL is the low-level query language of the Entity Framework that is used to query against the conceptual model.  Finally, the Entity Framework includes an implementation of the LINQ query language, known as LINQ to Entities. LINQ to Entities implements almost all of the constructs in Entity SQL.

You can program the Entity Framework using three different programming models: LINQ to Entities, ObjectServices, or “native” ADO.NET using the EntityClient provider. Both EntityClient and ObjectServices use Entity SQL strings directly. Although some lower-level query functionality can only be expressed using Entity SQL, LINQ to Entities is the overwhelming favorite among programmers that use the Entity Framework. LINQ to Entities is preferred for most development because using the LINQ query language instead of Entity SQL gives programmers compile-time syntax checking and IntelliSense, reducing coding time.
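
As a sketch only: the snippet below assumes an .edmx model whose designer-generated ObjectContext is named NorthwindEntities and exposes a Customers entity set; those names, and the Customer entity, are placeholders generated from the model rather than part of the Entity Framework itself.

    using System;
    using System.Linq;

    class EntityFrameworkSample
    {
        static void Main()
        {
            // NorthwindEntities is the ObjectContext generated from the .edmx model (assumed).
            using (var context = new NorthwindEntities())
            {
                // LINQ to Entities: checked at compile time, translated to store SQL at run time.
                var usCustomers = from c in context.Customers
                                  where c.Country == "USA"
                                  select c;

                foreach (var customer in usCustomers)
                    Console.WriteLine(customer.CompanyName);

                // The same query expressed in Entity SQL against the conceptual model.
                var esqlQuery = context.CreateQuery<Customer>(
                    "SELECT VALUE c FROM NorthwindEntities.Customers AS c WHERE c.Country = 'USA'");
            }
        }
    }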

The Entity Framework includes a type system that supports Entity Types, Simple Types, Complex Types, and Row Types. Entity Types contain a reference to an instance of the type called an EntityKey. The set of supported Simple Types is a subset of data types found in most relational databases. A complex type is a set of multiple simple types, e.g. a resultset from a stored procedure or a projection from a SQL query. Entities and Associations between entities live in containers known as EntitySets and AssociationSets. EntityContainer is the top-level object that usually maps to a database or application’s scope.

The programming model supports insert, update, and delete operations against the entities by tracking changes with an ObjectContext. The model can generate the SQL required for these operations or they can be mapped to a set of user-written stored procedures. Both lazy and eager loading of related entities is supported. Transactions are accommodated by using System.Transactions.

Entity Framework 4.0, which will ship with .NET 4.0, adds a variety of new features that support a model-first methodology, test-driven development (making it easier to create a mocking layer for testing), and persistence ignorance. In addition, there are improvements in the SQL generation layer, both in the translation of LINQ to Entities queries and in the SQL Server provider specifically, that make the generated SQL more performant. EF 4.0 also directly supports a representation of foreign keys between entities, which is useful for data binding.

ADO.NET Data Services – RESTful Data Access

All of the database client APIs that I’ve mentioned so far require a direct connection to the database from either the client or middle-tier. But popular programming models such as Silverlight and AJAX (Asynchronous Javascript and XML) interact with all data sources directly from browser code, using either XML or JSON (JavaScript Object Notation) payloads.  This pattern requires programmers to use a middle-tier service that’s exposed through either SOAP or REST-based APIs. ADO.NET Data Services is the database API that exposes the data as a REST-based Web Service. The service is a WCF-based (Windows Communication Foundation) service that exposes database operations through the standard HTTP operations against a REST endpoint like GET, POST, PUT, and DELETE.

ADO.NET Data Services can use an Entity Framework model as an endpoint directly. It’s also able to work with other LINQ-based data sources (including LINQ to SQL) for reading. If you would like your non-EF data source to be available over ADO.NET Data Services for updating, you must provide an implementation of the IUpdateable interface. SQL clauses like WHERE, GROUP BY, and ORDER BY are implemented as parameters on the URL.

ADO.NET Data Services exposes entities (or any sets of data, such as SharePoint lists in SharePoint 2010) as either ATOMPub feeds (ATOM is a standard XML format used for data feeds) or collections of JSON objects. JSON objects are especially useful with AJAX clients. Because security needs to be considered in any service available on the Internet, granular data access security is enabled by using a set of data access expressions coded in the InitializeService method in combination with the authentication and authorization scheme of your choice. By default, no resources or associations are available unless you override the default InitializeService implementation. To build a robust application, ADO.NET Data Services clients may also require metadata; that is, information about the data available through a specific service endpoint.  A REST resource endpoint that supplies metadata for discovery purposes is also part of the model.

In addition to the capability to serve data, ADO.NET Data Services includes a client consumer API. In this API, the client “connects” to the endpoint by using the REST URL. Programmers query the data with a special LINQ provider that fetches updateable EDM object instances. This pattern encapsulates ADO.NET Data Services access so that it looks more like traditional data access code rather than raw HTTP and XML calls.
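
A minimal client-side sketch using DataServiceContext from System.Data.Services.Client follows; the service URI is a placeholder, and the Customer class mirrors an entity assumed to be exposed by the service (the comment shows, for illustration, the kind of URL the LINQ query is translated into).

    using System;
    using System.Data.Services.Client;
    using System.Data.Services.Common;
    using System.Linq;

    // Client-side shape of an entity assumed to be exposed by the service.
    [DataServiceKey("CustomerID")]
    public class Customer
    {
        public string CustomerID { get; set; }
        public string CompanyName { get; set; }
        public string Country { get; set; }
    }

    class DataServicesClientSample
    {
        static void Main()
        {
            var context = new DataServiceContext(new Uri("http://localhost/Northwind.svc"));

            // Translated by the client library into a REST request along the lines of
            // /Customers?$filter=Country eq 'USA'&$orderby=CompanyName
            var customers = context.CreateQuery<Customer>("Customers")
                                   .Where(c => c.Country == "USA")
                                   .OrderBy(c => c.CompanyName);

            foreach (var customer in customers)
                Console.WriteLine(customer.CompanyName);
        }
    }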

Programmers can use rich application frameworks that start with ADO.NET Data Services as a REST-based substrate for interacting with a database. One such framework is ASP.NET Dynamic Data, a template-based framework that allows you to build a data-driven application quickly. It can use ADO.NET Data Services and LINQ to SQL or Entity Framework to access the relational database. Another framework is .NET RIA Services, which provides a set of components that allow building rich Internet applications (hence the acronym) using common patterns in ASP.NET, service-oriented architecture, and Silverlight 3.0 as a graphical user interface. It includes a pair of service models that hook directly into the ADO.NET Entity Framework, LINQ to SQL, or ADO.NET Data Services. This allows data to be fetched through a service (a la ASP.NET AJAX) and bound to Silverlight data controls. These are mapped by .NET RIA Services to database maintenance functionality in the service models. .NET RIA Services is currently part of Visual Studio 2010 Beta 2.

Harness your .Net skills within the Database for Scalable, Reliable and Manageable Applications

Data access APIs are normally considered as programming outside the database server. A specific database product, such as SQL Server, exposes programming models for in-server programming as well. Beginning with SQL Server 2005, SQL Server permitted in-server programming of database objects such as user-defined functions using .NET APIs in addition to traditional programming in native database languages such as SQL, MDX, and DMX. In addition, SQL Server provides mechanisms to extend and customize the product itself using .NET APIs, and .NET support extends deep into all facets of the server product. With SQL Server as the database, programmers can leverage their existing knowledge of .NET programming to add value to existing applications and also leverage knowledge of .NET APIs such as ADO.NET, which can be used both “outside” and “inside” the server. .NET integration with the database engine includes support for both application programmers and database administrators.

Programming SQL Server with .NET

Since SQL Server 2005, programmers have had the ability to incorporate .NET in database object code as an adjunct and alternative to using T-SQL. Hosting .NET code inside the SQL Server engine (that is, using SQL Server as a .NET runtime host) is commonly referred to as “SQLCLR”. One use of this .NET support is as a means to access external resources such as the registry or file system directly from database code, without resorting to unsafe mechanisms such as enabling xp_cmdshell or using undocumented system extended stored procedures. In addition, the programmer has a choice of .NET or T-SQL when writing user-defined functions, stored procedures and triggers. This allows a choice of best-of-breed mechanisms, that is, the ability to use .NET code where it’s more performant (such as regular expressions and complex calculations) and T-SQL when it’s more performant (database access and complex joins).

The SQL Server team designed its .NET integration with security and reliability in mind, as well as performance. Safe SQLCLR code is limited to a subset of system assemblies that have been hardened and tested with SQL Server. These assemblies are guaranteed not to leak memory or other external resources, such as file handles, by the use of such coding primitives as SafeHandle and ConstrainedExecutionRegion introduced in .NET 2.0. Changes to the .NET 2.0 hosting APIs allow .NET resources such as memory allocation, thread pooling and scheduling, and exception handling to be controlled by and integrated with the rest of the SQL Server engine. SQLCLR code runs in process, as part of the SQL Server service process (sqlservr.exe) for best performance, not in a separate service process.

Data access in SQLCLR database objects follows the ADO.NET object model with optimizations for in-server processing. SQLCLR code can directly execute T-SQL code, rather than requiring a separate ODBC or OLE DB connection (as extended stored procedures require) that use more memory and must be enlisted in the current transaction manually. This is accomplished by using a mechanism to give the appearance of an ordinary SqlConnection instance, but hook up memory pointers for direct access to database buffers, parameters, transaction space, and lock space. The SQLCLR code is part of the current connection. This internal connection is requested by a special connection string parameter “Context Connection=true”.

Server-side programmers use the SqlClient provider and the traditional Connection-Command-DataReader paradigm with a few extensions to accommodate server-specific constructs. This not only lets programmers familiar with ADO.NET leverage their existing expertise, but also allows the production of data access code that’s relatively portable from in-server to middle-tier and even to client should the need arise, with few changes to the code. The additional constructs for server-side code include a SqlContext and DbDataRecord. The SqlContext provides access to the code executor’s identity for access to external resources and a reference to a SqlPipe object. The SqlPipe is used to directly output resultsets or messages.  The DbDataRecord allows the programmer to synthesize resultsets from external resources. You can use this to represent Web Service results or files as a resultset from a stored procedure. In ADO.NET 3.5 and SQL Server 2008 a collection of DbDataRecord objects can also be used to construct table-valued parameters in client code.  Server-side programmers can also write .NET-based table-valued user-defined functions that expose any computed or refactored data as a set, in keeping with the T-SQL set-based paradigm.
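
As an illustrative sketch of the pattern just described, the SQLCLR stored procedure below uses the context connection and the SqlPipe; the Orders table and the procedure name are placeholders, and after building the assembly it would be registered with CREATE ASSEMBLY and CREATE PROCEDURE.

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public class ContosoProcedures
    {
        [SqlProcedure]
        public static void GetRecentOrders(int days)
        {
            // The context connection runs on the caller's connection and transaction.
            using (SqlConnection connection = new SqlConnection("Context Connection=true"))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "SELECT OrderID, OrderDate FROM dbo.Orders " +
                    "WHERE OrderDate >= DATEADD(day, -@days, GETDATE())", connection);
                command.Parameters.AddWithValue("@days", days);

                // SqlPipe streams the resultset straight back to the caller.
                SqlContext.Pipe.ExecuteAndSend(command);
            }
        }
    }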

There is a well-defined mapping between SQL data types and .NET data types and SQLCLR code must adhere to the mapping to be interoperable with T-SQL code. Although this mapping is mostly the same as client-side data type mapping, some data types (e.g. varchar, the non-Unicode string data type) do not have direct .NET equivalents and these cannot be used in SQLCLR result sets or parameters. In addition, .NET data types that do not have SQL Server equivalents (e.g. System.Collections.Hashtable) cannot be returned to T-SQL code, unless they are transformed into types (such as a collection of DbDataRecord objects returned from a stored procedure as a resultset) that T-SQL can work with.

Server-side programmers can also write SQLCLR user-defined aggregates and user-defined types (UDTs). User-defined aggregates extend the reach of aggregate functions to allow functions that are not built into T-SQL, such as the covariance of a population. Programmers can use UDTs to create new scalar data types that extend the SQL Server type system. UDTs and user-defined aggregates must be coded in SQLCLR, not T-SQL. The execution engine recognizes the new aggregate functions and includes them in the query plan, even allowing the aggregation function to be parallelized as with other query plan iterators.

Extended .Net Data Types and Code in SQL Server

In SQL Server 2008 some new features of the SQL Server engine itself were written in .NET code.  The new Policy Based Management and Change Data Capture features use .NET internally. This indicates that the SQL Server team has made a commitment to use SQLCLR code as an adjunct to unmanaged code where it is useful. Although there is a database option to disallow user-written SQLCLR code, this does not prevent the execution of system CLR code which is always enabled. Thus, in SQL Server 2008, .NET code is used as part of the engine’s standard operating environment.

Three new data types were introduced in SQL Server 2008 that are implemented using the SQLCLR user-defined type architecture; these are the spatial data types, geometry and geography, and the hierarchyid data type. The code that implements these data types is contained in a .NET assembly, Microsoft.SqlTypes.Types.dll. This assembly is required for normal server operation. One reason for the implementation is that many different operations can be performed on these data types, especially the spatial types, and the natural way to expose these is through type-specific methods. The geometry data type, for example, has over 60 domain-specific methods and many properties that can be manipulated with the SQL-1999 standard “instance.method” syntax in T-SQL, instead of including a library of system functions that apply only to a single data type. This serves to better encapsulate the data type’s functionality.

Implementing built-in data types in .NET code also provides benefits for programmers that use the types. Firstly, the programming model for these types is identical whether you’re writing a SQL query or client-side object manipulation code. This allows server programmers writing SQL with the instance.method syntax to leverage their expertise when writing either client code or SQLCLR stored procedures. In addition, Microsoft.SqlTypes.Types is distributable as part of the SQL Server Feature Pack, and usable directly on the client side (in map-drawing applications, for example) without a need to connect to the server at all. The server is used for data storage, SQL queries that may include spatial functions, and spatial indexing.
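
A small client-side sketch using the geography type follows; it assumes the redistributable spatial assembly is referenced (the types live in the Microsoft.SqlServer.Types namespace), and the coordinates are arbitrary sample points.

    using System;
    using Microsoft.SqlServer.Types;

    class SpatialClientSample
    {
        static void Main()
        {
            // The same methods are available in T-SQL through the instance.method syntax.
            SqlGeography seattle = SqlGeography.Point(47.6097, -122.3331, 4326);
            SqlGeography redmond = SqlGeography.Point(47.6740, -122.1215, 4326);

            // STDistance returns the distance in meters for the geodetic SRID 4326.
            Console.WriteLine(seattle.STDistance(redmond));
        }
    }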

Programmatic Management with SQL Server Management Objects

.NET is used in programming administrative functions in the SQL Server engine as well, through the SQL Server Management Objects (SMO) libraries. SMO is a set of libraries that are used to manipulate and manage database objects (e.g. create a new database, configure network protocols, back up and restore the database). These libraries are used by SQL Server Management Studio (SSMS) Object Explorer and are available to programmers and administrators as well. This provides an additional choice to manage the database, serving as an alternative to SQL DDL statements and system stored procedures. Because they are used to write the SQL Server Management Studio utility, internal Policy Based Management code, and SQL Server Configuration Manager, the SMO libraries are quite comprehensive. These libraries also contain a special Scripter class that can be used to produce a SQL DDL script for any supported database object. This functionality is used extensively in SSMS. The two main functions of the library are DDL-type operations and configuration operations, in which the library is actually a .NET wrapper around WMI (Windows Management Instrumentation) operations. In addition, there is a set of classes that enable creating and controlling trace objects and a set of classes for creating and managing SQL Server Agent jobs. SQL Server also ships with a .NET library for configuring replication. The RMO (Replication Management Objects) library can be used as an alternative to the system stored procedures for configuring SQL Server replication.
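
The following sketch uses SMO to enumerate databases and script a single table; the instance, database, and table names are placeholders.

    using System;
    using Microsoft.SqlServer.Management.Smo;

    class SmoSample
    {
        static void Main()
        {
            // Connect to a local instance (the name is a placeholder).
            Server server = new Server(@".\SQLEXPRESS");

            foreach (Database database in server.Databases)
                Console.WriteLine("{0} ({1} MB)", database.Name, database.Size);

            // Generate a CREATE TABLE script for one object, much as SSMS does internally.
            Table table = server.Databases["Northwind"].Tables["Customers", "dbo"];
            foreach (string line in table.Script())
                Console.WriteLine(line);
        }
    }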

PowerShell Support in SQL Server

SQL Server 2008 introduced support for PowerShell, a new Windows shell and successor to the command shell. This product is part of the Microsoft Common Engineering Criteria and is supported as an administrative interface by almost every product in the Windows platform. IIS, Exchange, and Active Directory are parts of the platform that currently ship with PowerShell support. Third-party products such as VMWare are beginning to provide PowerShell support as well.

Because systems administrators or programmers often assume database administrator responsibilities in smaller installations, and PowerShell is a common administration tool, it is thought that “multi-product administrators” will leverage their PowerShell expertise to manage SQL Server, while specialized SQL Server-only DBAs will continue to use SQL scripts and system stored procedures. Experience with .NET object models is a useful skill when working in the PowerShell environment.

PowerShell is based on programming and scripting objects, rather than passing text between commands in scripts as traditional shells (e.g. CMD, csh) do. PowerShell uses cmdlets to manipulate the system; these are programmed in .NET code. In addition, products can expose administrative functionality through a provider model. PowerShell providers assist in generic knowledge transfer by representing system configuration using the file system paradigm combined with a standard set of operations such create-alter-delete on product nodes. SQL Server 2008 includes both cmdlets and a provider. SQL Server ships a custom shell (named SQLPS.exe) in which the cmdlets and the provider are pre-installed to prevent naming conflicts. You can also manually install the cmdlets and provider for use in the main PowerShell shell.

The provider exposes a few directories under the SQLSERVER: virtual directory.

  • SQL – The SQL database engine objects (servers, databases, tables)
  • SQLPolicy – SQL Server policy-based management
  • SQLRegistration – Server registrations in SSMS
  • DataCollection – Data collection for the Management Data Warehouse

In SQL Server 2008 R2, two additional subdirectories are added:

  • SQLUtility – The SQL Server Utility (Enterprise Multi-Server Management)
  • SQLDAC – DAC (Data-tier application databases)

There are only a few cmdlets in SQL Server 2008, including cmdlets for mapping between SQL Server database object names and PowerShell object names, a cmdlet with similar functionality to SQLCMD and cmdlets for executing policy-based management functions. Because the provider uses SMO objects to represent database objects internally, programming in PowerShell with SQL Server consists of quite a bit of navigation to the appropriate place in the provider virtual directory and then using SMO objects to perform operations.

The ability to use the SSMS Object Explorer context menu to bring up a PowerShell window at the appropriate position in the database object provider hierarchy as well as the ability to invoke PowerShell script as a SQL Agent job step round out the SQL Server 2008 PowerShell support.

Enrich your Applications with Embedded Business Intelligence

Business Intelligence is a set of tools and technologies designed to improve business decision making and planning. It generally includes reporting, quantitative analysis, and predictive modeling. Programmers can use their .NET skills along with tools in SQL Server in the programming of decision support systems.

.NET APIs are not only integrated with the database engine, but are pervasive in every part of the SQL Server product. .NET is used for four main Business Intelligence functionality groups:

  • Programming Business Intelligence client applications
  • Application programming at the server product level
  • Administration and Configuration programming
  • Writing product extensions

In this section, I’ll briefly describe the .NET facilities used by SQL Server Reporting Services (SSRS), SQL Server Analysis Services (SSAS), and SQL Server Integration Services (SSIS).

Integrate Reports & Data Visualization within your application with SQL Server Reporting Services

SQL Server Reporting Services is the tool of choice for programming reports to display in custom Web or Windows applications using the ReportViewer control, or for publication in a central repository for convenient, secure user access. Programmers can use SQL Server or Microsoft Office SharePoint Server as a repository or include report definitions directly in applications. All of the entry points into the SSRS product allow programmers to leverage their .NET coding experience.

The report definition language (RDL) is an XML document format based on a published XML schema. SSRS reports are usually designed and published using Business Intelligence Development Studio (BIDS) or the graphical Report Builder tool. Reports can use Data Sources as input (SQL Server and a variety of other data sources are supported) and work with tables, views, and stored procedure data directly, or Report Models that expose metadata and relationships in a more user-friendly form. Report Models are designed and programmed in Visual Studio and can be generated against a SQL Server or Oracle database or an Analysis Services database.

The Report Manager is an ASP.NET-based application that allows administrators to manage reports, common data source definitions, and subscriptions, and also provides the ability for users to view reports in HTML format. Reports can be rendered in a variety of formats and will even be exposed as data (AtomPub feeds) in SQL Server 2008 R2.

There is not a custom object model for building RDL, but because RDL is XML-based, the .NET XML APIs can be used to build raw RDL with built-in schema validation. Programmers can integrate reports into their own applications or manage reports using a Web Service interface, as the functionality is exposed using Web Services. The Report Viewer control can use URL access or Web Service calls to access published reports, or use reports in local RDL files.
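
As a sketch of local-mode rendering with the ReportViewer control: the report path, dataset name, and data-loading helper below are placeholders, and a real application would fill the DataTable that the .rdlc report expects.

    using System.Data;
    using System.Windows.Forms;
    using Microsoft.Reporting.WinForms;

    class ReportHostForm : Form
    {
        public ReportHostForm()
        {
            // Render a local report definition without a report server.
            var viewer = new ReportViewer { Dock = DockStyle.Fill };
            viewer.ProcessingMode = ProcessingMode.Local;
            viewer.LocalReport.ReportPath = @"C:\Reports\SalesSummary.rdlc";
            viewer.LocalReport.DataSources.Add(
                new ReportDataSource("SalesDataSet", LoadSalesTable()));

            Controls.Add(viewer);
            viewer.RefreshReport();
        }

        static DataTable LoadSalesTable()
        {
            // Placeholder: fill this table from the database (for example with a SqlDataAdapter).
            return new DataTable("SalesDataSet");
        }
    }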

Reports can use custom code that extends the functionality provided by the SSRS built-in expression language. This custom code must be provided in .NET assemblies, which are registered with Report Manager and in the development environment. Once registered, custom .NET classes are accessed just as the built-in expressions are.

Administrators can administer the server programmatically by either using the Report Manager Web Service APIs or by writing scripts that are executed using the rs.exe utility. Administrative scripts for the rs.exe utility are programmed using Visual Basic .NET scripting language.

Reporting Services uses a componentized architecture and, as such, allows user-written extensions. These extensions use SSRS-specific .NET objects models. Some of the components that can be extended with custom .NET code are: data processing extensions, delivery extensions, rendering extensions, and security extensions.  The object models are provided in two .NET libraries, Microsoft.ReportingServices.DataProcessing and Microsoft.ReportingServices.Interfaces that are shipped with SQL Server.

ADOMD.NET – Client side Analysis Services

Programmers that expose Business Intelligence functionality, such as information from OLAP cubes, in custom .NET applications must use an API to extract the information from the OLAP server, just as they use an API to extract information from the relational engine. Although many Business Intelligence users use packaged client applications such as Excel, writing custom web applications that extract analysis and data mining information, such as the familiar “people who bought product X are also interested in product Y”, is becoming more and more commonplace. ADOMD.NET is to OLAP applications as ADO.NET is to relational database applications.

ADOMD.NET is a specialized ADO.NET data provider that not only exposes the traditional Connection-Command-DataReader paradigm, but contains additions for multidimensional data and data mining. With OLAP cubes, metadata is not only more complex (it can also be hierarchical in the case of hierarchical dimensions) but also more important, because OLAP client applications often rely on discovery through metadata at runtime. ADOMD.NET exposes this metadata in three different ways: through provider-specific properties (e.g. Cubes and MiningModelCollection properties) on the Connection object, through an extensive set of SchemaRowsets (accessed through the GetSchemaRowset method as with traditional ADO.NET providers), and through an ADOMD.NET-specific object model devoted to metadata, rooted at the CubeDef class. Because analysis queries can be long-running, the ADOMD.NET provider supports multiplexing Connection instances through a Session class. Programmers use ADOMD.NET on the client by adding a program reference to the Microsoft.AnalysisServices.AdomdClient.dll.

Multidimensional resultsets are also different than rectangular relational resultsets. These resultsets can have multiple axes as opposed to the two-axis column and row relational rowsets. The ADOMD.NET provider supports using the AdomdCommand class to execute MDX (Multidimensional Expressions) or DMX (Data Mining Extensions) language commands. These commands can return a two-dimensional AdomdDataReader or a multidimensional CellSet. The CellSet is shaped like a cube rather than a table, and programmers access data consisting of CellCollections and retrieve axis information using the Axes property. You can also use a DataAdapter to get cells in the form of a DataSet. The provider is currently read-only but supports composing multiple operations in a read-committed level transaction.
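
The sketch below executes an MDX query and walks the resulting CellSet; the connection string and the cube, measure, and dimension names follow the Adventure Works sample and are placeholders for a real SSAS database.

    using System;
    using Microsoft.AnalysisServices.AdomdClient;

    class AdomdSample
    {
        static void Main()
        {
            using (var connection = new AdomdConnection(
                "Data Source=localhost;Catalog=Adventure Works DW"))
            {
                connection.Open();

                var command = new AdomdCommand(
                    "SELECT [Measures].[Sales Amount] ON COLUMNS, " +
                    "[Date].[Calendar Year].MEMBERS ON ROWS " +
                    "FROM [Adventure Works]", connection);

                // ExecuteCellSet returns the multidimensional result; axis 1 holds the rows.
                CellSet result = command.ExecuteCellSet();
                for (int row = 0; row < result.Axes[1].Positions.Count; row++)
                {
                    Console.WriteLine("{0}: {1}",
                        result.Axes[1].Positions[row].Members[0].Caption,
                        result.Cells[0, row].FormattedValue);
                }
            }
        }
    }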

ADOMD.NET ships with SQL Server or is available separately as part of the SQL Server 2005-2008 Feature Pack.

ADOMD.NET – Server side Analysis Services

In SSAS you can write stored procedures or user-defined functions in .NET using the ADOMD.NET library. The server library has almost the exact same object model as the client library but slightly different functionality. To write in-server .NET code, you reference Microsoft.AnalysisServices.AdomdServer.dll.

User-defined functions are used in the context of either an MDX or DMX statement, as part of the statement itself.  Stored procedures are executed stand-alone with the MDX or DMX CALL statement. Both UDFs and stored procedures can take parameters, but UDFs can return a variety of data types including instances of the ADOMD.NET Set object. Stored procedures can only return sets, as IDataReader or DataSet. A stored procedure can also have no return value. The reason for using ADOMD.NET with in-server objects is to decrease network latency (one single network call as opposed to possibly many network calls from the client) and to encapsulate data logic for reuse.

In server-side ADOMD.NET, you do not get a Connection object in order to access the data. Instead, you start by obtaining a Context object that represents your current connection. The Context object does not expose any schema rowsets; you must use the object model to access metadata. A UDF is created for either MDX or DMX and cannot mix functionality in a single module. Therefore, depending on the calling context, either a CurrentCube or CurrentMiningModel property is available from the context object. ADOMD.NET functions and stored procedures are read-only. They do not support writeback and therefore do not support transactions.

Analysis Management Objects

AMO is an object model for creating, altering, and managing Analysis Services cubes and Data Mining models. It is analogous to SMO in the database engine. In addition to a comprehensive set of classes to manipulate metadata, AMO contains classes that allow the .NET administrator to automate backup and restore of Analysis Services databases, a Trace class for monitoring, replay, and management of SSAS profiler traces, and CaptureLog and CaptureXML classes for capturing and scripting the operations performed through AMO. Once again, administrators with the ability to write .NET code can leverage their expertise with AMO. SSAS also supports an XML-based scripting language for management known as ASSL.
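
A small AMO sketch follows; the server name, database name, and backup file are placeholders.

    using System;
    using Microsoft.AnalysisServices;

    class AmoSample
    {
        static void Main()
        {
            Server server = new Server();
            server.Connect("Data Source=localhost");

            foreach (Database database in server.Databases)
                Console.WriteLine(database.Name);

            // Back up one Analysis Services database to a file.
            server.Databases["Adventure Works DW"].Backup(@"C:\Backups\AdventureWorksDW.abf");

            server.Disconnect();
        }
    }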

Microsoft does not ship a formal PowerShell provider or cmdlets for SSAS. However, you can use AMO classes with PowerShell in an analogous way as the SMO classes are used. In addition, a PowerShell snapin including a provider and cmdlets called PowerSSAS is available on the CodePlex website at http://www.codeplex.com/powerSSAS .

Integrate Your Data with Ease with SQL Server Integration Services

SSIS is an extract-transform-load (ETL) tool that is used to build database workflows called packages. SSIS packages are designed for fast data transformation and loading using an in-memory “data pipeline”, often using the bulk extract-bulk insert functionality of databases directly. SSIS allows programmers to concentrate on writing workflow logic rather than building custom code for each ETL scenario. Programmers often use SSIS packages to populate Analysis Services cubes and train Data Mining models, as well as import and export of relational data.

The SSIS data flow engine is not written in .NET code, but an ADO.NET provider can be used as either a source or a destination of data. SSIS packages are usually created in the Business Intelligence Development Studio (BIDS) environment, but you can also create them programmatically through a .NET object model. The entire SSIS package model is available in .NET, rooted in a class named Package. You can also manage and execute packages programmatically using the classes in Microsoft.SqlServer.Dts.Runtime.dll. Management classes allow you to run packages locally or on a remote machine, load the output of a local package, load, store, and organize packages into folders, and enumerate existing packages stored in the SQL Server database or file system. SQL Server Agent is usually used to run the packages on a schedule or an ad-hoc basis.
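
The following sketch loads and executes a package with the Microsoft.SqlServer.Dts.Runtime classes; the package path and the User::BatchDate variable are hypothetical.

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    class SsisRunnerSample
    {
        static void Main()
        {
            Application application = new Application();
            Package package = application.LoadPackage(@"C:\Packages\LoadWarehouse.dtsx", null);

            // Optionally override a package variable before execution (if the package defines it).
            if (package.Variables.Contains("User::BatchDate"))
                package.Variables["User::BatchDate"].Value = DateTime.Today;

            DTSExecResult result = package.Execute();
            Console.WriteLine("Package finished with result: {0}", result);

            // Errors raised during execution are available in the Errors collection.
            foreach (DtsError error in package.Errors)
                Console.WriteLine(error.Description);
        }
    }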

SSIS packages consist of control flows and data flows. Control flows contain tasks, with special enumerator components to allow the execution of a task over a collection of items (e.g. executing an FTP task over a collection of files). Data flows can contain source providers, destination providers, and transformation components. Data source connection information for the providers (e.g. connection strings to databases) is managed by Connection Manager components. Components log information to an informational and error log.

SSIS ships with a rich set of ETL components. It is interesting to note that each SSIS component that is part of the product is implemented in a separate assembly; that is, the components themselves are written in .NET. Programmers with .NET expertise can therefore use the SSIS object models to create custom components that extend the base functionality or implement SSIS integration with ETL items that are not supported by SSIS “in the box”. These components include not only an execution portion but also a designer portion, so the custom component can be used to design packages in BIDS. There is a .NET object model that enables programmers to build the following (a minimal custom task sketch follows the list and note below):

·         Custom tasks

·         Custom connection managers

·         Custom log providers   

·         Custom enumerators

·         Custom data flow components – source, destination, or transformation components

Note that you cannot write custom components that derive from system components.
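To give a sense of what a custom component looks like, here is a minimal sketch of a custom control-flow task; the class name, display name, and message are illustrative, and a real custom task would also need to be strong-named, installed to the SSIS tasks folder, and added to the GAC before it appears in BIDS.

using Microsoft.SqlServer.Dts.Runtime;

// Hypothetical custom task that simply writes an informational message to the SSIS log.
[DtsTask(DisplayName = "Hello Task", Description = "Writes a greeting to the SSIS log.")]
public class HelloTask : Task
{
    public override DTSExecResult Validate(Connections connections,
        VariableDispenser variableDispenser, IDTSComponentEvents componentEvents,
        IDTSLogging log)
    {
        // Nothing to validate in this trivial task.
        return DTSExecResult.Success;
    }

    public override DTSExecResult Execute(Connections connections,
        VariableDispenser variableDispenser, IDTSComponentEvents componentEvents,
        IDTSLogging log, object transaction)
    {
        bool fireAgain = true;
        componentEvents.FireInformation(0, "HelloTask", "Hello from a custom task.",
            string.Empty, 0, ref fireAgain);
        return DTSExecResult.Success;
    }
}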

Finally, SSIS ships with script components that allow custom logic in control flows or data flows. This provides lightweight, script-based customization without the need to build an entire custom component. SSIS includes both a Script Task for the control flow and a Script Component for the data flow. In SQL Server 2008, these scripts use Visual Studio Tools for Applications (VSTA), and the script code can be written in either VB.NET or C# (only VB.NET is supported in SQL Server 2005). Once again, the .NET programmer or administrator should be right at home with SSIS script coding and component customization.

Sync Services for ADO.NET Enables Collaboration and Offline Scenarios for Applications, Services, and Devices

As compact devices become more prevalent, a common data access pattern emerges in which data is stored in a corporate database and many individual devices need replication or synchronization of their specific portion of the database tables. An example is salespeople who need a local copy of their customers’ data for offline use while on the road. The salesperson also needs to be able to take orders offline and then upload the changes to the corporate database. For this type of scenario, Sync Services for ADO.NET fills the bill nicely, allowing client-directed synchronization between databases such as SQL Server and SQL Server Compact Edition. Sync Services for ADO.NET is part of the Microsoft Sync Framework, which also supports file system sync and feed sync (for RSS and ATOM feeds).

Sync Services works with a provider model and ships with a client sync provider for SQL Server Compact Edition; the server sync provider works with any ADO.NET data provider. Sync Services supports unidirectional synchronization, bidirectional synchronization, or a complete refresh with each synchronization. Version 1.0 supports hub-and-spoke synchronization; Version 2.0 also supports peer-to-peer synchronization.

The server provider works with a set of synchronization commands, but it also allows for programmable conflict resolution when data changes in the server database and the same data also changes on the client. SQL Server 2008 includes a Change Tracking feature that distinguishes whether changes were made on the client or the server and allows selective synchronization on a per-client basis. Sync Services for ADO.NET is available for SQL Server Compact Edition on the desktop and on compact devices.
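The client code that starts a synchronization session is compact. The sketch below assumes a SyncAgent subclass named ContosoSyncAgent (for example, one generated by the synchronization designer or written by hand) that wires up the client provider, the server provider, and the table sync groups; the class name is hypothetical.

using System;
using Microsoft.Synchronization.Data;

class SyncRunner
{
    static void Main()
    {
        // ContosoSyncAgent is a hypothetical SyncAgent subclass that configures the
        // SQL Server Compact Edition client provider, the server provider, and the tables.
        ContosoSyncAgent agent = new ContosoSyncAgent();

        SyncStatistics stats = agent.Synchronize();
        Console.WriteLine("Changes downloaded: " + stats.TotalChangesDownloaded);
        Console.WriteLine("Changes uploaded:   " + stats.TotalChangesUploaded);
    }
}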

Microsoft Visual Studio Team System 2008 Database Edition

Regardless of which features programmers use to interact with the database, they need a strong development environment that includes testing, profiling, version control, and software workflow capabilities. Visual Studio Team System (VSTS) is Microsoft’s flagship software development lifecycle product. In addition to application and service development, database programmers and administrators can use it to develop, test, and deploy database objects and schemas using the same software development lifecycle they are already familiar with. VSTS Database Edition includes support for SQL Server 2000 through 2008 and provides a provider mechanism for other databases as well. In addition to the built-in functionality, there are customization points for .NET programmers who wish to extend the product.

VSTS Database Edition unit tests are encapsulated in .NET modules, and a .NET extension API is available to customize the tests. You would do this when you need to test for special conditions that are not covered by the graphical user interface. Programmers also use VSTS Database Edition to generate test data. A standard set of test data generators is provided, but you can extend it with custom generators by using the classes in Microsoft.VisualStudio.TeamSystem.Data.Generators.dll.

The latest version of the product (VSTS 2008 Database Edition GDR) includes database object refactoring and static code analysis. You can extend both of these features by writing custom .NET code. Custom database refactoring code (e.g. SQL code casing) and custom refactoring targets (e.g. text files) are supported through a framework of classes in the VSTS Database Edition interop DLLs. Static code analysis rules can be plugged into the existing rule infrastructure by using Microsoft.Data.Schema.ScriptDom.dll.

Finally, VSTS Database Edition ships with a SQL parser that developers can use to parse and validate T-SQL and to generate T-SQL scripts. The parser is not only used by the product itself but is also available as a component for programmer use. For further information and an example of the parser’s use, see http://blogs.msdn.com/gertd/archive/2008/08/21/getting-to-the-crown-jewels.aspx .
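A minimal sketch of the parser follows. It assumes the TSql100Parser class and IScriptFragment interface in the Microsoft.Data.Schema.ScriptDom.Sql and Microsoft.Data.Schema.ScriptDom namespaces that ship with the GDR release; the statement being parsed is illustrative.

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Data.Schema.ScriptDom;
using Microsoft.Data.Schema.ScriptDom.Sql;

class ParseSample
{
    static void Main()
    {
        // The constructor argument indicates that quoted identifiers are initially on.
        TSql100Parser parser = new TSql100Parser(true);

        IList<ParseError> errors;
        using (StringReader reader = new StringReader("SELECT Name FROM Sales.Customer;"))
        {
            IScriptFragment fragment = parser.Parse(reader, out errors);
            Console.WriteLine(errors.Count == 0
                ? "Statement parsed successfully."
                : "Parse errors found: " + errors.Count);
        }
    }
}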

The Future Frontier

So far, we’ve covered how you can use .NET as a substrate for developing against databases from outside the server and for programming inside the SQL Server product itself. In future products, the functionality available to .NET data access programmers, as well as the synergy between SQL Server and .NET, continues to increase. Here’s a survey of some of the features for .NET programmers coming in the near future.

SQL Azure Database – Database in the Cloud

SQL Azure Database is a cloud-based database built on SQL Server. Because it uses SQL Server underneath, any database API, including Entity Framework, ADO.NET, and others, can be used with SQL Azure Database simply by changing the connection string. Version 1.0 of SQL Azure Database supports almost all of the database objects (e.g. tables, views, and stored procedures) in the SQL Server database engine, but Version 1.0 does not yet support in-database .NET programming (SQLCLR).
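Because connectivity is plain ADO.NET, the following minimal sketch shows all that typically distinguishes a SQL Azure Database query from an on-premise one: the connection string. The server, database, credentials, and table name are placeholders.

using System;
using System.Data.SqlClient;

class AzureQuery
{
    static void Main()
    {
        // Placeholder server, database, and credentials; only the connection string
        // differs from connecting to an on-premise SQL Server instance.
        string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=myuser@myserver;Password=<password>;Encrypt=True;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Sales.Orders", connection))
        {
            connection.Open();
            Console.WriteLine("Order count: " + command.ExecuteScalar());
        }
    }
}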

Self-Service Business Intelligence

SQL Server PowerPivot for Excel is an Excel 2010 add-in that facilitates the creation of analysis models in Excel using multiple data sources and allows the user to define relationships between tables in disparate data sources. This enables an entire population of non-programmers to build Excel applications against their corporate data in a way they never could before, opening up new Business Intelligence scenarios. The data in the models can be set to update automatically on a schedule, and the Excel-based model can be published to a SharePoint server farm. Using SQL Server PowerPivot for SharePoint 2010, the PowerPivot data in the workbook can be exposed as an Analysis Services 2010 cube data source. SQL Server PowerPivot for Excel will support AtomPub data feeds, meaning that any data source exposed through ADO.NET Data Services will be available. This allows PowerPivot for Excel to import data exposed through EDM object models, as well as SharePoint 2010 list data. In addition, SSRS 2008 R2 will expose any report published to the SSRS Report Manager as an AtomPub feed, permitting the creator of a PowerPivot for Excel data model to import report data for further analysis.

SQL Server Master Data Services: Manage Your Most Important Data Assets

SQL Server Master Data Services is a feature of SQL Server 2008 R2 that allows a business to manage its most vital and precious data such as customer and product lists. Often multiple copies of this data are stored in different databases and it’s easy to end up with duplicate but slightly different versions of information such as addresses. MDS allows you to resolve such data inconsistencies. For more information about Master Data Management concepts, reference the whitepaper “The What, Why, and How of Master Data Management” at http://msdn.microsoft.com/en-us/library/bb190163.aspx.

SQL Server MDS includes a SQL Server-based repository, a Web-based front end that allows domain experts and user analysts to manage the data, and a Web Service-based interface for programmability. The feature implements fuzzy matching and domain-specific indexes. The fuzzy indexing algorithms were developed by Microsoft Research.

The programming functionality is included as a set of SQLCLR system functions (e.g. similarity and regular expression style functions) and index building and lookup procedures for multicolumn domain-specific indexes, implemented in the MDS repository database. Because they are written in SQLCLR, these functions could be leveraged in the future by other parts of the SQL Server product, such as SSIS (for bulk matching and data cleansing), SSRS (to produce fuzzy-match reports), and Integrated Full-Text Search (IFTS) as an adjunct and alternative to word-stemming-based text matching.

Real-time Applications with Data in Flight (StreamInsight)

In most data use cases, data is first stored in a database and analyzed after it’s stored. However, with some use cases (e.g. power grid or heart monitor readings, stock ticker prices) the act of storing the data introduces too much latency for the analysis to be useful. StreamInsight is a Complex Event Processing (CEP) system that allows data to be analyzed and events to be summarized “in flight”.

The StreamInsight product is built on a service process plus input and output providers. Input providers (written in .NET using the StreamInsight APIs) translate input data into a standard set of input stream messages. The streaming applications themselves are also written using a .NET API. Each message type can be registered with one or more LINQ queries (StreamInsight supplies the LINQ provider) for in-flight querying and message aggregation, with results delivered through the output providers. Input and output providers for SQL Server data are among those included with the current samples.

Simplify Data Development with Modeling

Modeling with M, EDM, and the Entity Framework

Programmers can be more productive when the transition from model to application is part of the development process. Future versions of SQL Server will support several features to facilitate this, including a programming language for creating models (the “M” language) and SQL Server Modeling Services, a repository for storing, querying, and programming with models. The current CTP includes “M”-to-T-SQL generation for defining and populating models, as well as the Quadrant tool for creating and compiling “M”-based models and DSLs (domain-specific languages) and for browsing and editing the repository.

One near-term usage of this product will be the “M” language as an eventual replacement for the XML-based EDMX model files used in generating the Entity Framework mapping model and code. This will allow greater model scalability compared to using EDMX. Programmers can either use the EDM graphical designer in Visual Studio or code their Entity Data Model in “M”, and the model will be used by the Entity Framework directly at runtime.

In addition to Entity Framework 4.0, which supports the majority of the object programming patterns in use today, the Entity Framework will also be more closely integrated with the SQL Server product in the future. A likely first step is to allow programmers to design SQL Server Reporting Services reports directly against the conceptual model. The conceptual model could be integrated with other models in the framework as well.

SQL Server Modeling Services Repository

The SQL Server Modeling Services repository resides in a SQL Server database, so it can be populated and queried with either the “M” or T-SQL languages. SQL Server-specific facilities such as Change Data Capture can be used to provide versioning of the repository. By using the “M” language, programmers can model not only data and object models but also applications, workflows and business processes, services, configuration information, and operating system infrastructure such as SMS (Microsoft Systems Management Server) configuration and installation information. By storing the information in a single repository, it’s possible to define relationships among domain-specific models for a complete picture of business and computer-based processes. SQL Server Modeling Services ships with a Base Domain Library (BDL) to provide a service layer for model-driven applications, including pre-built domains for UML and the CLR and tools that support them.

Storing data and application models in the SQL Server Modeling Services repository, along with code generation that makes repository information synergistic with data applications, will help make SQL Server Modeling Services the single, integrated, auditable version of truth for an organization’s data models, application and service models, configuration information, software management, and business processes.

Conclusion

As you can see, programmers can use .NET as the underlying substrate for extending and customizing all parts of the SQL Server family, including deep support in SQL Server Analysis Services, Integration Services, and Reporting Services. In the future, SQL Server will evolve into a database that not only integrates with the relational model of data but also provides direct support for the conceptual model.

The current .NET data access APIs are based on a provider model that defines base functionality with extensibility points, allowing database-specific functionality to fit into the model. The SQL Server implementation of each client API serves both as a reference implementation and as a model that illustrates provider extensibility. SQL Server also carries these data models into the database itself, enabling in-database .NET programming and extensibility.

For more information:

http://www.microsoft.com/sqlserver/ : SQL Server Web site

http://technet.microsoft.com/en-us/sqlserver/ : SQL Server TechCenter

http://msdn.microsoft.com/en-us/sqlserver/ : SQL Server DevCenter