Introduction
In the previous chapters, we discussed the database development activities
involved during the systems development life cycle:
- enterprise modeling,
- conceptual data modeling,
- logical and physical database design.
We are now at the last phase of the systems development life cycle: implementation.
At the end of implementation, we expect a functioning system that meets users’
information requirements. In this chapter, we review developments in multi-user
database management environments that have led to the development of various
client/server strategies for data processing.
Client/server system: A networked computing model that distributes
processes between clients (front ends), which request services, and servers
(back ends), which supply them. In a database system, the database generally
resides on a server that runs the DBMS. The clients may run the application
systems or request services from another server that holds the application
programs.
Client/Server Architectures
The client/server architectures that have evolved can be distinguished
by the distribution of application logic components across clients and servers.
There are three components of the application logic:
- the input/output (I/O) or presentation logic component:
responsible for formatting and presenting data on the user's screen or other
output device, and for managing input from a keyboard or other input device.
- the processing logic component, which handles:
  - data processing logic, e.g., data validation and identification of
    processing errors,
  - business rules logic,
  - data management logic, which identifies the data necessary for processing
    a transaction or query,
- the storage logic component, which is responsible for
data storage and retrieval from the physical storage devices associated
with the application. Activities of a DBMS occur in the storage logic component.
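The separation can be sketched in code. The Python fragment below is purely illustrative (the function names and the dictionary standing in for a database are invented here), but it shows how presentation, processing, and storage logic stay distinct:

```python
# Illustrative sketch of the three application-logic components.
# All names are invented; a dict stands in for the DBMS/storage layer.

_orders = {1: {"item": "widget", "qty": 5}}

def fetch_order(order_id):
    # Storage logic: data retrieval from the "physical" store.
    return _orders.get(order_id)

def validate_order(order):
    # Processing logic: data validation plus a business rule.
    if order is None:
        raise ValueError("no such order")
    if order["qty"] <= 0:  # business rule: quantity must be positive
        raise ValueError("quantity must be positive")
    return order

def display_order(order):
    # Presentation (I/O) logic: format the result for the user's screen.
    return f"{order['qty']} x {order['item']}"

print(display_order(validate_order(fetch_order(1))))  # 5 x widget
```

How these three groups of functions are assigned to clients and servers is exactly what distinguishes the architectures that follow.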
File Server Architecture
The first client/server architectures developed were file servers.
In a basic file server environment, all data manipulation occurs at the workstations
where data is requested: the client handles the presentation logic, processing
logic, and much of the storage logic (that part associated with the DBMS).
One or more file servers that manage file operations are attached to the LAN,
and are shared by the clients.
File server: A device that manages file operations and is shared
by each of the client PCs attached to the LAN.
Fat Client: A client PC that is responsible for processing logic,
extensive application and business rules logic and many DBMS functions.
Limitations of File Servers
- Considerable data movement is generated across the network
- Each client workstation must devote memory to a full version of the DBMS;
also, each client must be rather powerful to provide suitable response times
- The DBMS copy in each workstation must manage the shared database integrity;
each application must, for example, recognize locks and take care to initiate
the proper locks. This increases the work of application programmers.
Database Server (Two-Tier) Architectures
Database server: A computer that is responsible for database
storage, access, and processing in a client/server environment. Sometimes
used to describe a two-tier client/server environment.
In two-tiered approaches, the client workstation is responsible for managing
the user interface, including presentation logic, data processing logic, and
business rules logic; the database server is responsible
for database storage, access, and processing. Hence, LAN traffic is reduced
because only those records that match the requested criteria are transmitted
to the client, rather than entire data files. Other advantages:
- Only the database server requires processing power adequate to handle
the database; the database server can be tuned to optimize data-processing
performance. Clients generally do not have to be as powerful.
- User authorization, integrity checking, data dictionary maintenance, query
and update processing are all performed in one location: the database server.
- Possible use of stored procedures.
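The traffic-reduction point can be illustrated with a short Python sketch. An in-memory SQLite database stands in for the database server, and the table and values are invented for the example; the point is that the query is evaluated on the server side, so only matching rows travel back to the client:

```python
import sqlite3

# Two-tier sketch: the client sends a query; the "server" (here an
# in-memory SQLite database) returns only the matching rows, not the
# whole file, which is what reduces LAN traffic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, city TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [(1, "Lagos"), (2, "Accra"), (3, "Lagos")])

# Only records matching the criteria cross the "network" to the client.
rows = conn.execute(
    "SELECT id FROM customer WHERE city = ?", ("Lagos",)).fetchall()
print(rows)  # [(1,), (3,)]
```

Contrast this with the file server case, where the whole customer file would be shipped to the client for filtering there.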
Three-Tier Architectures
Three-tier architecture: A client/server configuration that
includes three layers: a client layer and two server layers. While the nature
of the server layers differs, a common configuration contains an application
server.
Application server: A computer that stores application programs
(processing logic and business rules logic).
In some three-tier client/server architectures, most application code is stored
on the application server, and most business processing occurs on the application
server rather than on the client workstation or database server, resulting
in a thin client.
Thin client: a PC configured for handling user interfaces and
some application processing, usually with no or limited local data storage.
The three-tier architecture provides the following advantages:
- Scalability. For example, the middle tier (application server) can be
used to reduce the load on the database server by using a transaction processing
(TP) monitor to reduce the number of connections to a server, and additional
application servers can be added to distribute application processing. (A
TP monitor is a program that controls data transfer between clients and
servers in order to provide a consistent environment for online transaction
processing (OLTP).)
- Technological flexibility. It is easier to change DBMS engines with a
three-tier architecture. The middle tier can even be moved to a different
platform...
- Lower long-term costs.
- Better match of systems to business needs.
- Improved customer service. Multiple interfaces on different clients
can access the same business processes.
- Competitive advantage.
- Reduced risk.
Challenges associated with three- (and n-) tier architectures are enumerated
on pp. 327-328.
Partitioning an Application
Obviously, no single client/server architecture is the best solution for all
business problems. In most cases, though, presentation logic resides on the
client, where the user interfaces with the system. Processing logic may be
divided across clients and servers. Storage logic usually resides on the
database server, close to the physical location of the data. Data integrity
control activities, such as constraint checking, are typically placed on the
database server, and so are triggers, which always fire when preset conditions
(such as commands to insert, delete, or update data) are met.
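As a sketch of placing integrity control on the database server, the Python/SQLite fragment below defines a trigger that fires on insert. The table, trigger name, and rule are invented for illustration; SQLite merely stands in for the server DBMS:

```python
import sqlite3

# Sketch: an integrity rule enforced in the "database server" by a
# trigger, so every application inserting rows is checked automatically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_line (item TEXT, qty INTEGER)")
conn.execute("""
    CREATE TRIGGER check_qty BEFORE INSERT ON order_line
    WHEN NEW.qty <= 0
    BEGIN
        SELECT RAISE(ABORT, 'quantity must be positive');
    END
""")

conn.execute("INSERT INTO order_line VALUES ('widget', 5)")      # accepted
try:
    conn.execute("INSERT INTO order_line VALUES ('widget', 0)")  # trigger fires
except sqlite3.IntegrityError as err:
    print(err)  # quantity must be positive
```

Because the rule lives in the database rather than in each client application, no application programmer can forget to enforce it.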
Application partitioning: The process of assigning portions
of application code to client or server partitions after it is written, in
order to achieve better performance and interoperability (the ability of a
component to function on different platforms).
...
Using Parallel Computer Architectures
...
Using Middleware
Below is a classification system for middleware (based on scalability
and recoverability):
- Asynchronous remote procedure call (RPC): The client requests services
but does not wait for a response. The client will typically establish a
point-to-point connection with the server and perform other processing while
waiting for the response. E.g., email.
- Synchronous remote procedure call (RPC): The requesting system waits for
a response to the request in real time. E.g. online banking system.
- Publish/subscribe: Middleware monitors activity and pushes information
to subscribers; it is asynchronous, and the client usually performs other
activities between notifications from the server. E.g., supplying an electronic
bookstore with keywords of topics in which we are interested, so that information
about newly added titles matching our keywords is automatically forwarded to us.
- Message-oriented middleware (MOM): Asynchronous software that sends messages
that are collected and stored until they are acted upon, while the client
continues with other processing. Workflow applications, such as insurance
policy applications, can benefit from MOM.
- Object request broker (ORB): Makes it possible for applications to send
objects and request services in an object-oriented system.
- SQL-oriented data access: Connects applications to databases over networks;
has the capability of translating generic SQL into SQL specific to the database.
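The synchronous/asynchronous distinction can be mimicked in a few lines of Python. This is only a toy: a slow local function stands in for the remote server, and a thread pool stands in for the middleware, but the blocking-versus-overlapping behavior is the same:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def server_call(x):
    # Stand-in for a remote service: network delay plus processing.
    time.sleep(0.1)
    return x * 2

# Synchronous RPC: the client blocks until the response arrives.
result_sync = server_call(21)

# Asynchronous RPC: the client issues the request, keeps working,
# and collects the response later.
with ThreadPoolExecutor() as pool:
    future = pool.submit(server_call, 21)
    other_work = sum(range(10))      # client does other processing meanwhile
    result_async = future.result()   # pick up the response when needed

print(result_sync, result_async)  # 42 42
```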
In client/server systems, database-oriented middleware provides some sort
of application program interface (API) access
to a database.
Application program interface (API): Set of routines that an
application program uses to direct the performance of procedures by the computer's
operating system.
For example, in achieving access to a database, an API calls library routines
that transparently route SQL commands from the front-end client application
to the database server.
Open database connectivity (ODBC) is a similar API standard, used for
Windows-based client/server applications. It is most useful for accessing
relational data, but not well suited for handling other types of data, such
as ISAM files. Though it is difficult to program and implement, ODBC has been
well accepted because it allows programmers to connect to almost any vendor's
database without learning proprietary code specific to that database. Microsoft's OLE-DB
adds value to the ODBC standard by providing a single point of access to multiple
databases...
Java Database Connectivity (JDBC) classes can be used to help an applet access
any number of databases without understanding the native features of each
database. JDBC defines a call-level interface (CLI) for Java development and
borrows from ODBC conventions...
Establishing a common language to define the interface between components
and a mechanism to mediate will facilitate developing universal middleware.
The Object Management Group (OMG) is an industry coalition that has produced
the Common Object Request Broker Architecture (CORBA), which sets the specification
of object-oriented universal middleware. Microsoft has developed a competing
model, Distributed Component Object Model (DCOM), but CORBA is a more robust
specification because it has been developed to handle many different platforms...
Establishing Client/Server Security
- System-level password security: usernames and passwords (including guidelines
for password length, password naming conventions, frequency of password
changes, etc.)
- Database-level password security
- Secure client/server communication: encryption, transforming readable
data (plaintext) into unreadable data (ciphertext)
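As a toy illustration of the plaintext/ciphertext idea only (a repeating-key XOR is trivially breakable; real client/server traffic should use a vetted protocol such as TLS):

```python
# Toy plaintext -> ciphertext -> plaintext demonstration.
# XOR with a repeating key is NOT secure; it only shows the idea of
# making data unreadable on the wire. Key and message are invented.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"SELECT * FROM account"
ciphertext = xor_cipher(plaintext, b"k3y")   # unreadable in transit
recovered = xor_cipher(ciphertext, b"k3y")   # XOR is its own inverse

print(ciphertext != plaintext, recovered == plaintext)  # True True
```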
Client/Server Issues
In order to improve the chances for building a successful client/server application,
certain areas should be carefully addressed:
- Accurate business problem analysis: it is critical to develop a sound
application design and architecture; rather than giving in to the temptation
to pick a technology and then fit the application to it, it is more appropriate
to accurately define the scope of the problem and determine requirements,
and then use that information to select the technology
- Detailed architecture analysis: specifying the details of the client/server
architecture: the client workstations, server(s), network, DBMS, network
infrastructure, the middleware layer, and application development tools
to be used, with a view to ensuring connectivity, compatibility, and interoperability.
- Avoiding tool-driven architectures: again, determine project requirements
before choosing software tools, and not the reverse.
- Achieving appropriate scalability: the decision to implement a multitier
solution to allow scalability should be carefully weighed against the costs
of implementing the system and the real needs of the organization
- Appropriate placement of services: the move towards thin clients and fat
servers is not always the appropriate solution. Understanding the business
problem intimately should help the architect to distribute the logic appropriately.
- Network analysis: It behooves architects to consider bandwidth capabilities
of the network that the system will use.
- Hidden costs: client/server implementation problems go beyond the analysis,
development, and architecture problems enumerated above. Integrating existing
heterogeneous components (hardware, networks, operating systems, DBMSs) into
a proposed system is often not an easy task. Also, training is a significant
and recurring expense that is often overlooked.
Database Access from Client Applications
Partitioning the environment to create two-, three-, and n-tier
architecture means that decisions must be made about the placement of processing
logic. (We note that, generally, storage logic (the database
engine) is handled by the database server, and the presentation
logic is handled by the client.)
Common Logic Distributions
(a) Two-tier client/server environments:
- Fat client: the server handles storage logic; the client handles processing
logic and presentation logic.
- Thin client: the server handles storage logic and processing logic; the
client handles presentation logic.
- Distributed: the server handles storage logic; processing logic is split
between server and client; the client handles presentation logic.
(b) Example: a Web-enabled n-tier client/server environment:
- Database server: storage logic (DBMS, OS, SQL, TCP/IP)
- Application server: processing logic (App/Server, SQL, TCP/IP, OS)
- Web server: processing logic (HTTP, CGI, TCP/IP, OS, App/Server API)
- Client: presentation logic (browser, HTTP, TCP/IP, OS)
Using Query-by-Example
Query-by-Example (QBE): A direct manipulation database language
that uses a graphical approach to query construction.
QBE: History and Importance
- Originally developed by Zloof (1977); first available for use with SQL/DS
and DB2 database systems on IBM mainframes
- Paradox: first PC RDBMS completely based on QBE
- considered a highly productive language in which to program
- especially useful for end-user database programming
- complete database applications can be written in QBE, but it is more common
to use QBE for interactive querying or updating of a database
QBE: The Basics
Microsoft Access™ Usability Hierarchy
The hierarchy conveys that Access will be usable at all levels of programming
ability and complexity.
- At the lowest level are objects, which allow the creation of
tables, queries, forms, and reports without any specific programming knowledge.
- The use of expressions or functions
to perform simple processes, such as multiplication of fields, validation
of data, or enforcement of a specific business rule, is possible at the next
pyramid level.
- At the next level, users can take advantage of macros, stored
sequences of commands, to automate
their applications, again without explicit knowledge of VBA.
- At the next level, users can program their own modules of VBA
code to custom-tailor their applications.
- At the top level, Windows API calls can be made to functions or DLLs
written in languages such as C, Java, or Visual Basic, which can be used to
write interfaces to other programs and sources of data.
Other QBE topics discussed:
- Selecting qualified records
- Self-join
- Basing a query on another query
- Using SQL pass-through queries
Using ODBC to Link External Tables Stored on a Database Server
Open database connectivity (ODBC) standard: An application programming
interface (API) that provides a common language for application programs to
access and process SQL databases independent of the particular RDBMS that
is accessed. Such RDBMSs are said to be ODBC-compliant.
- The open database connectivity (ODBC) standard was developed in the early
1990s by the X/Open and SQL Access Group committees (Microsoft, basically).
- ODBC is particularly important for Internet applications because it allows
for the development of applications that access different database products.
In order to achieve this capability, ODBC uses the ANSI-standard generic
SQL statements (see Chapter 8).
- Five parameters must be defined in order to establish an ODBC connection:
- Specific ODBC driver needed
- Back-end server name to connect to
- Database name to connect to
- User ID to be granted access to the database
- Password for the user ID
- Additional information may be provided, if desired:
- Data source name (DSN)
- Windows client computer name
- Client application program's executable name
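These five parameters are exactly what ends up in a DSN-less ODBC connection string. The Python sketch below only assembles such a string in the format accepted by `pyodbc.connect()`; the driver name, server, and credentials are hypothetical values, and no connection is actually attempted:

```python
# Assemble a DSN-less ODBC connection string from the five required
# parameters. All values are hypothetical examples.
params = {
    "DRIVER": "{ODBC Driver 18 for SQL Server}",  # specific ODBC driver
    "SERVER": "backend-db01",                     # back-end server name
    "DATABASE": "sales",                          # database name
    "UID": "report_user",                         # user ID granted access
    "PWD": "s3cret",                              # password for the user ID
}
conn_str = ";".join(f"{k}={v}" for k, v in params.items())
print(conn_str)
# A real client would then pass conn_str to pyodbc.connect(...)
```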
Using JDBC to Link External Tables Stored on a Database Server
- The Java Database Connectivity (JDBC) API enables Java programs to execute
SQL statements and connect to database servers. (Note that while ODBC is
language-independent, JDBC is designed specifically for Java applications.
Java is a good language to use for client/server programming because it
is network-oriented, strong in security, and portable.)
- Oracle has embraced Java, and it appears that Oracle's proprietary language,
PL/SQL, will be replaced by Java to provide the additional programming functionality
needed beyond SQL to build database applications.
- The JDBC standard is similar in concept to Microsoft's ODBC.
See sample code using JDBC to access a JDBC-compliant database on pp. 351-353.
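Python's DB-API 2.0 plays a role analogous to JDBC's call-level interface, so the same pattern can be sketched without Java. Below, the standard-library sqlite3 module stands in for a database server, and the table and data are invented; the connect/cursor/execute/fetch sequence mirrors JDBC's Connection/Statement/ResultSet usage:

```python
import sqlite3

# DB-API sketch of the JDBC-style call-level interface:
# connect -> statement (cursor) -> execute SQL -> iterate result set.
conn = sqlite3.connect(":memory:")   # cf. DriverManager.getConnection
cur = conn.cursor()                  # cf. Connection.createStatement
cur.execute("CREATE TABLE product (name TEXT, price REAL)")
cur.execute("INSERT INTO product VALUES ('lamp', 9.99)")
cur.execute("SELECT name, price FROM product")   # cf. executeQuery
names = [row[0] for row in cur.fetchall()]       # cf. ResultSet iteration
print(names)  # ['lamp']
cur.close()
conn.close()
```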
Using Visual Basic for Applications (VBA) in Client Applications
Visual Basic for Applications (VBA): The programming language
that accompanies Access 2000 (as well as some other Office software, such
as Excel).
Access 2000 allows us to create a prototype of an application, including the
database design, menus, forms, and reports. However, there is a limit to what
we can accomplish using Access macros. Here are the reasons why we would
want to learn VBA once we have achieved basic familiarity with Access.
- Complex functionality (displaying our own error messages, graying out buttons,
etc.) can only be achieved with VBA.
- Error handling: relying on macros in a finished application is dangerous
because there is no way for the user to recover if a macro crashes.
- Faster execution will be achieved when the application uses VBA rather
than macros; code executes faster than macros
- Maintenance is easier: VBA modules are stored with the forms and reports;
macros are stored separately from their related forms and reports...
- OLE automation can be used more completely
- More programmatic control can be achieved by using VBA. Macros cannot
pass variables as parameters..., and cannot easily control action sequencing
- Reading VBA code is easier than reading macro arguments
With a few exceptions, converting a prototype from macros to VBA modules
for a production version of the application will result in a more robust application
with greater capabilities.
As we already know, the Windows OS and Windows applications are event-driven.
Event-driven: Nonprocedural programming that detects an event
when it occurs and generates an appropriate response to that event.
Programming in an event-driven environment consists of creating objects
and modifying their properties so that they behave as we want them to for
each event that affects them.
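A minimal Python sketch of the event-driven idea (all names here are invented): handlers are registered against named events and run only when the event fires, rather than in a fixed procedural order:

```python
# Tiny event-driven sketch: register handlers per event name, then
# dispatch to them whenever the event occurs.
handlers = {}

def on(event):
    """Decorator registering a handler for a named event."""
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

def fire(event, *args):
    """Dispatch an event to every handler registered for it."""
    for fn in handlers.get(event, []):
        fn(*args)

log = []

@on("button_click")
def show_message(name):
    # Responds only when a "button_click" event occurs.
    log.append(f"{name} clicked")

fire("button_click", "OK")
print(log)  # ['OK clicked']
```

GUI frameworks (and Access forms) work the same way at heart: the code does nothing until the environment detects an event and invokes the matching handler.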