-SatyaPrakash
How to Track PL/SQL code Changes in a Database?
When I say "how to track PL/SQL code changes", the first question that comes to mind is: why do we need to track those changes? The answer is that during the development phase, developers modify existing source code as well as existing database objects (procedures, functions, packages, triggers and so on). Source code changes can be tracked with any version control system. But if we want to track all the changes made to the database objects during a particular period, it is very difficult, and sometimes a developer may simply forget what modifications were made. We also sometimes modify DB objects for testing purposes and forget to revert the changes. So we can agree that tracking DB changes is very important. Let’s see how we can achieve this:
All the PL/SQL objects we write (procedures, functions, packages, triggers, etc.) are listed in the data dictionary view USER_OBJECTS, which holds the object name, object type, creation time and so on.
And the PL/SQL source for these objects is available in the ALL_SOURCE view. This view holds the object owner, object name, object type, line number and the PL/SQL code itself.
Now one can easily think of the solution for tracking DB changes.
It’s simple!!!
• Create a table that will hold the revision history of the code for all the DB objects.
• Write a schema-level trigger on the current schema. Whenever an object is created or replaced, this trigger copies its source into the new table (created in the first step), so every revision is preserved.
That’s it!!! Simple yet powerful!
Let’s look at the following example to understand better.
Example:
1. Create a table called PROJ_OBJECT_HISTORY with the following columns.
a. Change Date
b. Owner (schema name)
c. Name
d. Type
e. Line (line number of the PL/SQL code)
f. Text (PL/SQL code)
Except for the ‘Change Date’ column, all of these columns are already present in the ALL_SOURCE view.
So we can create the new table with this following DDL statement.
CREATE TABLE PROJ_OBJECT_HISTORY (
Change_date DATE,
Owner VARCHAR2(30),
Name VARCHAR2(30),
Type VARCHAR2(12),
Line NUMBER,
Text VARCHAR2(4000)
);
Now the table is ready to use.
2. Let us now create a trigger TRACK_DBOBJECTS_HIST that will populate the old revisions of the PL/SQL code into PROJ_OBJECT_HISTORY table.
CREATE OR REPLACE TRIGGER TRACK_DBOBJECTS_HIST
AFTER CREATE ON <schema_name>.SCHEMA
BEGIN
IF ORA_DICT_OBJ_TYPE IN ('PROCEDURE', 'FUNCTION',
'PACKAGE', 'PACKAGE BODY',
'TYPE', 'TYPE BODY') --Mention all the object types you wish to track.
THEN
INSERT INTO PROJ_OBJECT_HISTORY
SELECT sysdate, all_source.* FROM ALL_SOURCE
WHERE TYPE  = ORA_DICT_OBJ_TYPE
AND   NAME  = ORA_DICT_OBJ_NAME
AND   OWNER = ORA_DICT_OBJ_OWNER; --Filter on owner too, since ALL_SOURCE can list objects from other schemas.
END IF;
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20000, SQLERRM);
END;
If you also want to log the user who made the change, add one more column to the table above and include the OSUSER (from the V$SESSION view) in the trigger's INSERT statement.
Enable the trigger and start using the DB change-tracking service!!!
Friday, January 22, 2010
Thursday, January 14, 2010
Spring
-Ram Prasad
Why Spring? The fact that a developer has to start the J2EE container for each batch of testing really slows down the development cycle. The cycle should be code, test, code, test; instead it becomes code, wait, test, code, wait, test. When the developer only needs transactional services, and not persistence, remoting or security services, EJB is not the way to go. Also, in the EJB world, to develop a simple component the developer actually has to write several classes (the home interface, the local interface, and the bean itself) and, in addition, create a deployment descriptor for the bean.
“So is there an easier way?”
EJBs were created to solve complicated things, such as distributed objects and remote transactions. Unfortunately, a good number of enterprise projects do not have this level of complexity but still take on EJB’s burden of multiple Java files and deployment descriptors and heavyweight containers. With EJB, application complexity is high, regardless of the complexity of the problem being solved—even simple applications are unduly complex. With Spring, the complexity of your application is proportional to the complexity of the problem being solved.
Finally Spring was designed with the following beliefs:
- Good design is more important than the underlying technology.
- JavaBeans loosely coupled through interfaces is a good model.
- Code should be easy to test.
Spring is an open-source framework created by Rod Johnson. It was created to address the complexity of enterprise application development. Spring makes it possible to use plain-vanilla JavaBeans to achieve things that were previously only possible with EJBs.
Spring is a lightweight inversion of control and aspect-oriented container framework.
Lightweight in terms of both size and overhead.
Inversion of Control where objects are passively given their dependencies instead of creating or looking for dependent objects for themselves.
Aspect-oriented programming which enables cohesive development by separating application business logic from system services.
Container in the sense that it contains and manages the life cycle and configuration of application objects: you can configure it to create a single shared instance of a bean or to produce a new instance every time one is needed, based on a configurable prototype.
Framework where application objects are composed declaratively, typically in an XML file. Spring also provides much infrastructure functionality (transaction management, persistence framework integration, etc.), leaving the development of application logic to you.
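To make the inversion-of-control idea above concrete, here is a minimal sketch, assuming hypothetical GreetingService, EmailGreetingService and GreetingClient types (each would live in its own source file). The client never instantiates or looks up its dependency; it is handed one from outside, which is exactly what the Spring container does when it wires beans declared in an XML file.

public interface GreetingService {
    String greet(String name);
}

public class EmailGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// The client is passively given its dependency (constructor injection);
// it never calls "new EmailGreetingService()" or performs a JNDI lookup.
public class GreetingClient {
    private final GreetingService service;

    public GreetingClient(GreetingService service) {
        this.service = service;
    }

    public void run() {
        System.out.println(service.greet("Spring"));
    }
}

/*
 * Declarative wiring in a Spring XML file (bean ids are illustrative):
 *
 *   <bean id="greetingService" class="EmailGreetingService"/>
 *   <bean id="greetingClient" class="GreetingClient">
 *       <constructor-arg ref="greetingService"/>
 *   </bean>
 */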
Spring Modules
The core container
Spring’s core container provides the fundamental functionality of the Spring framework. In this module we’ll find Spring’s BeanFactory, the heart of any Spring-based application. A BeanFactory is an implementation of the factory pattern that applies IoC to separate your application’s configuration and dependency specifications from the actual application code.
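As a rough sketch of how application code obtains objects from a BeanFactory, assuming the hypothetical beans.xml wiring and bean names from the previous sketch (XmlBeanFactory and ClassPathResource are real Spring classes of that era):

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.ClassPathResource;

public class Bootstrap {
    public static void main(String[] args) {
        // The container reads beans.xml, instantiates the beans and injects
        // their dependencies; the application just asks for a finished object.
        BeanFactory factory = new XmlBeanFactory(new ClassPathResource("beans.xml"));
        GreetingClient client = (GreetingClient) factory.getBean("greetingClient");
        client.run();
    }
}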
Application context module
The core module’s BeanFactory makes Spring a container, but the context module is what makes it a framework. This module extends the concept of BeanFactory, adding support for internationalization (I18N) messages, application life cycle events, and validation. In addition, this module supplies many enterprise services such as e-mail, JNDI access, EJB integration, remoting, and scheduling. Also included is support for integration with templating frameworks such as Velocity and FreeMarker.
Spring’s AOP module
Spring provides rich support for aspect-oriented programming in its AOP module. This module serves as the basis for developing your own aspects for your Spring-enabled application. To ensure interoperability between Spring and other AOP frameworks, much of Spring’s AOP support is based on the API defined by the AOP Alliance. The AOP Alliance is an open-source project whose goal is to promote adoption of AOP and interoperability among different AOP implementations by defining a common set of interfaces and components. The Spring AOP module also introduces metadata programming to Spring. Using Spring’s metadata support, you are able to add annotations to your source code that instruct Spring on where and how to apply aspects.
JDBC abstraction and the DAO module
Working with JDBC often results in a lot of boilerplate code that gets a connection, creates a statement, processes a result set, and then closes the connection. Spring’s JDBC and Data Access Objects (DAO) module abstracts away the boilerplate code so that you can keep your database code clean and simple, and prevents problems that result from a failure to close database resources. This module also builds a layer of meaningful exceptions on top of the error messages given by several database servers. In addition, this module uses Spring’s AOP module to provide transaction management services for objects in a Spring application.
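A small sketch of what the JDBC abstraction buys you. JdbcTemplate is the real Spring class; the account table, column names and DAO class here are made up for illustration, and the DataSource would normally be injected by the container.

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class AccountDao {
    private final JdbcTemplate jdbcTemplate;

    public AccountDao(DataSource dataSource) {
        // JdbcTemplate opens and closes connections, statements and result
        // sets for you, and translates SQLExceptions into Spring's
        // DataAccessException hierarchy.
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int countAccounts() {
        return jdbcTemplate.queryForInt("SELECT COUNT(*) FROM account");
    }

    public String findOwner(long accountId) {
        return (String) jdbcTemplate.queryForObject(
                "SELECT owner_name FROM account WHERE id = ?",
                new Object[] { accountId }, String.class);
    }
}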
Object/relational mapping integration module
For those who prefer using an object/relational mapping (ORM) tool over straight JDBC, Spring provides the ORM module. Spring doesn’t attempt to implement its own ORM solution, but does provide hooks into several popular ORM frameworks, including Hibernate, JDO, and iBATIS SQL Maps. Spring’s transaction management supports each of these ORM frameworks as well as JDBC.
Spring’s web module
The web context module builds on the application context module, providing a context that is appropriate for web-based applications. In addition, this module contains support for several web-oriented tasks such as transparently handling multipart requests for file uploads and programmatic binding of request parameters to your business objects. It also contains integration support with Jakarta Struts.
The Spring MVC framework
Spring comes with a full-featured Model/View/Controller (MVC) framework for building web applications. Although Spring can easily be integrated with other MVC frameworks, such as Struts, Spring’s MVC framework uses IoC to provide a clean separation of controller logic from business objects. It also allows you to declaratively bind request parameters to your business objects. What’s more, Spring’s MVC framework can take advantage of any of Spring’s other services, such as I18N messaging and validation.
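A minimal Spring MVC controller sketch in the style of the time. The Controller interface and ModelAndView are real Spring types; the class name, view name and model attribute are assumptions for illustration.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.Controller;

public class HelloController implements Controller {
    public ModelAndView handleRequest(HttpServletRequest request,
                                      HttpServletResponse response) throws Exception {
        // Controller logic stays thin: build a model and hand it to a view.
        // The DispatcherServlet resolves "hello" to a JSP (or another view type).
        return new ModelAndView("hello", "message", "Hello from Spring MVC");
    }
}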
Aspect Oriented Programming
System services such as transaction management and security often find their way into components whose core responsibility is something else. These system services are commonly referred to as cross-cutting concerns because they tend to cut across multiple components in a system.
Spreading these concerns across multiple components causes two problems (a sketch of the aspect-oriented alternative follows this list):
- The code that implements the systemwide concerns is duplicated across multiple components.
- Your components are littered with code that isn’t aligned with their core functionality.
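As a rough sketch of how an aspect keeps a cross-cutting concern (here, logging) out of the business component, using Spring's classic AOP support: ProxyFactory and MethodBeforeAdvice are real Spring types, while the AccountService interface and its implementation are hypothetical.

import java.lang.reflect.Method;
import org.springframework.aop.MethodBeforeAdvice;
import org.springframework.aop.framework.ProxyFactory;

// Hypothetical business interface and implementation, free of any logging code.
interface AccountService {
    void transfer(String from, String to, double amount);
}

class AccountServiceImpl implements AccountService {
    public void transfer(String from, String to, double amount) {
        // core business logic only
    }
}

public class LoggingAspectDemo {
    public static void main(String[] args) {
        // The cross-cutting concern is defined once, in one place.
        MethodBeforeAdvice logging = new MethodBeforeAdvice() {
            public void before(Method method, Object[] methodArgs, Object target) {
                System.out.println("Calling " + method.getName());
            }
        };

        // Spring weaves the advice around the target via a dynamic proxy.
        // (In practice this wiring is usually done declaratively in XML.)
        ProxyFactory factory = new ProxyFactory(new AccountServiceImpl());
        factory.addAdvice(logging);
        AccountService service = (AccountService) factory.getProxy();
        service.transfer("A-100", "B-200", 50.0); // logging happens before the call
    }
}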
Spring Alternatives
Before we use Spring for our projects, we should cover what else is out there in the world of J2EE frameworks.
a) Comparing Spring to EJB:
The decision to choose one over the other is not one to be taken lightly. Also, you do not necessarily have to choose only Spring or EJB; Spring can be used to support existing EJBs as well. EJB is a standard, which gives it:
Wide industry support—There is a whole host of vendors supporting this technology, including industry heavyweights Sun, IBM and Oracle. This is comforting to many companies because they feel that by selecting EJB as their J2EE framework, they are going with a safe choice.
Wide adoption—EJB as a technology is deployed in thousands of companies around the world. As a result, EJB is in the tool bag of most J2EE developers. This means that if a developer knows EJB, they are more likely to find a job. At the same time, companies know that if they adopt EJB, there is an abundance of developers capable of developing their applications.
Toolability—The EJB specification is a fixed target, making it easy for vendors to produce tools that help developers create EJB applications more quickly and easily.
b) Struts
An important difference is how each handles form input. Typically, when a user submits a web form, the incoming data maps to an object in your application. In order to handle form submissions, Struts requires ActionForm classes to handle the incoming parameters. This means you need to create a class solely for mapping form submissions to your domain objects. Spring allows you to map form submissions directly to an object without the need for an intermediary, leading to easier maintenance. Struts comes with built-in support for declarative form validation: you can define rules for validating incoming form data in XML. This keeps validation logic out of your code, where it can be cumbersome and messy. Spring does not come with declarative validation. If you already have an investment in Struts, or you just prefer it as your web framework, Spring has a package devoted to integrating Struts with Spring.
c) Persistence Frameworks
There really isn’t a direct comparison between Spring and any persistence framework. As mentioned earlier, Spring does not contain a built-in persistence framework; its ORM module integrates other good frameworks with the rest of Spring. Spring provides integration points for Hibernate, JDO, OJB, and iBATIS. Spring also provides a very rich framework for writing JDBC. JDBC requires a lot of boilerplate code; Spring’s JDBC module handles this boilerplate, allowing you to focus on writing queries and handling the results.
Saturday, January 9, 2010
More on Java Portlets
-Satya Swathi
There are three logical components to consider when developing to the Java Portlet Specification.
Portal
A portal is an application which aggregates portlet applications together in a presentable format. Beyond merely being a presentation layer, a portal typically allows users to customize their presentation, including what portlet applications to display. A portal can also provide a convenient single sign-on mechanism for users.
Portlet
A portlet is an individual web component that is made accessible to users via a portal interface. Typically, a single portlet generates only a fragment of the markup that a user finds from his/her browser. Users issue requests against a portlet from the portal page. The portal in turn forwards those requests to a portlet container, which manages the lifecycle of a portlet.
Portlet Containers
As we begin to get into some of the concepts behind the Java Portlet Specification, you will see many similarities to J2EE servlets. This is understandable when you realize that portlet applications are essentially extended web applications, a layer on top of servlets if you will. As a result, portlet developers have access to not only portlet-specific objects, but also the underlying servlet constructs.
The Portlet Lifecycle
As stated earlier, it is the job of the portlet container to manage a portlet's lifecycle. Each portlet exposes four lifecycle methods.
The init(PortletConfig config) method is called once, immediately after a new portlet instance is created. It can be used to perform startup tasks and is akin to a servlet's init method. PortletConfig represents read-only configuration data, specified in a portlet's descriptor file, portlet.xml (more on this file later). For example, PortletConfig provides access to initialization parameters.
The processAction(ActionRequest request, ActionResponse response) method is invoked when the user triggers an action on the portlet, for example by submitting a form through an action URL. It runs before any rendering and is the place to change state such as portlet mode, window state, or preferences. ActionRequest and ActionResponse are subinterfaces of PortletRequest and PortletResponse.
The render(RenderRequest request, RenderResponse response) method follows processAction in the chain of lifecycle methods. Render generates the markup that will be made accessible to the portal user. RenderRequest and RenderResponse, also subinterfaces of PortletRequest and PortletResponse, are available during the rendering of a portlet. The way in which the render method generates output may depend on the portlet's current state.
The destroy() method is the last lifecycle method, called just before a portlet is garbage collected and provides a last chance to free up portlet resources.
Portlet Mode and Window State
A portlet container manages a portlet's lifecycle, but it also controls two pieces of information that represent a portlet's state: portlet mode and window state.
A portlet's mode determines what actions should be performed on a portlet. View, Edit, and Help are the three standard modes. Optional custom modes can be specified as well.
The class GenericPortlet, found in the portlet.jar archive file, is a convenience class that implements the render method and defines three empty methods, doView, doEdit, and doHelp. Your subclass of GenericPortlet can implement any of these methods that you desire. When the container invokes the render method, render will call one of these methods, based upon portlet mode. For example, doEdit might prepare an HTML form to customize a portlet. doView will help generate display markup, and doHelp might build a portlet's help screen.
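A minimal GenericPortlet subclass, as a sketch of the pattern just described. The javax.portlet types are from the Portlet API; the class name and output are made up, and a class like this is what the portlet.xml shown later would point at.

import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class MyPortlet extends GenericPortlet {
    // GenericPortlet's render() dispatches here when the portlet mode is VIEW.
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // A portlet emits only a fragment of markup, not a full HTML page.
        out.println("<p>Hello from MyPortlet (VIEW mode)</p>");
    }
}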
Window State determines how much content should appear in a portlet. There are three standard window states defined in the specification: Normal, Minimized, and Maximized. Normal will display the portlet's data in the amount of window space defined by the portal application, Maximized will present only that portlet in the user's window, and Minimized may perhaps display a single line of text or nothing at all.
Window state and portlet mode are programmatically accessible throughout the life of a portlet application. They can be read from any portlet API method, such as render. A portlet's processAction method has the ability to modify their values.
PortletPreferences and PortletSession
The specification defines a few different methods that involve storing user information, either permanently or for the length of a client's session. The two most important are PortletPreferences and PortletSession.
PortletPreferences is an object that can be used to store persistent data for a portlet user. PortletPreferences stores pairs of names and values that are retrievable during the render phase through a getValue method. In processAction, values can be set and saved through the setValue and store methods, respectively. Optionally, you may include a PreferencesValidator object to check values prior to persisting preferences (via its validate method). Default preference values may be specified in a portlet's descriptor file.
In servlet programming, HttpSession lets you store session-specific data. Similarly, the Java Portlet Specification defines the PortletSession interface for storing information within a user's session. PortletSession identifies two scopes, PORTLET_SCOPE and APPLICATION_SCOPE. In portlet scope, you can store data specific to a single portlet instance within a user's session. Application scope can store data shared across all of a user's portlets within the same session.
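A sketch of how processAction might store a preference and stash data in the session. getPreferences, setValue, store, getPortletSession and setAttribute are Portlet API calls; the preference name, request parameter and attribute key are assumptions for illustration.

import java.io.IOException;
import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.PortletPreferences;
import javax.portlet.PortletSession;

public class PreferencesPortlet extends GenericPortlet {
    public void processAction(ActionRequest request, ActionResponse response)
            throws PortletException, IOException {
        // Persist a per-user preference; store() makes it survive the session.
        PortletPreferences prefs = request.getPreferences();
        prefs.setValue("Country1", request.getParameter("country"));
        prefs.store();  // runs the PreferencesValidator, if one is configured

        // Session data: PORTLET_SCOPE is private to this portlet instance,
        // APPLICATION_SCOPE is shared by all portlets in the web application.
        PortletSession session = request.getPortletSession();
        session.setAttribute("lastCountry",
                request.getParameter("country"),
                PortletSession.APPLICATION_SCOPE);
    }
}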
Few More Features
Below is a list of some of the additional features defined by the Java Portlet Specification.
- An include mechanism for incorporating servlets and JSP pages into your portlets. A PortletRequestDispatcher accomplishes this, much the same way a RequestDispatcher would in the servlet world. This allows your portlet methods to act as controllers, redirecting work to specified servlets and JSPs (see the sketch after this list).
- A way to create non-standard portlet extensions, such as custom portlet modes. The PortalContext object can be used to query information about a portal vendor's supported extensions. Portlet developers can then decide whether they would like to take advantage of any of those non-standard features.
- A taglib is provided for use in a portlet's JSP pages. These tags provide a way to construct the URLs a portlet needs to refer back to itself. The taglib also provides a way to include all the necessary portlet-specific classes a JSP will need.
- The ability to manage portlet security, like designating a portlet to run only over HTTPS.
- A Method for accessing ResourceBundles for portlet localization.
- The option of declaratively specifying an expiration cache for a portlet.
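For the include mechanism mentioned in the first item above, a rough sketch: getPortletContext, getRequestDispatcher and include are real Portlet API calls, while the class name and JSP path are made up.

import java.io.IOException;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.PortletRequestDispatcher;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class DispatchingPortlet extends GenericPortlet {
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        // Delegate markup generation to a JSP bundled in the portlet .war;
        // the portlet method acts as a controller.
        PortletRequestDispatcher dispatcher =
                getPortletContext().getRequestDispatcher("/WEB-INF/jsp/view.jsp");
        dispatcher.include(request, response);
    }
}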
Packaging a Java Portlet
The Java Portlet Specification allows a portlet or portlets to be packaged as a .war file for deployment to a J2EE application server. Just like a .war file used to deploy a typical J2EE web application, it contains a WEB-INF/web.xml file to configure the application context. However, with a portlet application, the WEB-INF folder must also contain a portlet.xml file. The portlet.xml file is a descriptor file, containing configuration details about all bundled portlets in the .war file.
The following listing shows a simple example of a portlet.xml file. Note how many of the previously described constructs (portlet mode, preferences, etc.) are defined in this file. Here is a portlet.xml example:
<portlet-app>
<portlet>
<portlet-name>MyPortlet</portlet-name>
<portlet-class>com.abc.portlet.MyPortlet</portlet-class>
<init-param>
<!-- Init param, available in the portlet's PortletConfig instance -->
<name>view-to-present</name>
<value>/portlet/MyPortlet/startup_view.jsp</value>
</init-param>
<!-- Default expiration for the portlet cache (300 seconds = 5 minutes) -->
<expiration-cache>300</expiration-cache>
<supports>
<!-- Portlet supports HTML markup -->
<mime-type>text/html</mime-type>
<!-- MyPortlet supports the VIEW and EDIT modes -->
<portlet-mode>VIEW</portlet-mode>
<portlet-mode>EDIT</portlet-mode>
</supports>
<resource-bundle>com.abc.portlet.MyResourceBundle</resource-bundle>
<portlet-preferences>
<!-- PortletPreferences name/value pairs -->
<preference>
<name>Country1</name>
<value>USA</value>
</preference>
<preference>
<name>Country2</name>
<value>Japan</value>
</preference>
<!-- A PreferencesValidator will check any preferences set -->
<preferences-validator>com.abc.portlet.validate.CountryValidator</preferences-validator>
</portlet-preferences>
</portlet>
</portlet-app>
Application server: A system that provides the execution environment that is at the core of network computing or web-based architectures, providing a full set of services.
Portal server: an application server running portal software or a portal application.
Open source portals
GridSphere, Pluto, uPortal, Jetspeed, jPorta, Cocoon, oPortal, CHEF, Sakai, Liferay, eXo, Redhat Portal Server, Gluecode Portal Foundation Server, Lutece
AJAX
-Sudhamayi
Before actually going into AJAX, let’s see what Rich Internet Applications provide us with. Rich Internet Applications (RIAs) are web applications that have most of the characteristics of desktop applications, such as:
- The program responds intuitively and quickly
- Program gives a user meaningful feedback instantly
- Things happen naturally. A simple event like a mouseover could trigger an event.
What is AJAX?
AJAX is one of the approaches to building RIAs. AJAX stands for Asynchronous JavaScript and XML. AJAX is a group of interrelated web development techniques used on the client side to create interactive web applications, making their user interfaces more responsive and interactive.
Ajax is not a technology in itself, but a term that describes a "new" approach to using a number of existing technologies together, including HTML or XHTML, Cascading Style Sheets, JavaScript, the Document Object Model, XML, XSLT, and the XMLHttpRequest object, the combination of which makes the application faster and more responsive.
Why do we need AJAX?
With AJAX the user experiences intuitive and natural interaction. Clicking is not always required; a simple mouse movement can be a sufficient event trigger. A "partial screen update" replaces the "click, wait, and refresh" user interaction model: only the user interface elements that contain new information are updated, while the rest remain uninterrupted. AJAX is data-driven as opposed to page-driven. The UI is handled in the client while data is provided by the server.
Asynchronous communication replaces the synchronous request/response model. A user can continue to use the application while the client program requests information from the server in the background. And since most of the client-side logic is written in JavaScript, it is easy to read and understand.
Comparison with Classic Web application
In an AJAX application the communication with the server is asynchronous. Meaning, if a request is sent to the server to perform some select operation on the database, the user can still continue filling in the rest of the form. Meanwhile the server fetches the data from the backend database and updates the frontend display; the user need not wait for all the data to be fetched. Similarly, if a form has a lot of fields, we need not wait until all the fields are filled before submitting the form for validation. Immediately after a form field is filled, it can be sent to the server and validated.
Technologies used in AJAX
- Javascript:
- Loosely typed scripting language
- JavaScript function is called when an event in a page occurs
- Glue for the whole AJAX operation
- DOM
- API for accessing and manipulating structured documents
- Represents the structure of XML and HTML documents
- CSS
- Allows for a clear separation of the presentation style from the content. May be changed programmatically by JavaScript.
- XMLHttpRequest
- A JavaScript object that performs asynchronous interaction with the server
- Adopted by modern browsers like Mozilla, Firefox, Safari and Opera
- Communicates with a server via standard HTTP
- GET/POST
- This object works in the background for performing asynchronous communication with the backend server. This does not interrupt user operation.
- A client event occurs (like mouse click or an on change event)
- An XMLHttpRequest object is created
- The XMLHttpRequest object is configured
- The XMLHttpRequest object makes an asynchronous request to the server side component
- The server-side component returns an XML document containing the result (a sketch of such a component follows this list)
- The XMLHttpRequest object calls the callback() function and processes the result
- The HTML DOM is updated and the intended output is displayed.
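On the server side, the "server-side component" in the steps above is often just a servlet that writes XML back to the XMLHttpRequest object. A minimal sketch, where the servlet name, request parameter and XML format are assumptions for illustration:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ValidationServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String email = request.getParameter("email");
        boolean valid = email != null && email.contains("@");

        // The XMLHttpRequest callback reads this via responseXML and updates the DOM.
        response.setContentType("text/xml");
        response.setHeader("Cache-Control", "no-cache"); // keep the browser from caching it
        PrintWriter out = response.getWriter();
        out.println("<?xml version=\"1.0\"?>");
        out.println("<validation valid=\"" + valid + "\"/>");
    }
}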
PROS
- Most viable Rich Internet Application technology so far
- Tremendous industry momentum
- Several toolkits and frameworks are emerging, e.g. SPRY, DWR, Thinwire (open-source Ajax RIA framework), Taconite (Ajax framework), Buffalo Ajax framework, Salto, Tigermouse (Ajax framework for PHP), Zaxas, etc.
- No need to download code & no plug-in required
- Easy to learn
CONS
- Still browser incompatibility
- JavaScript is hard to maintain and debug
Usage scenarios
- Real-time server-side input form data validation
- Removes the need to have validation logic at both client side for user responsiveness and at server side for security and other reasons.
- Auto-Completion: Email address, name, or city name may be auto-completed as the user types
- Master detail operation: Based on a user selection, more detailed information can be fetched and displayed
- Advanced GUI widgets and controls: Controls such as tree controls, menus, and progress bars may be provided that do not require page refreshes
- Refreshing data: HTML pages may poll data from a server for up-to-date data such as scores, stock quotes, weather, or application-specific data.
Real-world examples
- Google maps - http://maps.google.com/
- NetFlix - http://www.netflix.com/BrowseSelection?lnkctr=nmhbs
- Gmail - http://gmail.com/
- Yahoo maps - http://maps.yahoo.com/
Friday, January 8, 2010
Grid Computing
-Dhanapathi
Data is growing exponentially, at a rate of approximately 45 GB per person, and it is doubling every five years. Data centres in the US consume about 100 billion kWh of electricity per year. Finding better ways to store and manage data is not enough; we need greener ways. Data centres are power hungry. We need solutions that ensure less power usage, less cooling, less water usage and less management space. We need a more efficient, more flexible and more environmentally friendly solution, i.e. one that consumes fewer resources.
The solution to the problem is Grid Computing. Grid computing is the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations. Server virtualization and clustering allow applications to draw on a shared pool of IT resources that can be distributed and redistributed.
What is Grid Computing
Grid Computing refers to the automated sharing and coordination of the collective processing power of many widely scattered, robust computers that are not normally centrally controlled, and that are subject to open standards.
Grid Computing allows the virtualization of distributed computing and data resources such as processing, network bandwidth and storage capacity to provide a single system image, granting users and applications access to vast IT capabilities.
How does it Work
The user enters the Grid through a software interface. To use the Grid, the user must have proper authentication and authorization privileges, so the system performs security validations. Once the system finds that the user is authenticated, it passes the user's request to the Resource Broker. The Resource Broker checks the catalogue of the Information Service for a resource that has the ability to perform the job requested by the user, and then directs the job to the corresponding resource.
Characteristics
- Distributed system: several computers are connected through a network infrastructure.
- High security: strict authentication and encryption are needed.
- System management: sophisticated management is needed to keep the system running, monitor it, correct failures and so on.
- Site autonomy: the autonomy of each participating site must be respected; sites retain control over their own resources, and users work on the systems they are entitled to use.
CPU-scavenging, cycle-scavenging, cycle stealing or shared computing creates a “grid” from the unused resources in a network of participants (whether worldwide or internal to an organization). Typically this technique uses desktop computer instruction cycles that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices. Volunteer computing projects use the CPU scavenging model almost exclusively.
In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.
Key Problems
- Security: how to authenticate and authorize users in a decentralized way, and how to ensure the safety of data that is transmitted.
- Resource Management: how to manage the resources without a central entity.
- Data Management: how to copy data to other sites. Since the data goes over the Internet, it must be ensured that data is transmitted at a fast pace.
- Information Services: how to retrieve data from a particular site, and how to find the most appropriate site.
Grid computing is where more than one computer coordinates to solve a problem together. It is often used for problems involving a lot of number crunching, which are easily parallelisable.
Cloud computing is where an application doesn't access resources it requires directly, rather it accesses them through something like a service. So instead of talking to a specific hard drive for storage, and a specific CPU for computation, etc. it talks to some service that provides these resources. The service then maps any requests for resources to its physical resources, in order to provide for the application. Usually the service has access to a large amount of physical resources, and can dynamically allocate them as they are needed.
In this way, if an application requires only a small amount of some resource, say computation, then the service only allocates a small amount, say on a single physical CPU (that may be shared with some other application using the service). If the application requires a large amount of some resource, then the service allocates that large amount, say a grid of CPUs. The application is relatively oblivious to this, and all the complex handling and coordination is performed by the service, not the application. In this way the application can scale well.
For example a web site written "on the cloud" may share a server with many other web sites while it has a low amount of traffic, but may be moved to its own dedicated server, or grid of servers, if it ever has massive amounts of traffic. This is all handled by the cloud service, so the application shouldn't have to be modified drastically to cope. A cloud would usually use a grid. A grid is not necessarily a cloud or part of a cloud.
Conclusion
Grid Computing is a strategy to save money and become greener at the same time. Grid Computing can be considered a mainstream conveyor belt of modern engineering design and development. Grid computing can be used in areas such as designing cars, planes, computers, chips and new drugs, oil and gas field exploration, human genome study, predicting climate change, and calculating risk in financial investments. Using Grid Computing as a part of Cloud Computing will increase the system's ability and performance.