Friday, January 22, 2010

Tracking DB Changes

-SatyaPrakash

How to Track PL/SQL code Changes in a Database?

When we talk about tracking PL/SQL code changes, the first question that comes to mind is: why do we need to track those changes? During the development phase, developers modify existing source code as well as existing database objects (procedures, functions, packages, triggers, and so on). Source code changes can be tracked with any version control system, but tracking all the changes made to database objects over a particular period is much harder: a developer may forget what modifications he made, or may modify a DB object for testing purposes and forget to revert the change. So one can agree that tracking DB changes is very important. Let’s see how we can achieve this:

All the PL/SQL objects we write (procedures, functions, packages, triggers, etc.) are listed in a data dictionary view called USER_OBJECTS, which holds the object name, object type, creation time, and so on.
The PL/SQL source code for these objects is available through the ALL_SOURCE view. This view holds the object owner, name, type, line number, and the source text itself.

Now one can easily think of the solution for tracking DB changes.
It’s simple!!!

• Create a table which holds the modified code for all the DB objects.
• Write a schema level trigger on the current schema. This trigger is responsible for inserting old revisions of PL/SQL code into the new table (created in first step).

That’s it!!! Simple yet powerful!

Let’s look at the following example to understand better.

Example:

1. Create a table called PROJ_OBJECT_HISTORY with the following columns.

a. Change Date
b. Owner (schema name)
c. Name
d. Type
e. Line (line number of the PL/SQL code)
f. Text (PL/SQL code)

Except for the ‘Change Date’ column, all of these columns are present in the ALL_SOURCE view, so we can create the new table with the following DDL statement.

CREATE TABLE PROJ_OBJECT_HISTORY (
Change_date DATE,
Owner VARCHAR2(30),
Name VARCHAR2(30),
Type VARCHAR2(12),
Line NUMBER,
Text VARCHAR2(4000)
);

Now the table is ready to use.


2. Let us now create a trigger TRACK_DBOBJECTS_HIST that will populate the old revisions of the PL/SQL code into PROJ_OBJECT_HISTORY table.

CREATE OR REPLACE TRIGGER TRACK_DBOBJECTS_HIST
BEFORE CREATE ON <SCHEMA_NAME>.SCHEMA
BEGIN
  -- List all the object types you wish to track.
  IF ORA_DICT_OBJ_TYPE IN ('PROCEDURE', 'FUNCTION',
                           'PACKAGE', 'PACKAGE BODY',
                           'TYPE', 'TYPE BODY')
  THEN
    -- The trigger fires BEFORE the CREATE OR REPLACE takes effect,
    -- so ALL_SOURCE still holds the old revision of the object.
    INSERT INTO PROJ_OBJECT_HISTORY
    SELECT SYSDATE, owner, name, type, line, text
      FROM ALL_SOURCE
     WHERE owner = ORA_DICT_OBJ_OWNER
       AND type  = ORA_DICT_OBJ_TYPE
       AND name  = ORA_DICT_OBJ_NAME;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-20000, SQLERRM);
END;


If you want to log the user who made the changes, add one more column to the table above and include the user in the trigger’s INSERT statement (for example, the OSUSER column from the V$SESSION view, or the ORA_LOGIN_USER event attribute function).

Enable the trigger and start using the DB changes tracking service!!!

Thursday, January 14, 2010

Spring

-Ram Prasad
Why Spring?                    
                       The fact that a developer has to start the J2EE container for each batch of testing really slows down the development cycle. The cycle should go code, test, code, test; instead it becomes code, wait, test, code, wait, test. When a developer needs only transactional services, and not persistence, remoting, or security services, EJB is not the way to go. Also, in the EJB world, to develop even a simple component the developer has to write several classes—the home interface, the local interface, and the bean itself—and, in addition, create a deployment descriptor for the bean.

“So is there an easier way?”

EJBs were created to solve complicated things, such as distributed objects and remote transactions. Unfortunately, a good number of enterprise projects do not have this level of complexity but still take on EJB’s burden of multiple Java files and deployment descriptors and heavyweight containers. With EJB, application complexity is high, regardless of the complexity of the problem being solved—even simple applications are unduly complex. With Spring, the complexity of your application is proportional to the complexity of the problem being solved.
Finally Spring was designed with the following beliefs:
  • Good design is more important than the underlying technology.
  • JavaBeans loosely coupled through interfaces is a good model.
  • Code should be easy to test.
What is Spring ?
                        Spring is an open-source framework, created by Rod Johnson. It was created to address the complexity of enterprise application development. Spring makes it possible to use plain-vanilla JavaBeans to achieve things that were previously only possible with EJBs.
             Spring is a lightweight inversion of control and aspect-oriented container framework.

Lightweight in terms of both size and overhead.
Inversion of Control where objects are passively given their dependencies instead of creating or looking for dependent objects for themselves.
Aspect-oriented programming which enables cohesive development by separating application business logic from system services.
Container in the sense that it contains and manages the life cycle and configuration of application objects: it can either create one single instance of your bean or produce a new instance every time one is needed, based on a configurable prototype.
Framework where application objects are composed declaratively, typically in an XML file. Spring also provides much infrastructure functionality (transaction management, persistence framework integration, etc.), leaving the development of application logic to you.
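To make the inversion-of-control idea concrete, here is a minimal plain-Java sketch (no Spring involved; the class and interface names are invented for illustration). The service is passively handed its collaborator through its constructor instead of creating or looking it up itself:

```java
// A minimal constructor-injection sketch: the service never creates or
// looks up its collaborator; the assembler hands it in from outside.
interface Quotes {
    double latestPrice(String symbol);
}

// A hypothetical implementation the assembler (container) would choose.
class FixedQuotes implements Quotes {
    public double latestPrice(String symbol) { return 42.0; }
}

class PortfolioService {
    private final Quotes quotes;   // dependency is given, not created

    PortfolioService(Quotes quotes) { this.quotes = quotes; }

    double valueOf(String symbol, int shares) {
        return quotes.latestPrice(symbol) * shares;
    }
}

class IocDemo {
    public static void main(String[] args) {
        // The "assembler" (in Spring, the container) does the wiring.
        PortfolioService service = new PortfolioService(new FixedQuotes());
        System.out.println(service.valueOf("ACME", 10)); // prints 420.0
    }
}
```

Because PortfolioService depends only on the Quotes interface, swapping in a different implementation (a stub for testing, say) needs no change to the service itself — which is exactly the loose coupling through interfaces listed above.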

Spring Modules
The core container
           Spring’s core container provides the fundamental functionality of the Spring framework. In this module we’ll find Spring’s BeanFactory, the heart of any Spring-based application. A BeanFactory is an implementation of the factory pattern that applies IoC to separate your application’s configuration and dependency specifications from the actual application code.
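As a rough sketch of what a bean factory does — this is not Spring's actual BeanFactory API, just a toy illustration of the factory-plus-IoC idea — imagine a registry that keeps bean-building recipes separate from the code that uses the beans, and that can hand out a single shared instance per name:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy "bean factory": configuration (how to build each bean) is kept
// separate from the code that uses the beans. Spring's real BeanFactory
// is far richer; this only illustrates the separation that IoC buys you.
class ToyBeanFactory {
    private final Map<String, Supplier<Object>> definitions = new HashMap<>();
    private final Map<String, Object> singletons = new HashMap<>();

    void define(String name, Supplier<Object> recipe) {
        definitions.put(name, recipe);
    }

    // Singleton behaviour: build once, hand out the same instance after.
    Object getBean(String name) {
        return singletons.computeIfAbsent(name, n -> definitions.get(n).get());
    }
}

class FactoryDemo {
    public static void main(String[] args) {
        ToyBeanFactory factory = new ToyBeanFactory();
        factory.define("greeting", () -> new StringBuilder("hello"));
        // Application code asks by name; it never constructs the object itself.
        System.out.println(factory.getBean("greeting") == factory.getBean("greeting")); // true
    }
}
```

Spring's real container adds XML configuration, dependency wiring, prototype scope, life-cycle callbacks, and much more; the point here is only the separation of configuration from use.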

Application context module
             The core module’s BeanFactory makes Spring a container, but the context module is what makes it a framework. This module extends the concept of BeanFactory, adding support for internationalization (I18N) messages, application life cycle events, and validation. In addition, this module supplies many enterprise services such as e-mail, JNDI access, EJB integration, remoting, and scheduling. Also included is support for integration with templating frameworks such as Velocity and FreeMarker.

Spring’s AOP module
              Spring provides rich support for aspect-oriented programming in its AOP module. This module serves as the basis for developing your own aspects for your Spring-enabled application. To ensure interoperability between Spring and other AOP frameworks, much of Spring’s AOP support is based on the API defined by the AOP Alliance. The AOP Alliance is an open-source project whose goal is to promote adoption of AOP and interoperability among different AOP implementations by defining a common set of interfaces and components. The Spring AOP module also introduces metadata programming to Spring. Using Spring’s metadata support, you are able to add annotations to your source code that instruct Spring on where and how to apply aspects.


JDBC abstraction and the DAO module
                   Working with JDBC often results in a lot of boilerplate code that gets a connection, creates a statement, processes a result set, and then closes the connection. Spring’s JDBC and Data Access Objects (DAO) module abstracts away the boilerplate code so that you can keep your database code clean and simple, and prevents problems that result from a failure to close database resources. This module also builds a layer of meaningful exceptions on top of the error messages given by several database servers.  In addition, this module uses Spring’s AOP module to provide transaction management services for objects in a Spring application.
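The shape of that boilerplate removal can be sketched with the template-callback pattern. Since a runnable JDBC example would need a live database, the sketch below substitutes a hypothetical FakeConnection for a real java.sql.Connection; the structure mirrors what Spring's JDBC template does, but it is not Spring's API:

```java
import java.util.List;
import java.util.function.Function;

// The template-callback idea behind Spring's JDBC support, sketched with a
// stand-in "connection" so the example runs without a database. The template
// owns acquire/release; the caller supplies only the interesting part.
class FakeConnection {
    boolean open = true;
    List<String> query(String sql) {
        if (!open) throw new IllegalStateException("connection closed");
        return List.of("row1", "row2");   // canned result set
    }
    void close() { open = false; }
}

class QueryTemplate {
    // Boilerplate (open, close, error handling) lives here, once.
    <T> T execute(Function<FakeConnection, T> callback) {
        FakeConnection conn = new FakeConnection();
        try {
            return callback.apply(conn);
        } finally {
            conn.close();   // always released, even on failure
        }
    }
}

class TemplateDemo {
    public static void main(String[] args) {
        QueryTemplate template = new QueryTemplate();
        // Caller writes only the query logic; no resource management.
        List<String> rows = template.execute(c -> c.query("SELECT ..."));
        System.out.println(rows.size()); // prints 2
    }
}
```

The "failure to close database resources" problem mentioned above disappears because the finally block runs for every caller automatically.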

Object/relational mapping integration module
                       For those who prefer using an object/relational mapping (ORM) tool over straight JDBC, Spring provides the ORM module. Spring doesn’t attempt to implement its own ORM solution, but does provide hooks into several popular ORM frameworks, including Hibernate, JDO, and iBATIS SQL Maps. Spring’s transaction management supports each of these ORM frameworks as well as JDBC.

Spring’s web module
                      The web context module builds on the application context module, providing a context that is appropriate for web-based applications. In addition, this module contains support for several web-oriented tasks such as transparently handling multipart requests for file uploads and programmatic binding of request parameters to your business objects. It also contains integration support with Jakarta Struts.

The Spring MVC framework
                    Spring comes with a full-featured Model/View/Controller (MVC) framework for building web applications. Although Spring can easily be integrated with other MVC frameworks, such as Struts, Spring’s MVC framework uses IoC to provide a clean separation of controller logic from business objects. It also allows you to declaratively bind request parameters to your business objects. What’s more, Spring’s MVC framework can take advantage of any of Spring’s other services, such as I18N messaging and validation.

Aspect Oriented Programming
                   System services such as transaction management and security often find their way into components whose core responsibility is something else. These system services are commonly referred to as cross-cutting concerns because they tend to cut across multiple components in a system.


Problems caused by spreading these concerns across multiple components are:
  • The code that implements the systemwide concerns is duplicated across multiple components.
  • Your components are littered with code that isn’t aligned with their core functionality.
We can think of aspects as blankets that cover many components of an application.
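One way to picture an aspect in plain Java, without Spring, is a JDK dynamic proxy: the cross-cutting concern (logging below, standing in for transactions or security) is wrapped around a component that knows nothing about it. The service and aspect names are invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A hand-rolled "aspect" using a JDK dynamic proxy: the cross-cutting
// concern wraps the component without the component knowing about it.
interface AccountService {
    void transfer(String from, String to, double amount);
}

class PlainAccountService implements AccountService {
    public void transfer(String from, String to, double amount) {
        // core business logic only; no logging/transaction code here
    }
}

class LoggingAspect implements InvocationHandler {
    private final Object target;
    final StringBuilder log = new StringBuilder();

    LoggingAspect(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        log.append("before ").append(method.getName()).append("; ");
        Object result = method.invoke(target, args);   // proceed to the target
        log.append("after ").append(method.getName());
        return result;
    }
}

class AopDemo {
    public static void main(String[] args) {
        LoggingAspect aspect = new LoggingAspect(new PlainAccountService());
        AccountService proxied = (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                aspect);
        proxied.transfer("A", "B", 100.0);
        System.out.println(aspect.log); // before transfer; after transfer
    }
}
```

Spring's AOP support builds on exactly this kind of proxying: the "blanket" is applied at configuration time, so PlainAccountService stays free of system-service code.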

Spring Alternatives
               Before we use Spring for our projects, we should cover what else is out there in the world of J2EE frameworks.
 a) Comparing Spring to EJB:
The decision to choose one over the other is not one to be taken lightly. Also, you do not necessarily have to choose only Spring or EJB; Spring can be used to support existing EJBs as well. EJB is a standard which has:
  • Wide industry support: a whole host of vendors support this technology, including industry heavyweights Sun, IBM, and Oracle. This is comforting to many companies because they feel that by selecting EJB as their J2EE framework, they are going with a safe choice.
  • Wide adoption: EJB as a technology is deployed in thousands of companies around the world. As a result, EJB is in the tool bag of most J2EE developers, so a developer who knows EJB is more likely to find a job. At the same time, companies know that if they adopt EJB, there is an abundance of developers capable of developing their applications.
  • Toolability: the EJB specification is a fixed target, making it easy for vendors to produce tools that help developers create EJB applications more quickly and easily.


 b) Struts 
                 An important difference is how each handles form input. Typically, when a user submits a web form, the incoming data maps to an object in your application. To handle form submissions, Struts requires you to have ActionForm classes for the incoming parameters, which means you need to create a class solely for mapping form submissions to your domain objects. Spring allows you to map form submissions directly to an object without the need for an intermediary, leading to easier maintenance. Struts comes with built-in support for declarative form validation: you can define rules for validating incoming form data in XML, keeping validation logic out of your code, where it can be cumbersome and messy. Spring does not come with declarative validation. If you already have an investment in Struts, or you just prefer it as your web framework, Spring has a package devoted to integrating Struts with Spring.

  c) Persistence Frameworks
There really isn’t a direct comparison between Spring and any persistence framework. As mentioned earlier, Spring does not contain any built-in persistence framework; its ORM module integrates other good frameworks with the rest of Spring. Spring provides integration points for Hibernate, JDO, OJB, and iBATIS. Spring also provides a very rich framework for writing JDBC. JDBC requires a lot of boilerplate code; Spring’s JDBC module handles this boilerplate, allowing you to focus on writing queries and handling the results.

Saturday, January 9, 2010

More on Java Portlets

-Satya Swathi
Java Portal
                          There are three logical components to consider when developing to the Java Portlet Specification: the portal, the portlet, and the portlet container. A portal is an application which aggregates portlet applications together in a presentable format. Beyond merely being a presentation layer, a portal typically allows users to customize their presentation, including which portlet applications to display. A portal can also provide a convenient single sign-on mechanism for users.

Portlet

                         A portlet is an individual web component that is made accessible to users via a portal interface. Typically, a single portlet generates only a fragment of the markup that a user finds from his/her browser. Users issue requests against a portlet from the portal page. The portal in turn forwards those requests to a portlet container, which manages the lifecycle of a portlet.


Portlet Containers
A portlet container sits between a portal and its portlets. A portlet container provides the run-time environment to portlets, much in the same way a servlet container provides the environment for servlets. The portlet container manages portlets by invoking their lifecycle methods. The container forwards requests to the appropriate portlet. When a portlet generates a response, the portlet container sends it to the portal to be rendered to the user. It should be noted that the distinction between a portal and portlet container is a logical one. There may be one physical component.

As we begin to get into some of the concepts behind the Java Portlet Specification, you will see many similarities to J2EE servlets. This is understandable when you realize that portlet applications are essentially extended web applications, a layer on top of servlets if you will. As a result, portlet developers have access to not only portlet-specific objects, but also the underlying servlet constructs.


The Portlet Lifecycle
                              As stated earlier, it is the job of the portlet container to manage a portlet's lifecycle. Each portlet exposes four lifecycle methods.

 The init(PortletConfig config) method is called once, immediately after a new portlet instance is created. It can be used to perform startup tasks and is akin to a servlet's init method. PortletConfig represents read-only configuration data, specified in a portlet's descriptor file, portlet.xml (more on this file later). For example, PortletConfig provides access to initialization parameters.

The processAction(ActionRequest request, ActionResponse response) method is called in response to a user action such as clicking a hyperlink or submitting a form. In this method, a portlet may invoke business logic components, such as JavaBeans, to accomplish its goal. The ActionRequest and ActionResponse interfaces are subinterfaces of PortletRequest and PortletResponse, respectively. In processAction, a portlet may modify its own state as well as persistent information about a portlet.

The render(RenderRequest request, RenderResponse response) method follows processAction in the chain of lifecycle methods. Render generates the markup that will be made accessible to the portal user. The RenderRequest and RenderResponse interfaces, also subinterfaces of PortletRequest and PortletResponse respectively, are available during the rendering of a portlet. The way in which the render method generates output may depend on the portlet's current state.
The destroy() method is the last lifecycle method, called just before a portlet is taken out of service, and provides a last chance to free up portlet resources.
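A drastically simplified sketch of this lifecycle, with invented stand-in types rather than the real javax.portlet interfaces, might look like the following. The point is that the container, not the portlet, decides when each method runs:

```java
// A much-simplified sketch of the four lifecycle calls. The real types live
// in javax.portlet; everything below is a stand-in to show the call order
// a portlet container follows.
interface MiniPortlet {
    void init(String config);           // once, after creation
    void processAction(String action);  // on user action
    String render();                    // produce a markup fragment
    void destroy();                     // before removal
}

class HelloPortlet implements MiniPortlet {
    private String greeting;
    private String lastAction = "none";

    public void init(String config) { greeting = config; }
    public void processAction(String action) { lastAction = action; }
    public String render() { return "<p>" + greeting + " (" + lastAction + ")</p>"; }
    public void destroy() { greeting = null; }
}

class MiniContainer {
    // The container drives the lifecycle; the portlet only reacts.
    static String serve(MiniPortlet portlet, String action) {
        portlet.init("Hello");
        portlet.processAction(action);
        String markup = portlet.render();
        portlet.destroy();
        return markup;
    }

    public static void main(String[] args) {
        System.out.println(serve(new HelloPortlet(), "submit"));
        // prints <p>Hello (submit)</p>
    }
}
```

In a real container, init runs once while processAction/render repeat for many requests, and render returns only a fragment that the portal assembles into the full page.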

Portlet Mode and Window State
                         A portlet container manages a portlet's lifecycle but it also controls two pieces of information that represent a portlet's state, portlet mode and window state.
A portlet's mode determines what actions should be performed on a portlet. View, Edit, and Help are the three standard modes; optional custom modes can be specified as well.
                    The class GenericPortlet, found in the portlet.jar archive file, is a convenience class that implements the render method and defines three empty methods, doView, doEdit, and doHelp. Your subclass of GenericPortlet can implement any of these methods that you desire. When the container invokes the render method, render will call one of these methods, based upon portlet mode. For example, doEdit might prepare an HTML form to customize a portlet. doView will help generate display markup, and doHelp might build a portlet's help screen.
                      Window State determines how much content should appear in a portlet. There are three standard window states defined in the specification: Normal, Minimized, and Maximized. Normal will display the portlet's data in the amount of window space defined by the portal application, Maximized will present only that portlet in the user's window, and Minimized may perhaps display a single line of text or nothing at all.
                      Window state and portlet mode are programmatically accessible throughout the life of a portlet application. They can be read from any portlet API method, such as render. A portlet's processAction method has the ability to modify their values.
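The mode-based dispatch that GenericPortlet performs can be sketched in miniature, again with invented stand-ins rather than the javax.portlet API:

```java
// Sketch of how GenericPortlet's render dispatches on portlet mode.
// These classes are illustrative stand-ins, not the javax.portlet API.
enum Mode { VIEW, EDIT, HELP }

abstract class ModeDispatchingPortlet {
    // render looks at the current mode and forwards to the matching hook,
    // mirroring what GenericPortlet does with doView/doEdit/doHelp.
    String render(Mode mode) {
        switch (mode) {
            case EDIT: return doEdit();
            case HELP: return doHelp();
            default:   return doView();
        }
    }
    // Empty defaults; subclasses override only what they need.
    String doView() { return ""; }
    String doEdit() { return ""; }
    String doHelp() { return ""; }
}

class NewsPortlet extends ModeDispatchingPortlet {
    String doView() { return "headlines"; }
    String doEdit() { return "preferences form"; }
}

class ModeDemo {
    public static void main(String[] args) {
        NewsPortlet p = new NewsPortlet();
        System.out.println(p.render(Mode.VIEW)); // headlines
        System.out.println(p.render(Mode.EDIT)); // preferences form
        System.out.println(p.render(Mode.HELP)); // (empty default)
    }
}
```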

PortletPreferences and PortletSession
                            The specification defines a few different methods that involve storing user information, either permanently or for the length of a client's session. The two most important are PortletPreferences and PortletSession.
                           PortletPreferences is an object that can be used to store persistent data for a portlet user. PortletPreferences stores pairs of names and values that are retrievable during the render phase through the getValue method. In processAction, values can be set and saved through the setValue and store methods, respectively. Optionally, you may include a PreferencesValidator object to check values prior to persisting preferences (via its validate method). Default preference values may be specified in a portlet's descriptor file.
In servlet programming, HttpSession lets you store session-specific data. Similarly, the Java Portlet Specification defines the PortletSession interface for storing information within a user's session. PortletSession identifies two scopes, PORTLET_SCOPE and APPLICATION_SCOPE. In portlet scope, you can store data specific to a single portlet instance within a user's session; application scope can store data across all of a user's portlets within the same session.

Few More Features
             Below is a list of some of the additional features defined by the Java Portlet Specification.
  • An include mechanism for incorporating servlets and JSP pages into your portlets. A PortletRequestDispatcher accomplishes this, much the same way a RequestDispatcher would in the servlet world. This allows your portlet methods to act as controllers, redirecting work to specified servlets and JSPs.
  • A way to create non-standard portlet extensions, such as custom portlet modes. The PortalContext object can be used to query information about a portal vendor's supported extensions. Portlet developers can then decide whether they would like to take advantage of any of those non-standard features.
  • A taglib is provided for use in a portlet's JSP pages. These tags provide a way to construct the URLs a portlet needs to refer back to itself. The taglib also provides a way to include all the necessary portlet-specific classes a JSP will need. 
  • The ability to manage portlet security, like designating a portlet to run only over HTTPS. 
  • A method for accessing ResourceBundles for portlet localization.
  • The option of declaratively specifying an expiration cache for a portlet.
                                            Also, when discussing the Java Portlet Specification it is relevant to mention its relationship to another standard, the Web Services for Remote Portlets, or WSRP. WSRP is a standard for accessing content generated by remote portlets, possibly portlets using a technology other than J2EE. Though separate specifications, the Java Portlet Specification and WSRP were each developed with the other's standards in mind. As a result, the two specifications share many of the same concepts, like portlet modes and window states, allowing a portal to use both quite effectively.

Packaging a Java Portlet
                            The Java Portlet Specification allows a portlet or portlets to be packaged as a .war file for deployment to a J2EE application server. Just like a .war file used to deploy a typical J2EE web application, it contains a WEB-INF/web.xml file to configure the application context. However, with a portlet application, the WEB-INF folder must also contain a portlet.xml file. The portlet.xml file is a descriptor file, containing configuration details about all bundled portlets in the .war file.
                                      The following listing shows a simple example of a portlet.xml file. Note how many of the previously described constructs (portlet mode, preferences, etc.) are defined in this file:

<portlet-app>

  <portlet>
    <portlet-name>MyPortlet</portlet-name>
    <portlet-class>com.abc.portlet.MyPortlet</portlet-class>
    <!-- Init param, available in the portlet's PortletConfig instance. -->
    <init-param>
      <name>view-to-present</name>
      <value>/portlet/MyPortlet/startup_view.jsp</value>
    </init-param>
    <!-- Default expiration for the portlet cache (5 minutes). -->
    <expiration-cache>300</expiration-cache>
    <supports>
      <!-- The portlet supports HTML markup. -->
      <mime-type>text/html</mime-type>
      <!-- MyPortlet supports the view and edit modes. -->
      <portlet-mode>VIEW</portlet-mode>
      <portlet-mode>EDIT</portlet-mode>
    </supports>
    <resource-bundle>com.abc.portlet.MyResourceBundle</resource-bundle>
    <portlet-preferences>
      <!-- PortletPreferences name/value pairs. -->
      <preference>
        <name>Country1</name>
        <value>USA</value>
      </preference>
      <preference>
        <name>Country2</name>
        <value>Japan</value>
      </preference>
      <!-- A PreferencesValidator will check any preferences set. -->
      <preferences-validator>com.abc.portlet.validate.CountryValidator</preferences-validator>
    </portlet-preferences>
  </portlet>
</portlet-app>

Application Server Vs Portal Server
Application server:  A system that provides the execution environment that is at the core of network computing or web-based architectures, providing a full set of services.
Portal server: an application server running portal software or a portal application.

Open source portals
             GridSphere, Pluto, uPortal, Jetspeed, jPorta, Cocoon, oPortal, CHEF, Sakai, Liferay, eXo, Redhat Portal Server, Gluecode Portal Foundation Server, Lutece

AJAX

-Sudhamayi
                      Before actually going into AJAX, let’s see what Rich Internet Applications provide us with. Rich Internet Applications (RIAs) are web applications that have most of the characteristics of desktop applications, like:
  • The program responds intuitively and quickly
  • The program gives the user meaningful feedback instantly
  • Things happen naturally; a simple event like a mouseover can trigger behavior
With this knowledge of RIAs, let’s see what Ajax is.

What is AJAX?
                         AJAX is one of the RIA frameworks. AJAX stands for Asynchronous JavaScript and XML. It is a group of interrelated web development techniques used on the client side to create interactive web applications, and a technique for making the user interfaces of web applications more responsive and interactive.
                                 Ajax is not a technology in itself, but a term that describes a "new" approach to using a number of existing technologies together, including HTML or XHTML, Cascading Style Sheets, JavaScript, the Document Object Model, XML, XSLT, and the XMLHttpRequest object, the combination of which makes the application faster and more responsive.

Why do we need AJAX?
                               With AJAX the user experiences intuitive and natural interaction. No clicking is required; a simple mouse movement can be a sufficient event trigger. A "partial screen update" replaces the "click, wait, and refresh" user interaction model: only the user interface elements with new information are updated, while the rest remain undisturbed. AJAX is data-driven as opposed to page-driven; the UI is handled in the client while the data is provided by the server.
                            Asynchronous communication replaces the synchronous request/response model. A user can continue to use the application while the client program requests information from the server in the background. And since most of the client-side code is written in JavaScript, it is easy to follow.

Comparison with Classic Web application
 As seen in the picture, the communication with the server is asynchronous. Meaning, if a request is sent to the server to perform some select operation on the database, the user can still continue filling in the rest of his form. Meanwhile the server fetches the data from the backend database server and updates the frontend display; the user need not wait for all the data to be fetched. Similarly, if a form has many fields, we need not wait until all the fields are filled to submit the form for validation: immediately after a field is filled, its value can be sent to the server and validated.


Technologies used in AJAX
  • Javascript:
    • Loosely typed scripting language
    • JavaScript function is called when an event in a page occurs
    • Glue for the whole AJAX operation
  • DOM
    •  API for accessing and manipulating structured documents
    • Represents the structure of XML and HTML documents
  • CSS 
    • Allows for a clear separation of the presentation style from the content.May be changed programmatically by Javascript.
  • XMLHttpRequest
    • JavaScript object that performs asynchronous interaction with the server
XMLHttpRequest object
  • XMLHttpRequest is a JavaScript object
  • It is supported by modern browsers like Mozilla, Firefox, Safari, and Opera
  • It communicates with a server via standard HTTP GET/POST
  • It works in the background to perform asynchronous communication with the backend server, without interrupting the user's operation
Basic flow in an Ajax-based application
  • A client event occurs (like a mouse click or an onchange event)
  • An XMLHttpRequest object is created
  • The XMLHttpRequest object is configured
  • The XMLHttpRequest object makes an asynchronous request to the server-side component
  • The server-side component returns an XML document containing the result
  • The XMLHttpRequest object calls the callback() function and processes the result
  • The HTML DOM is updated and the intended output is displayed
                                     For example, on a login page the user enters a user name in a text box. The value is then sent to the server (in turn, to a servlet) and checked for existence in the database. If the value exists, the password field is enabled; otherwise an alert message is displayed saying that the user does not exist. This is the simplest form of an Ajax-based application.
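The server-side half of that check can be sketched with the JDK's built-in HttpServer, so it runs without a servlet container. The /check endpoint, the in-memory user list, and the plain "true"/"false" response are all invented for illustration; a browser would call this endpoint via XMLHttpRequest and enable the password field when the response is "true":

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Set;

// The server-side half of the username check. A real application would
// query a database; here a fixed set stands in for it.
class UserCheckServer {
    static final Set<String> USERS = Set.of("alice", "bob");

    // Pure check logic, kept separate from HTTP plumbing so it is testable.
    static String check(String query) {
        // query looks like "user=alice"
        String name = query == null ? "" : query.replaceFirst("^user=", "");
        return Boolean.toString(USERS.contains(name));
    }

    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/check", exchange -> {
            byte[] body = check(exchange.getRequestURI().getQuery())
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        System.out.println("listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

Because the response is a tiny text fragment rather than a full page, the browser can update just the relevant part of the form — the partial-screen-update model described above.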

PROS
  • Most viable Rich Internet Application technology so far
  • Tremendous industry momentum
  • Several toolkits and frameworks are emerging, e.g. SPRY, DWR, ThinWire (an open-source Ajax/RIA framework), Taconite, Buffalo, Salto, Tigermouse (an Ajax framework for PHP), Zaxas, etc.
  • No need to download code and no plug-in required
  • Easy to learn
CONS
  • Browser incompatibilities remain
  • JavaScript is hard to maintain and debug
Usage cases of Ajax
  • Real-time server-side validation of form input data: removes the need to duplicate validation logic on the client side (for responsiveness) and on the server side (for security and other reasons)
  • Auto-completion: an email address, name, or city name may be auto-completed as the user types
  • Master-detail operation: based on a user selection, more detailed information can be fetched and displayed
  • Advanced GUI widgets and controls: controls such as tree controls, menus, and progress bars may be provided that do not require page refreshes
  • Refreshing data: HTML pages may poll the server for up-to-date data such as scores, stock quotes, weather, or application-specific data
Real-life examples of AJAX applications

Friday, January 8, 2010

Grid Computing

-Dhanapathi
                      Data is growing exponentially, at a rate of approximately 45 GB per person, and it is doubling every five years. Data centres in the US consume 100 billion kWh of electricity per year. Finding better ways to store and manage data is not enough; we need greener ways.

                   Data centres are power hungry. We need solutions that ensure less power usage, less cooling, less water usage, and less management space — more efficient, more flexible, and more environmentally friendly solutions, i.e., ones that consume fewer resources.

                   The solution to the problem is Grid Computing. Grid computing is the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations. Server virtualization and clustering allow applications to draw on a shared pool of IT resources that can be distributed and redistributed.

What is Grid Computing?
               Grid Computing refers to the automated sharing and coordination of the collective processing power of many widely scattered, robust computers that are not normally centrally controlled, and that are subject to open standards.

              Grid Computing allows the virtualization of distributed computing and data resources, such as processing, network bandwidth, and storage capacity, to provide a single system image, granting users and applications access to vast IT capabilities.


How does it work?
                     The user enters the Grid through a software interface. To use the Grid, the user must have proper authentication and authorization privileges, so the system performs security validations. Once the system finds that the user is authenticated, it passes the user's request to the Resource Broker. The Resource Broker checks the Information Service's catalogue for a resource that has the ability to perform the job requested by the user, and then directs the job to the corresponding resource.
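That broker step can be sketched as a toy in Java. The registry standing in for the Information Service, and the CPU-count matching rule, are invented for illustration — real brokers match on many more attributes (queues, policies, data locality):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy version of the broker step: an "information service" (here just a
// map from resource name to its CPU count) is consulted, and the job goes
// to the first resource able to run it.
class ResourceBroker {
    private final Map<String, Integer> cpusByResource = new LinkedHashMap<>();

    void register(String resource, int cpus) {
        cpusByResource.put(resource, cpus);
    }

    // Returns the resource chosen for a job needing `cpusNeeded`, or null
    // when no registered resource can satisfy the request.
    String dispatch(int cpusNeeded) {
        for (Map.Entry<String, Integer> e : cpusByResource.entrySet()) {
            if (e.getValue() >= cpusNeeded) return e.getKey();
        }
        return null;
    }
}

class BrokerDemo {
    public static void main(String[] args) {
        ResourceBroker broker = new ResourceBroker();
        broker.register("cluster-a", 8);
        broker.register("cluster-b", 64);
        System.out.println(broker.dispatch(32)); // cluster-b
        System.out.println(broker.dispatch(4));  // cluster-a
    }
}
```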

    Characteristics
    • Distributed System: several computers are connected through a network infrastructure.
    • High Security: strict authentication and encryption are needed.
    • System Management: sophisticated management is needed to keep the system running, monitor it, correct failures and so on.
    • Site Autonomy: the autonomy of participating sites must be respected; each site retains control over its own resources and local policies.
    CPU Scavenging
                                   CPU-scavenging, cycle-scavenging, cycle stealing or shared computing creates a “grid” from the unused resources in a network of participants (whether worldwide or internal to an organization). Typically this technique uses desktop computer instruction cycles that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices. Volunteer computing projects use the CPU scavenging model almost exclusively.
                                   In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.

    Key Problems
    • Security: how to authenticate and authorize users in a decentralized way, and how to ensure the safety of data in transit.
    • Resource Management: how to manage resources without a central entity.
    • Data Management: how to replicate data to other sites, and how to ensure fast transfer, since the data travels over the Internet.
    • Information Services: how to retrieve data from a particular site, and how to find the most suitable site.
    Grid Computing Vs Cloud Computing
          Grid computing is where more than one computer coordinates to solve a problem together. It is often used for problems involving a lot of number crunching, which are easily parallelisable.

                Cloud computing is where an application doesn't access resources it requires directly, rather it accesses them through something like a service. So instead of  talking to a specific hard drive for storage, and a specific CPU for computation, etc. it talks to some service that provides these resources. The service then maps any requests for resources to its physical resources, in order to provide for the application. Usually the service has access to a large amount of physical resources, and can dynamically allocate them as they are needed.        

                In this way, if an application requires only a small amount of some resource, say computation, then the service only allocates a small amount, say on a single physical CPU (that may be shared with some other application using the service). If the application requires a large amount of some resource, then the service allocates that large amount, say a grid of CPUs. The application is relatively oblivious to this, and all the complex handling and coordination is performed by the service, not the application. In this way the application can scale well.
                                                    For example a web site written "on the cloud" may share a server with many other web sites while it has a low amount of traffic, but may be moved to its own dedicated server, or grid of servers, if it ever has massive amounts of traffic. This is all handled by the cloud service, so the application shouldn't have to be modified drastically to cope. A cloud would usually use a grid. A grid is not necessarily a cloud or part of a cloud.
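The allocation logic described above can be sketched as a toy function. The thresholds and the class name are invented for illustration; real providers make this decision inside their own schedulers.

```java
// Minimal sketch of elastic allocation: a cloud service maps an
// application's demand to shared or dedicated capacity, as described above.
public class ElasticAllocator {
    // Returns how many dedicated servers to give an app for a given demand
    // (requests per second). Low-traffic apps stay on shared capacity (0).
    static int serversFor(int requestsPerSecond) {
        if (requestsPerSecond <= 100) return 0;  // shared server suffices
        return (requestsPerSecond + 999) / 1000; // ceil(demand / 1000 rps/server)
    }

    public static void main(String[] args) {
        System.out.println(serversFor(50));   // low traffic: shared
        System.out.println(serversFor(2500)); // burst: dedicated grid
    }
}
```

The application never calls this itself; the point is that the service owns the mapping, so the application need not change as traffic grows.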
      
    Conclusion
                                  Grid Computing is a strategy to save money and become greener at the same time. It has become a mainstay of modern engineering design and development. Grid computing is used in areas such as designing cars, planes, computers, chips and new drugs, oil and gas field exploration, human genome study, climate change prediction, and the calculation of risk in financial investments. Using grid computing as part of cloud computing increases system capability and performance.

    What is Agile

    -Sudhamayi
                                 Of late I came across the phrase “AGILE”, ‘a software development methodology’, while going through a technical post. I immediately searched for the word “AGILE” on answers.com, which gave me the answer “Characterized by quickness, lightness, and ease of movement; nimble”. I was a bit confused about how on earth quickness and ease of movement come into the picture in software development. My questions started with what quickness means in software development.
    There came my answer from the panacea “Google”.

                                      Quickness in the “Agile” software development methodology is achieved by slicing a large piece of work into smaller slices and allocating a short amount of time to each, ensuring that at the end of each time interval we have working software with minimal bugs that meets the functional and technical requirements decided upon for that slice. With this introduction we can now dive a bit deeper into Agile methodology and how it varies from traditional approaches.

    Introduction
                    Agile is a group of software development methodologies based on an iterative model of development. Here, the client requirements are not frozen as in the traditional model (e.g., the waterfall model); instead, the requirements evolve continuously. The stress in Agile methodology is on delivering a high-quality software product at the end of each iteration. Even though the product may not meet the complete set of requirements, it should meet all the requirements decided upon for that iteration or interval of time.
                           The basic idea behind Agile is breaking tasks into small increments with minimal planning. Each iteration may last from one to four weeks and involves a team working through a full software development cycle: planning, requirements analysis, design, coding, unit testing and acceptance testing, after which a working product is demonstrated to stakeholders. This helps minimize overall risk and lets the project adapt to changes quickly. Documentation is produced as the stakeholders require. An iteration may not add enough functionality to warrant a market release, but the goal is to have a releasable product (with minimal bugs) at the end of each iteration. Multiple iterations may be required to release a product or new features.
    Team composition in an agile project is usually cross-functional and self-organizing, without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for the tasks that deliver the functionality an iteration requires, and decide individually how to meet the iteration’s requirements.
                                  Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. When a team works in different locations, members maintain daily contact through videoconferencing, voice, e-mail, etc. Team size is typically between five and nine.
                                 Each agile team contains a customer representative. This person is appointed by the stakeholders to act on their behalf and makes a personal commitment to being available to developers to answer mid-iteration problem-domain questions. At the end of each iteration, the stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment and ensuring alignment with customer needs and company goals. This avoids last-minute confusion about the requirements the customer expects from the development team, as well as the costly rework caused by requirements that are not completely understood. Hence customer satisfaction is higher.

    Success of Agile methodology of software development is largely credited to the team-work, collaboration and the skill of the individuals.

    Some of the principles behind the Agile Manifesto are:
    • Customer satisfaction by rapid, continuous delivery of useful software
    • Working software is delivered frequently (weeks rather than months)
    • Working software is the principal measure of progress
    • Even late changes in requirements are welcomed
    • Close, daily cooperation between business people and developers
    • Face-to-face conversation is the best form of communication (co-location)
    • Projects are built around motivated individuals, who should be trusted
    • Continuous attention to technical excellence and good design
    • Simplicity
    • Self-organizing teams
    • Regular adaptation to changing circumstances
    Comparison with other methods
            
                 A. Traditional methods (like waterfall) fall under the predictive category of development methodologies, meaning that at any point in time we can report the features and tasks planned, and predict them for a future date as well. But changing direction under a traditional method may mean throwing out the code developed up to that point, so adapting to changing requirements is difficult. Hence the name “Predictive”.
                               In contrast, Agile operates in an adaptive mode, meaning change is welcomed at any point in time. When the needs of a project change, an adaptive team changes as well. But an Agile team has difficulty reporting exactly what will happen on a future date. Hence the name “Adaptive”.
            
               B. The waterfall model is the most structured of the methods, stepping through requirements-capture, analysis, design, coding, and testing in a strict, pre-planned sequence. Progress is generally measured in terms of deliverable artifacts: requirement specifications, design documents, test plans, code reviews.
          
              C.  The main problem with the waterfall model is the inflexible division of a project into separate stages: commitments are made early on, and it is difficult to react to changes in requirements. Iterations are expensive. This means that the waterfall model is likely to be unsuitable if requirements are not well understood or are likely to change in the course of the project.
               Agile methods, in contrast, produce completely developed and tested features (but a very small subset of the whole) every few weeks. The emphasis is on obtaining the smallest workable piece of functionality to deliver business value early, and continually improving it and adding further functionality throughout the life of the project.
               
             D. If a project being delivered under the waterfall method is cancelled at any point before the end, there is nothing to show for it beyond a huge bill for resources.
                          With the agile method, being cancelled at any point will still leave the customer with some worthwhile code that has likely already been put into live operation.

    Selecting Agile methodology
             Factors influencing the selection of Agile as development methodology:
    • Low criticality
    • Senior developers
    • Requirements change often
    • Small number of developers
    • Culture that thrives on chaos
    Well-known Agile software development methods
    • Agile Modeling
    • Agile Unified Process (AUP)
    • Agile Data Method
    • DSDM
    • Essential Unified Process (EssUP)
    • Extreme programming (XP)
    • Feature Driven Development (FDD)
    • Getting Real
    • Open Unified Process (OpenUP)
    • Scrum
    • Lean software development
                             Agile software development depends on some special characteristics possessed only by software, such as object technologies and the ability to automate testing. However, related techniques have been created for developing non-software products, such as semiconductors, motor vehicles, or chemicals.

                                                  I conclude this article by stating that “Agile” may not be a solution for all the projects but still it’s a good option considering the duration, risk, effort, team size, skill of the developers and the requirements agility.

    Sunday, January 3, 2010

    Cloud Computing

    -CNR
    Every organization needs each employee to have the right hardware and the right licensed software to do their job. Providing all of this is sometimes tedious. Instead of installing a suite of software on each computer, you'd only have to load one application. That application would allow workers to log into a Web-based service which hosts all the programs the user would need for his or her job. Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs. It's called cloud computing, and it could change the entire computer industry.

                             In a cloud computing system, local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that makes up the cloud handles them instead. Hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a Web browser; the cloud's network takes care of the rest.

                             The Cloud Computing service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.


    Cloud Computing Architecture
                      A cloud computing system is divided into two sections: the front end and the back end. They connect to each other through a network, usually the Internet.

    The front end includes the client's computer (or the network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.

    On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server. A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other.

                    A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.

    Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories:
    • Infrastructure-as-a-Service (IaaS), 
    • Platform-as-a-Service (PaaS) 
    • Software-as-a-Service (SaaS). 
    Infrastructure-as-a-Service (e.g., Amazon) provides virtual server instances with unique IP addresses and blocks of storage on demand. Customers use the provider's application program interface (API) to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as it needs, and to bring more online as soon as required.

    Platform-as-a-Service (e.g., Google App Engine) in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Developers should be aware that there are currently no standards for interoperability or data portability in the cloud; some providers will not allow software created by their customers to be moved off the provider's platform.

    In the Software-as-a-service cloud model, the vendor supplies the hardware infrastructure, the software product and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.

    Saturday, January 2, 2010

    Understanding Web Services

    -SatyaPrakash
    Web Service is defined by the W3C as a software system designed to make different computers interact. The computers may have different operating systems and different internal structures, but the logic of the Web Service makes it possible for them to interact and perform operations. This is the most general definition. But in practice,

    Web Services usually make use of following:
         • SOAP as messaging envelope format
         • WSDL as service description format
         • UDDI for defining metadata that is used for service discovery
         • WS-Security that defines standards for security issues
         • WS-Reliable Messaging as the protocol for reliable messaging

    These are standards proposed by the Organization for the Advancement of Structured Information Standards (OASIS) and the W3C. The Web Services Interoperability Organization (WS-I) also creates applications and tools to enhance Web Services interoperability. Web Service development based on these standards is supported by different implementation platforms such as .Net and J2EE.
                             In short, Web Services are a combination of a discovery system (UDDI), a description system (WSDL) and the Web Services themselves.
    The discovery system (UDDI) is used by a hosting business to enter information describing itself, publishing a taxonomy along with descriptions (WSDL) of the services it provides. This allows another business to find the service by searching with some criteria. It also supports category-based search options.
    After an appropriate service is found, a description (WSDL) is demanded. This description contains:
           • Operations included in the service
           • Input and output messages for each operation
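In WSDL terms, the operation and its messages might look like the fragment below. This is a hand-written sketch for a hypothetical echo service, not taken from any real WSDL document:

```xml
<!-- Hypothetical WSDL fragment: one operation with its input/output messages -->
<message name="EchoRequest">
  <part name="text" type="xsd:string"/>
</message>
<message name="EchoResponse">
  <part name="result" type="xsd:string"/>
</message>
<portType name="EchoPortType">
  <operation name="echo">
    <input message="tns:EchoRequest"/>
    <output message="tns:EchoResponse"/>
  </operation>
</portType>
```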

    The main benefits of the Web Services are:
           • Decoupling of service interfaces from implementations and platform considerations
           • The enablement of dynamic service binding
           • Increase in cross-language, cross platform interoperability

    A Minimalist Model of A Web Service
    In the core of Web Services, there are three basic elements:

    The Service. This is software that is able to process an XML document it receives through some combination of transport and application protocols. The inner structure of the service is irrelevant; it is only necessary that it responds to a specific format of XML documents.

    For the service to interpret the information in XML documents there must be a shared description of an XML request-response format. This is described in an XML Schema. WSDL is commonly used but not necessary.

    The address of the service using a certain protocol. This is necessary to access the service. It can be an address for the TCP or HTTP protocol, but need not be.

    The fourth element is an envelope for the XML document. It is optional but very helpful, for example for carrying additional information such as security and routing. SOAP envelopes are the commonly used standard.
    SOAP has two parts: a header that carries system information, and a body that contains the XML document to be processed by the Web Service. Looking at this simplistic model of the Web Services technology, it is clear that the technology is very simple at its core.
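A minimal SOAP 1.1 envelope illustrating this header/body split (the payload element and its namespace are invented for illustration):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- system information, e.g. security tokens or routing hints -->
  </soap:Header>
  <soap:Body>
    <!-- the XML document to be processed by the Web Service -->
    <echo xmlns="urn:example">
      <text>hello</text>
    </echo>
  </soap:Body>
</soap:Envelope>
```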

    The Roles
    In Web Services, there are three roles and their operations.
    A service registry keeps track of recorded services.
    A service provider creates a Web Service and its service definition, and publishes the service with a service registry.
    A service requester finds the service in the registry to get the WSDL service description and a service URL, then binds to the service and invokes it.
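The three roles can be sketched as a tiny in-memory registry. This is purely illustrative: in reality the registry would be UDDI and the published description a WSDL document, not a bare URL.

```java
import java.util.*;

// Toy sketch of the three Web Service roles described above: a registry
// that records services, a provider that publishes one, and a requester
// that finds it before binding and invoking.
public class RolesSketch {
    // service name -> endpoint URL (stands in for "WSDL description + URL")
    static final Map<String, String> registry = new HashMap<>();

    static void publish(String name, String url) { // service provider role
        registry.put(name, url);
    }

    static String find(String name) {              // service requester: find
        return registry.get(name);
    }

    public static void main(String[] args) {
        publish("weather", "http://example.com/weather");
        System.out.println(find("weather"));       // next step: bind & invoke
    }
}
```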

    Web Services programming stack
    To understand the operations of Web Services, let us look at the Web Services programming stack. This stack is a collection of standardized protocols and application programming interfaces (APIs) that lets individuals and applications locate and utilize Web Services. Layers in the stack correspond to operations performed by one of the three roles above. The first three layers are crucial to the system; the remaining layers are optional and are used as business needs require them.

    Below is a list of Web Services programming stack layers:
    1) Network layer is at the bottom of the stack. It holds the protocol used to transfer messages. The most commonly used such protocol is HTTP.

    2) Messaging layer comes after the network layer. It is the communication platform between Web Services and their clients. All operations (publishing, finding, binding, and invoking) are accomplished by sending messages. The most widespread message protocol is the XML-based SOAP.

    3) Service description layer. This is where service description rules are applied. WSDL is used for this purpose. Service providers describe their available Web Services to clients.

    4) Service publication layer. This layer is about a service provider making its WSDL document available. UDDI can be used, but it is not necessary; even sending an e-mail with your WSDL document attached counts as publishing.

    5) Service discovery layer. This layer is where a service requester gets someone's WSDL document. UDDI is a possible interface, but receiving an e-mail with a WSDL document also counts as discovery.

    There are also higher level layers that are yet to be commonly used and standardized.


    Common Misconceptions
    As Web Services gained popularity, several misunderstandings about the concept appeared. We focus on the main misconceptions about Web Services below.
    1) Web Services are message oriented: There are two main middleware approaches in terms of interaction models. One is message-oriented middleware (MOM), which includes Web Services. The other is object-oriented middleware that is very different in nature. Applications that are based on CORBA or Enterprise Java Beans belong to this category. The interaction models determine most things about a system.

    2) Web Services are not distributed objects: Web Services are very similar to distributed objects in the sense that they both have a certain description language, they incorporate well-defined network interactions, and they bear similar mechanisms for registering and discovering available components. However, Web Services are not distributed objects.

    In a distributed object system, every object, with the exception of singleton objects, goes through the following life cycle:
         • On demand, a factory creates the object instance
         • The consumer who wanted the creation performs various operations on the object instance
         • Sometime later, either the consumer releases the instance or the runtime system garbage collects it

    3) Web Services are not RPC for the Internet: A remote procedure call is made through a network abstraction for remotely executing procedure calls in a certain programming language. This low-level operation requires the caller to identify the remote procedure, to decide what state must be provided to the procedure at invocation time, and what form to use to present the results to the caller at completion time.

    4) Web Services do not need HTTP: Web Services do not depend on the transport protocol that is used. SOAP messages may well be transported through plain TCP or UDP, or even SMTP by e-mail. There are also alternatives such as MQSeries or Java messaging service (JMS) to name a few. These technologies can also be combined. For example a SOAP package that comes to the system through HTTP may be automatically forwarded to a certain server through TCP or JMS. Web Services are not specifically designed for HTTP, though HTTP is most commonly used.

    5) Web Services do not need Web Servers: There was a discussion about whether to drop the “Web” from Web Services. Terms such as service-oriented architecture and service-oriented integration do not mention “Web”, because they are independent of Web servers. Early Web Services exploited Web servers' application-server functionality, but now there are several toolkits that let you develop and integrate Web Services without using Web server infrastructure.

    Advanced Portlet APIs

    -Rahul Gupta
    This section covers more advanced topics of the Portlet API, such as accessing user information, the calling portal's context, localization, and caching.

    User information
                    Some portlets may want to personalize the produced markup depending on personal user information. Such information is called user profile information and contains details such as a user's name, address, email, and so on. For example, employing a user's profile, a weather portlet can present the weather for the city the current user lives in.
                       The Portlet API supports access to user profile information via the request attribute USER_INFO as a map. The Portlet Specification recommends using the P3P (Platform for Privacy Preferences) user attribute names. A portlet can define in the deployment descriptor which user information attributes it would like to access via this map in the request. The portal can then either map these attributes at deployment time to the attributes in its user datastore or ignore them if they are not supported.
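A sketch of reading such attributes follows. In a real portlet the map comes from `request.getAttribute(PortletRequest.USER_INFO)`; here a plain `Map` stands in so the sketch runs on its own, and the greeting method is invented for illustration. The attribute names shown are among the P3P names the specification recommends.

```java
import java.util.*;

// Sketch of reading P3P-style user attributes from the USER_INFO map.
// In a real portlet: Map info = (Map) request.getAttribute(PortletRequest.USER_INFO);
public class UserInfoSketch {
    static String greeting(Map<String, String> userInfo) {
        // P3P attribute names recommended by the Portlet Specification
        String given = userInfo.get("user.name.given");
        String city  = userInfo.get("user.home-info.postal.city");
        return "Hello " + given + " from " + city;
    }

    public static void main(String[] args) {
        Map<String, String> info = Map.of(
            "user.name.given", "Ada",
            "user.home-info.postal.city", "London");
        System.out.println(greeting(info));
    }
}
```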
                    The user information is always about the user triggering the current action and may differ from the J2EE principal, as the portlet could be called as a remote portlet. If the portlet is called as remote (a WSRP producer), the J2EE principal represents the calling WSRP consumer and not the user triggering the request.

    Portal Context
    To let portlets adapt to the portal that calls them, the Portlet API provides the PortalContext, which can be retrieved from the request. This portal context provides information such as the portal vendor, the portal version, and specific portal properties. The information allows the portlet to use specific vendor extensions when being called by a portal that supports these extensions and, when being called by other portals, to return to some simpler default behavior. As the portal context is attached to the request, it may change from request to request. This can be the case in the remote scenario, where one portlet (WSRP provider) may be called from different portals (WSRP consumers).

    Localization
                              The Portlet Specification provides localization on two levels: the deployment descriptor and the portlet. On the deployment descriptor level, all settings intended to be viewed or changed by the Web server administrator (portlet description, init parameters, display name, and so on) carry an xml:lang attribute, as in the Servlet 2.4 deployment descriptor. The xml:lang attribute allows the same tag to provide descriptions in different languages (e.g., a display name in English, German, and Japanese).
    On the portlet level, the specification sets a resource bundle class in the deployment descriptor that contains the portlet title's localized versions, the short title for graphically restricted devices, and keywords describing the portlets' functionalities. In addition to this information, the specification also recommends a notation for localizing the preference attribute display names, values, and descriptions. The portlet can access the resource bundle via the PortletContext's getResourceBundle() method.

    Caching
                   JSR 168 defines support for expiration-based caching, both declaratively and programmatically. In the deployment descriptor, the portlet response's expiration time can be defined per portlet. During runtime, the portlet can set an expiration time for the render response using the EXPIRATION_CACHE property.
    Any request targeted to this portlet automatically expires the cached content. This means that when a user clicks on this portlet's action or render link, the portlet gets called independently of the remaining cache expiration time.  
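Declaratively, the expiration time is set per portlet in portlet.xml via the expiration-cache element (the portlet name and value below are invented for illustration):

```xml
<portlet>
  <portlet-name>WeatherPortlet</portlet-name>
  <!-- cache the rendered markup for 300 seconds; 0 disables caching,
       -1 means the content never expires -->
  <expiration-cache>300</expiration-cache>
</portlet>
```

During runtime, the portlet can shorten or extend this by setting the EXPIRATION_CACHE property on the render response.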

    Extensibility
                        The Portlet Specification has several mechanisms that allow portal vendors to include their specific extensions. This section explains the most important mechanisms: vendor-specific properties, custom portlet modes, and custom window states. As explained in the "Portal Context" section, the portlet can obtain information about a portal's supported extensions via the portal context.
    Properties
                      Properties communicate vendor-specific settings between the portlet and portlet container, and between the portlet and the portal.
    These properties show up in several places in JSR 168. The portlet can read String properties with getProperty() from:
    ActionRequest, to receive properties that are action-request specific
    RenderRequest, to receive properties that are render-request specific
    PortalContext, to receive properties that are portal specific and do not change for different requests
    The portlet can write String properties with setProperty() to:
    ActionResponse, to set properties in response to an action request
    RenderResponse, to set properties in response to a render request

    Custom portlet modes and window states
                                    If a portlet application uses additional portlet modes or window states than the ones defined by the Portlet Specification, the portlet application can declare them in the deployment descriptor. At deployment time, the defined custom portlet modes and window states are either mapped to vendor-specific portlet modes and window states the current portal supports, or ignored. A portlet can use only portlet modes or window states supported by the portal running the portlet. The portlet can use the PortalContext's getSupportedPortletModes() and getSupportedWindowStates() methods to retrieve the portlet modes and window states the portal supports.
    The portlet can define custom portlet modes and window states in the deployment descriptor by using the custom-portlet-mode and custom-window-state tags.
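For example, a portlet application might declare a custom "config" mode and a custom window state in portlet.xml (the names here are illustrative, not defined by the specification):

```xml
<!-- Declaring a custom portlet mode and a custom window state -->
<custom-portlet-mode>
  <portlet-mode>config</portlet-mode>
</custom-portlet-mode>
<custom-window-state>
  <window-state>minimized_title</window-state>
</custom-window-state>
```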

    Packaging and deployment
                     A portlet application's resources, portlets, and deployment descriptors are packaged together in one Web application archive (war file). In contrast to servlet-only Web applications, portlet applications consist of two deployment descriptors: one to specify the Web application resources (web.xml) and one to specify the portlet resources (portlet.xml). All Web resources that are not portlets must be specified in the web.xml deployment descriptor. All portlets and portlet-related settings must be specified in an additional file called portlet.xml. There are three exceptions to this rule, which are all defined in the web.xml, as they are valid for the whole Web application:
    1. The portlet application description
    2. The portlet application name
    3. The portlet application security role mapping
                                    As a result of the two deployment descriptor files, portlet application deployment is a two-step process that deploys the Web application into the application server and the portlets into the portal server.
                                     A portlet.xml file always describes only one specific portlet application. To create a copy of a portlet application with slightly different settings, a new portlet application must be created.

    Figures 1 and 2 depict the portlet.xml file's complete schema definition. Figure 1 shows the settings that can be applied at the application level, and Figure 2 depicts the settings at the portlet level.