DOWNTIME ANALYZER: FULL REPORT

ABSTRACT
This project aims to develop a Downtime Analyzer. Downtime analysis ensures that a production facility operates at peak efficiency by identifying where, how and why production delays occur. By reporting scheduled and unscheduled events, as well as under-performing equipment, this module enables a complete analysis of the causes of production downtime. This provides the information needed to prioritize maintenance, order new equipment and improve operating procedures. The automatic capture of downtime events by the module frees operators from paperwork and removes the inaccuracies and inconsistencies of manual systems.
What it means:
Business managers can compare the downtime events of multiple plants and make knowledge-based decisions to increase productivity.
Plant and production managers can prioritize maintenance and the purchase of new equipment, revise operating procedures to optimize uptime and continuously improve facility effectiveness.
The Downtime Analyzer works in a client-server architecture and can be used over the network. Any software that needs to be installed on the client requests permission from the server, gets the response and performs downtime analysis.
INTRODUCTION
Downtime Analyzer clearly and accurately identifies both real and virtual downtime events within the production facility by connecting to multiple process control software systems such as HMI/SCADA and OPC. Downtime events are categorized according to cause, location and the company's time-usage model, such as availability, reliability and utilization. Operators and supervisors are alerted to downtime events by a real-time alert messenger, which assists in the timely entry or confirmation of information. This ensures that the data stored is as accurate as possible and has been reviewed by operations prior to being used for decision-making. Downtime events can also be split or merged to accommodate multiple causes for a single event. For example, an electrical fault may have stopped the plant, but a mechanical fault prevented it from starting again. Authorized personnel can also confirm individual events to ensure that no further changes are made to the recorded information without generating an audit trail of changes.
Downtime is delivered via the web and has full filter, search, reporting and diagnostic capabilities, enabling remote personnel to accurately diagnose problems and prioritize resolutions. The client tools provide high-level summary displays with full drill-down capabilities to the raw data beneath, to clearly identify the root causes of systemic problems. Managers can easily navigate Downtime using the explorer tree, which mimics the physical hierarchy of the plant. Data is rolled up the hierarchy so that higher levels of the explorer tree aggregate the data found on lower levels, making summary reporting easier.
Downtime provides plant-centric information to improve uptime and asset utilization.
It becomes even more powerful when integrated with other modules, such as Metrics, Production and Quality, because it can then deliver Key Performance Indicators which allow management to easily track the performance of assets.

The main objective of the system is to develop an interactive application, Downtime Analyzer, that is powerful yet easy to use for managing the entire web site. It has two interfaces: one for the server and one for the client. These two users differ in their functional areas. Both the server and the clients are provided with a username and password, so the security of the system is maintained.
SYSTEM ANALYSIS
SYSTEM STUDY
A system is a combination of resources working together to convert input to output, moving through a series of stages or phases to deliver a system in line with the user requirements. Analysis is a detailed study of the various operations performed by a system and of their relationships within and outside the system.
The study phase is the first phase involved in the creation of a computer-based system. It is the phase in which problems are identified, alternative solutions are evaluated, and the most feasible system identified during the preliminary analysis is recommended. A technical and economic evaluation of the proposed system is conducted.
REQUIREMENT ANALYSIS
Requirement analysis is a software engineering task that bridges the gap between system-level software allocation and software design.
Requirement analysis enables the system engineer to specify software functions and performance, indicate the software's interfaces with other system elements, and establish constraints that the software must meet. Requirement analysis allows the software engineer to refine the software allocation and build models of the data, functional and behavioral domains that will be treated by the software. Requirement analysis provides the software designer with models that can be translated into data, architectural, interface and procedural designs.
Finally, the requirements specification provides the developer and the customer with the means to assess quality, once the software is built.
Software requirement analysis may be divided into five areas of effort:
Problem Recognition
Evaluation and Synthesis
Models of the system
Specification for the software
Review
Initially the analyst studies the system specification and the software project plan. It is important to understand the software in a system context and to review the software scope that was used to generate the planning estimates. Next, communication for analysis must be established so that the basic problem elements, as perceived by the user/customer, are recognised and understood.
Problem evaluation and solution synthesis is the next major area of analysis effort. The analyst defines all externally observable data objects, evaluates the flow and content of information, defines and elaborates all the software functions, understands software behaviour in the context of the events that affect the system, establishes system interface characteristics, and uncovers additional design constraints. Each of these tasks serves to describe the problem so that an overall approach or solution may be synthesized.
During the evaluation and solution synthesis activity, the analyst creates models of the system, in an effort to better understand the data and control flow, functional processing, behavioral operation, and information content. The model serves as a foundation for software design and as the basis for the creation of a specification for the software.
EXISTING SYSTEM

Most people find what they're looking for on the World Wide Web by using search engines like Yahoo!, AltaVista, or Google. It is the search engines that finally bring your website to the notice of prospective customers. Hence it is better to know how these search engines actually work and how they present information to the customer initiating a search. When you ask a search engine to locate information, it is actually searching through the index it has created, not actually searching through the Web. Different search engines produce different rankings because not every search engine uses the same algorithm to search through the indices. Many leading search engines use a form of software program called spiders or crawlers to find information on the Internet and store it for search results in giant databases or indexes. Some spiders record every word on a web site for their respective indexes, while others only record certain keywords listed in title tags or meta tags. Search engines use spiders to index websites: when you submit your website pages to a search engine by completing their required submission page, the search engine spider will index your entire site. A spider is an automated program that is run by the search engine system. Search engine indexing collects, parses, and stores the data to facilitate fast and accurate information retrieval. Spiders are unable to index pictures or read text that is contained within graphics, so relying too heavily on such elements is a concern for online marketers.
PROPOSED SYSTEM
A grid-computing-based search engine provides an alternative approach to reliability that relies more on software technology than on expensive hardware. The systems in a grid can be relatively inexpensive and geographically dispersed. Thus, if there is a power or other kind of failure at one location, the other parts of the grid are not likely to be affected. In critical, real-time situations, multiple copies of the important jobs can be run on different machines throughout the grid. In this grid-based search engine, if we give a topic to search, we get the information about that topic, which may contain web pages, images, etc. The main advantage of this search engine is that it gathers the results from different search engines such as Google, Yahoo Search, etc., avoiding or eliminating duplicates in the result; here the role of the grid comes in.
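The merge-and-deduplicate step described above can be sketched in Java; the class and method names here are illustrative, not part of the actual system:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class ResultMerger {
    // Merge result-URL lists from several engines, preserving first-seen
    // order and dropping duplicates, as the proposed system describes.
    public static List<String> merge(List<List<String>> perEngineResults) {
        LinkedHashSet<String> seen = new LinkedHashSet<>();
        for (List<String> results : perEngineResults) {
            seen.addAll(results); // a URL already seen is silently skipped
        }
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        List<String> google = List.of("a.com", "b.com");
        List<String> yahoo  = List.of("b.com", "c.com");
        System.out.println(merge(List.of(google, yahoo))); // [a.com, b.com, c.com]
    }
}
```

A LinkedHashSet is used so that the first engine to return a URL decides its position in the merged list.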

FEASIBILITY STUDY
All projects are feasible, given unlimited resources and infinite time. Unfortunately, the development of a computer-based system or product is more likely plagued by a scarcity of resources and difficult delivery dates. It is both necessary and prudent to evaluate the feasibility of a project at the earliest possible time.
Feasibility and risk analysis are related in many ways. If project risk is great, the feasibility of producing quality software is reduced. During product engineering, however, we concentrate our attention on four primary areas of interest.
Economic Feasibility
An evaluation of the development cost, weighed against the ultimate income or benefit derived from the development system or product.
Technical Feasibility
A study of the functions, performances and constraints that may affect the ability to achieve an acceptable system.
Legal Feasibility
A determination of any infringement, violation or liability that could result from the development of the system.
Alternative
An evaluation of alternative approaches for the development of the system or product. A feasibility study is not warranted for a system in which economic justification is obvious, technical risk is low, few legal problems are expected, and no reasonable alternatives exist. However, if any of the preceding conditions fails, a study of that area should be conducted.
The feasibility study is reviewed by project management (to assess content reliability) and by upper management (to assess project status). The study should result in a go/no-go decision. It should be noted that other go/no-go decisions will be made during the planning, specification, and development steps of both hardware and software.

SYSTEM REQUIREMENTS AND SPECIFICATION
SOFTWARE SELECTION
Software selection is an important task in a project development cycle. Software must be selected in accordance with the application and the latest technology available. Java Server Pages is the best choice.
JAVA SERVER PAGES
Java Server Pages (JSP) lets us separate the dynamic part of our pages from the static HTML. We simply write the regular HTML in the normal manner, using whatever Web-page building tools we normally use. We then enclose the code for the dynamic parts in special tags, most of which start with <% and end with %>.
We normally give the file a .jsp extension and typically install it any place we could place a normal Web page. Although what we write often looks more like a regular HTML file than a servlet, behind the scenes the JSP page just gets converted to a normal servlet, with the static HTML simply being printed to the output stream associated with the servlet's service method. This is normally done the first time the page is requested, and developers can simply request the page themselves when first installing it if they want to be sure that the first real user doesn't experience a momentary delay while the JSP page is translated to a servlet and the servlet is compiled and loaded. Many Web servers let us define aliases so that a URL that appears to reference an HTML file really points to a servlet or JSP page.
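As a rough illustration of this translation, a page containing `<h1>Hello</h1><%= 2 + 2 %>` might be turned into a servlet-like class along the following lines. The class and method shapes are invented for illustration; real containers such as Tomcat's Jasper generate far more elaborate code:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch of what a container might generate for the page
// "<h1>Hello</h1><%= 2 + 2 %>"; names are invented for illustration.
public class HelloPageServlet {
    public void service(PrintWriter out) {
        out.print("<h1>Hello</h1>"); // static HTML is printed verbatim
        out.print(2 + 2);            // the <%= 2 + 2 %> expression is evaluated into the output
    }

    public static void main(String[] args) {
        StringWriter sw = new StringWriter();
        new HelloPageServlet().service(new PrintWriter(sw, true));
        System.out.println(sw); // <h1>Hello</h1>4
    }
}
```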
Aside from the regular HTML, there are three main types of JSP constructs that we embed in a page: scripting elements, directives and actions. Scripting elements let us specify Java code that will become part of the resultant servlet, directives let us control the overall structure of the servlet, and actions let us specify existing components that should be used, and otherwise control the behaviour of the JSP engine.
The main JSP elements, their syntax and their interpretation:
JSP Expression <%= expression %>: the expression is evaluated and placed in the output.
JSP Scriptlet <% code %>: the code is inserted in the service method.
JSP Declaration <%! code %>: the code is inserted in the body of the servlet class, outside of the service method.
JSP page Directive <%@ page att="val" %>: directions to the servlet engine about general setup.
JSP include Directive <%@ include file="url" %>: a file on the local system to be included when the JSP page is translated into a servlet.
JSP Comment <%-- comment --%>: a comment, ignored when the JSP page is translated into a servlet.
JSP scripting elements let us insert Java code into the servlet that will be generated from the current JSP page. There are three forms:
1. Expressions, of the form <%= expression %>, that are evaluated and inserted into the output.
2. Scriptlets, of the form <% code %>, that are inserted into the servlet's service method, and
3. Declarations, of the form <%! code %>, that are inserted into the body of the servlet class, outside of any existing method.
A JSP directive affects the overall structure of the servlet class. It usually has the following form: <%@ directive attribute="value" %>. However, we can also combine multiple attribute settings for a single directive, as follows:
<%@ directive attribute1="value1"
              attribute2="value2"
              ...
              attributeN="valueN" %>
There are two main types of directives: page, which lets us do things like import classes, customize the servlet superclass, and the like; and include, which lets us insert a file into the servlet class at the time the JSP file is translated into a servlet. The specification also mentions the taglib directive, which is not supported in JSP version 1.0 but is intended to let JSP authors define their own tags.
The page directive lets us define one or more of the following case sensitive attributes:
import="package.class" or import="package.class1,...,package.classN". This lets us specify what packages should be imported. For example: <%@ page import="java.util.*" %>. This attribute is allowed to appear multiple times.
contentType="MIME-Type" or contentType="MIME-Type;charset=Character-Set". This specifies the MIME type of the output. The default is text/html.
session="true|false". A value of true indicates that the predefined variable session should be bound to the existing session if one exists; otherwise a new session should be created and bound to it.
extends="package.class". This indicates the superclass of the servlet that will be generated.
info="message". This defines a string that can be retrieved via the getServletInfo method.
errorPage="url". This specifies a JSP page that should process any Throwables thrown but not caught in the current page.
The include directive lets us include files at the time the JSP page is translated into a servlet. The directive looks like this:
<%@ include file="relative url" %>
The URL specified is normally interpreted relative to the JSP page that refers to it but, as with relative URLs in general, you can tell the system to interpret the URL relative to the home directory of the Web server by starting the URL with a forward slash. The contents of the included file are parsed as regular JSP text, and thus can include static HTML, scripting elements, directives and actions.
The client-server model
In a client-server model, two computers work together to perform a task. A client computer requests some needed information from a server computer. This server returns this information, and the client acts on it.
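A minimal sketch of this request/response exchange, using plain Java sockets; the names and the one-shot line protocol are illustrative only:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoDemo {
    // Start a one-shot server, send msg as the client, and return the reply.
    public static String roundTrip(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0); // port 0 picks any free port
        Thread serverThread = new Thread(() -> {
            // Server side: accept one connection and answer the request line.
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("RESPONSE: " + in.readLine());
            } catch (IOException ignored) { }
        });
        serverThread.start();

        // Client side: request the needed information and read the reply.
        String reply;
        try (Socket s = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            reply = in.readLine();
        }
        serverThread.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("status")); // RESPONSE: status
    }
}
```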
Web-Server
A web server is a computer that contains all the web pages for a particular web site and has special software installed to send these web pages to the web browsers that request them. When a web browser requests a JSP page, the following steps occur.
1. The client locates the web server specified by the first part of the URL.
2. The client then requests the JSP page specified by the second part of the URL.
3. The web server reads the JSP file and processes the code.
4. After the JSP page has been completely processed by the web server, the output is sent in HTML format to the client.
5. The client receives the HTML sent by the server and renders it for the user.
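The five steps above can be sketched with the JDK's built-in com.sun.net.httpserver package standing in for the web server; the /page path and the HTML body are invented for the example:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MiniWebServer {
    public static String fetch() throws Exception {
        // Steps 3-4: the web server processes the request and emits HTML.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/page", exchange -> {
            byte[] body = "<html><body>Hello</body></html>".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            // Steps 1-2 and 5: the client locates the server, requests the
            // page by its URL, and receives the HTML.
            URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/page");
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                return in.readLine();
            }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch()); // <html><body>Hello</body></html>
    }
}
```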
A JSP application is simply one or more web pages with additional scripting commands executed on the server. A JSP page is composed primarily of the following elements.
Server-side includes (optional)
HTML code
Script delimiters
Script codes
JSP objects(optional)
JAVA SCRIPT
JavaScript is a very new language, even newer than Java. Despite its newness, it has attracted great attention because of its expressive power. JavaScript has the power to create more attractive, dynamic and interesting web pages. No prior programming experience is required to write JavaScript, but some knowledge of HTML and Web page authoring is assumed.
JavaScript is mostly used for client-side scripting, mainly for validating user input. Without such validation, invalid user input either causes the data to be sent back from the web server to the browser or gives rise to an error. Web browsers like Netscape do not support VBScript but are able to support JavaScript. As JavaScript is difficult to implement on the server, it is not used for server-side scripting.
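Client-side checks of this kind are commonly mirrored on the server as well. A sketch in Java of such validation rules follows; the field names and patterns are assumptions for illustration, not taken from the actual system:

```java
public class InputValidator {
    // Mirrors the kind of checks client-side JavaScript would perform
    // before a form is submitted; patterns are illustrative only.
    public static boolean isValidEmail(String mail) {
        return mail != null && mail.matches("[\\w.+-]+@[\\w-]+(\\.[\\w-]+)+");
    }

    public static boolean isValidPin(String pin) {
        return pin != null && pin.matches("\\d{6}"); // e.g. a 6-digit postal PIN
    }

    public static void main(String[] args) {
        System.out.println(isValidEmail("user@example.com")); // true
        System.out.println(isValidEmail("not-an-email"));     // false
        System.out.println(isValidPin("682001"));             // true
    }
}
```

Validating on both sides is the usual practice: client-side checks save a round trip, while server-side checks protect against requests that bypass the browser.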
JavaScript is an object-oriented language that allows the creation of interactive web pages. JavaScript allows user entries, which are loaded into an HTML form, to be processed as required. This empowers a web site to return information according to the user's requests. JavaScript is traditionally embedded into a standard HTML program, between the <SCRIPT>...</SCRIPT> HTML tags. It is embedded into an HTML program because JavaScript relies on the HTML file and the HTTP protocol to transport itself from the web server to the client's browser, where the JavaScript executes and processes client information. Only a browser that is JavaScript-enabled will be able to interpret JavaScript code.
The JavaScript Document Object Model
Using the Document Object Model, JavaScript-enabled browsers identify the collection of web page objects that have to be dealt with while rendering an HTML-based web page in the browser.
The HTML objects which belong to the DOM have a descending relationship with each other. The topmost object in the DOM is the Navigator itself. The next level in the DOM is the browser's window. The next level is the Document displayed in the browser's window. If the document displayed in the browser's window has an HTML Form coded in it, then the next level in the DOM is the Form itself. The DOM hierarchy continues downward to encompass individual elements on a Form, such as text boxes, labels, radio buttons, check-boxes and so on, which belong to the form. JavaScript's object hierarchy is mapped to the DOM, which in turn is mapped to the web page elements in the browser window.
No HTML object is registered in the DOM by a JavaScript-enabled browser unless it is assembled in memory prior to being rendered in the browser window. That is, if the document does not have any links described in it, then the page's link object will exist but it will be empty. Each object exists in a set relationship with the other objects on the web page. Other objects currently recognised by a JavaScript-enabled browser are plug-ins, applets and images.
Apache Tomcat
Apache Tomcat is an implementation of the Java Servlet and JavaServer Pages technologies from Sun Microsystems, and provides a "pure Java" HTTP web server environment for Java code to run. The Java Servlet and JavaServer Pages specifications are developed under the Java Community Process. Apache Tomcat is developed in an open and participatory environment and released under the Apache Software License. Apache Tomcat is intended to be a collaboration of the best-of-breed developers from around the world.
Apache Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. Tomcat should not be confused with the Apache web server, which is a C implementation of an HTTP web server; these two web servers are not bundled together. Apache Tomcat includes tools for configuration and management, but can also be configured by editing XML configuration files.
Tomcat 5.x
implements the Servlet 2.4 and JSP 2.0 specifications
reduced garbage collection, improved performance and scalability
native Windows and Unix wrappers for platform integration
faster JSP parsing
Members of the ASF and independent volunteers develop and maintain Tomcat. Users have free access to the source code and to the binary form of Tomcat under the Apache License. The initial Tomcat release appeared with versions 3.0.x (previous releases were Sun internal releases, and were not publicly released). Tomcat 6.0.18 is the latest production quality release of the 6.0.x trunk (the branch for the 2.5 servlet specification), as of 2008.
Catalina
Catalina is Tomcat's servlet container. Catalina implements Sun Microsystems' specifications for servlet and JavaServer Pages (JSP). The architect for Catalina was Craig McClanahan.
SQL
MySQL is the most popular open source database server in existence. On top of that, it is very commonly used in conjunction with server-side scripting to create powerful and dynamic web applications.
MySQL has been criticized in the past for not supporting all the features of other popular and more expensive database management systems. However, MySQL continues to improve with each release, and it has become widely popular with individuals and businesses of many different sizes.
Features of MySQL:
Handles large databases.
Written in C and C++.
Tested with a broad range of different compilers.
Works on many different platforms.
The server design is multi-layered with independent modules.
Fully multi-threaded using kernel threads, so it can easily use multiple CPUs if they are available.
SQL functions are implemented using a highly optimized class library.
Fixed-length and variable-length records.
A privilege and password system that is very flexible and secure, and that allows host-based verification.
Passwords are secure because all password traffic is encrypted when you connect to a server.
Support for many data types.
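One of the features listed above is password security. As an illustrative aside (not part of the original system), stored passwords are commonly protected by hashing rather than kept in plain text; here is a minimal sketch with the JDK's MessageDigest. A production system would also salt the hash and use a deliberately slow algorithm such as bcrypt:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PasswordHash {
    // Compute the SHA-256 digest of the input and return it as lowercase hex.
    public static String sha256Hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256Hex("abc"));
        // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    }
}
```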
SOFTWARE & HARDWARE SPECIFICATION
SOFTWARE SPECIFICATION
OPERATING SYSTEM: Windows/Linux
FRONT-END: Java Server Pages
BACK-END: SQL Server
HARDWARE SPECIFICATION
Processor: Pentium III or higher
RAM: 128 MB
Hard Disk: 8 GB or more
Monitor: SVGA colour monitor
Keyboard: 104-key standard keyboard
Mouse: 3-button serial, USB or PS/2 mouse
Floppy Disk Drive: 1.44 MB
CD-ROM Drive: 24x
SYSTEM DESIGN
Design is the first step in the development phase for any engineered product or system. It may be defined as "the process of applying various techniques and principles for the purpose of defining a device, a process or a system in sufficient detail to permit its physical realisation". Computer software design, like engineering design approaches in other disciplines, changes continually as new methods, better analysis and broader understanding evolve.

Using one of a number of design methods, the design step produces a data design, an architectural design and a procedural design. Preliminary design is concerned with the transformation of requirements into data and software architectures. Detailed design focuses on refinements to the architectural representations that lead to detailed data structures and algorithmic representations for the software. The data design transforms the information domain model created during analysis into the data structures that will be required to implement the software. The architectural design defines the relationships among the major structural components and refines them into a procedural description of the software.
The components of an information system described during requirement analysis are the focal points in system design. The analyst must design the following elements:
Data Flows
Data Stores
Processes
Procedures
Controls
Roles
Data flow analysis permits analysts to isolate areas of interest in the organisation and to study them by examining the data that enter a process and seeing how they are changed when they leave it. In Data Flow Diagrams, the physical system is translated into a logical description that focuses on data and processes. It is advantageous to emphasise data and processes in order to focus on the actual activities and the resources needed to perform them, rather than on who performs the work.
LOGICAL DESIGN
In the logical design, description of the inputs, outputs, databases and procedures are given in a format that meets the requirements.
DATA FLOW DIAGRAMS
A Data Flow Diagram (DFD) is used to define the flow of the system and its resources, such as information. Data Flow Diagrams are a way of expressing system requirements in a graphical manner. The DFD represents one of the most ingenious tools used for structured analysis. A DFD is also known as a "bubble chart". Its purpose is to clarify system requirements and identify major transformations that will become programs in system design.
In the normal convention, a logical DFD can be completed using only four notations:
A square represents a source or destination of data.
An arrow represents a data flow.
A circle (bubble) represents a process that transforms incoming data flow into outgoing flow.
An open rectangle represents a data store.
The DFD at the simplest level is referred to as the "context analysis diagram". These are expanded level by level, each explaining its process in detail. Processes are numbered for easy identification and are normally labelled in block letters. Each data flow is labelled for easy understanding.

(The DFD figures are not reproduced here; each level is listed with the data stores that appear in it.)
CONTEXT LEVEL: DOWNTIME ANALYZER (DTA)
LEVEL 1: REGISTRATION (Server details, Client details)
LEVEL 2: SERVER (Server_details)
LEVEL 2.1 (Client_details, Product_details, Download_details)
LEVEL 2.2 (Category_details, Product_details)
LEVEL 2.3 (Client_details, Category_details, Download_details)
LEVEL 2.4 (Download_details, Transaction_details)
LEVEL 3: CLIENT (Client_details)
PHYSICAL DESIGN
INPUT DESIGN
The collection of input data is the most expensive part of the system in terms of the equipment used and the number of people involved. In input design, data is accepted for computer processing, and input into the system is done through input screens or links.
In this project the user interface is built using a highly flexible and efficient input design. Input design is the process of converting user inputs into a computer-based format. The project requires a set of information from the user to prepare a report. In order to prepare a report, well-organised input data are needed from the external application, which requires high security. External applications such as banking, military, etc., present data in the form of messages or information in plain text files, which need to be typed in.
During design time, the programming task is accomplished by using the keyboard and mouse to visually design and write the application. Here we select controls such as push buttons, radio buttons and scroll bars with the mouse and drag them onto the application or the designed dialog as it is built.
OUTPUT DESIGN
Output is the most important and direct source of information to the user and to the management. Efficient and legible output design improves the system's relationship with the user and helps in decision making. Output design generally deals with the results generated by the system, i.e., reports. These reports can be generated from stored or calculated values.
Reports are displayed either as a screen preview or in printed form. Most end users will not actually operate the information system or enter data through workstations, but they will use the output from the system. The output details should cover all the facilities of the server and the client.

DATABASE DESIGN
The general theme behind a database is to handle information as an integrated whole. A database is a collection of interrelated data stored with minimum redundancy to serve many users quickly and efficiently. The general objective is to make information access easy, quick, inexpensive and flexible for the user.
In a database environment, common data are available which several users can use. The concept behind a database is an integrated collection of data that provides centralized access to the data from programs. It makes it possible to treat data as a separate resource. While designing a database, several objectives must be considered:
Controlled Redundancy
Data Independence
More information at low cost
Accuracy and Integrity
Recovery from failure
Privacy and Security
Performance
TABLES
Table 1: Category_details
Field Name      Data Type   Field Size   Description
Cname           Varchar     50           Category name
cid             Int         4            Category id
Subname         Varchar     50           Subcategory name

Table 2: Client_details
Field Name      Data Type   Field Size   Description
userid          Int         4            User id
name            Varchar     50           Name
address         Varchar     50           Address
pin             Int         4            Pin
phone           Varchar     50           Phone
mail            Varchar     50           Mail
regdate         Datetime    8            Registration date
login           Varchar     50           Login
password        Varchar     50           Password

Table 3: Product_details
Field Name      Data Type   Field Size   Description
pid             Int         4            Product id
cid             Int         4            Category id
pro_name        Varchar     50           Product name
trial_days      Int         4            Trial days
launch_date     Datetime    8            Launching date
status          Varchar     50           Status

Table 4: Server_details
Field Name      Data Type   Field Size   Description
logid           Int         4            Login id
logname         Varchar     50           Login name
pswd            Varchar     50           Password
ip              Varchar     50           IP address

Table 5: Transaction_details
Field Name      Data Type   Field Size   Description
transactionid   Int         4            Transaction id
dateofdownload  Varchar     50           Date of download
clientid        Varchar     50           Client id
productid       Int         4            Product id
expirydate      Varchar     50           Expiry date
location        Varchar     50           Location
SYSTEM IMPLEMENTATION AND TESTING
SYSTEM IMPLEMENTATION
Implementation is a key stage in achieving a successful new system because it usually involves a lot of upheaval in the user departments. It is the stage of the project where the theoretical design is turned into a working system. It must therefore be carefully planned and controlled.
An important aspect of the system analyst's job is to make sure that the design is implemented to the established standards. Implementation ranges from the conversion of a basic application to the complete replacement of a computer system. It is the process of converting a new or revised system design into an operational one: a translation of the design abstraction into a physical realization, using a programming language and architecture.
Implementation includes all the activities that take place to convert from the old system to the new. The new system may be totally new, replacing an existing manual or automated system. Proper implementation is essential to provide a reliable system that meets the organization's requirements.
IMPLEMENTATION ASPECT
Implementing a new computer system to replace an existing one is the more difficult kind of conversion; if not properly planned, there can be many problems.
During this phase the product structure, the underlying data structures, the general algorithms, and the interfaces and linkages among the various substructures are established. The algorithms and data structures developed during design, based on the requirement specifications, were converted into running programs. All the Java programs were compiled to executable versions.

SYSTEM TESTING
System testing is the stage of implementation that is aimed at ensuring that the system works accurately and efficiently before live operation commences. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. The candidate system is subjected to a variety of tests. A series of tests is performed before the proposed system is ready for user acceptance testing.
White Box Testing
This testing is conducted during the code-generation phase itself, and all errors were rectified at the moment of discovery. During white box testing, it is ensured that:
1. All independent paths within a module have been exercised at least once.
2. All logical decisions are exercised on both their true and false sides.
3. All loops are executed at their boundaries.
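The three criteria above can be sketched in Java. This is a minimal illustration, not code from the project: `maxOfRange` is a hypothetical method containing one decision and one loop, and the checks exercise every independent path, both sides of the decision, and the loop at its zero, one, and full-range boundaries.

```java
// White-box sketch: a hypothetical method with one decision and one
// loop, plus checks covering paths, branch outcomes, and loop bounds.
public class WhiteBoxDemo {

    // Returns the largest value in values[from..to), or fallback
    // when the range is empty (loop body never executes).
    static int maxOfRange(int[] values, int from, int to, int fallback) {
        int max = fallback;
        for (int i = from; i < to; i++) {      // loop under test
            if (values[i] > max) {             // decision under test
                max = values[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        int[] data = {3, 9, 1};
        // Loop executed zero times (boundary): fallback is returned.
        assert maxOfRange(data, 0, 0, -1) == -1;
        // Loop executed exactly once: decision taken on its true side.
        assert maxOfRange(data, 0, 1, -1) == 3;
        // Full range: decision exercised on both true and false sides.
        assert maxOfRange(data, 0, 3, -1) == 9;
        System.out.println("white-box checks passed");
    }
}
```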
Black Box Testing
Black box testing focuses on the functional requirements of the software. It is not an alternative to white box testing; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods.
Black box testing attempts to find errors in the following categories:
Incorrect or missing functions.
Interface errors.
Errors in data structures or external database access.
Performance errors.
Testing should begin in the small and progress toward testing in the large.
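A black box check derives its cases purely from the stated specification, never from the code. The sketch below uses a hypothetical classifier (not part of the project) whose contract is given in the comment; the checks probe the contract's boundaries, where incorrect or missing functions typically hide.

```java
// Black-box sketch: the tester sees only this method's specification
// (inputs and expected outputs), not its implementation.
public class BlackBoxDemo {

    // Spec: negative -> "invalid", 0 -> "none",
    // 1..60 minutes -> "minor", above 60 -> "major".
    static String classifyDowntime(int minutes) {
        if (minutes < 0) return "invalid";
        if (minutes == 0) return "none";
        return (minutes <= 60) ? "minor" : "major";
    }

    public static void main(String[] args) {
        // Cases chosen from the contract alone, at its boundaries.
        assert classifyDowntime(-5).equals("invalid");
        assert classifyDowntime(0).equals("none");
        assert classifyDowntime(60).equals("minor");   // boundary
        assert classifyDowntime(61).equals("major");   // boundary + 1
        System.out.println("black-box checks passed");
    }
}
```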
Unit Testing
During unit testing, the number of arguments passed is compared with the number of input parameters, and parameter and argument types are matched. It is also verified that file attributes are correct, that files are opened before use, and that input/output errors are handled. A unit test is usually conducted using a test driver.
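A test driver of this kind can be sketched as follows. The unit `countLines` and the driver are hypothetical, not taken from the project: the driver sets up the inputs, calls the unit, and verifies both the normal case and the handling of an input error (a file that cannot be opened).

```java
import java.io.*;
import java.nio.file.*;

// Sketch of a unit plus a throwaway test driver. countLines is a
// hypothetical unit; the driver checks that the file is opened before
// use and that an I/O error is handled inside the unit, not propagated.
public class UnitTestDriver {

    // The unit under test: counts lines, returning -1 on I/O failure.
    static int countLines(String path) {
        try (BufferedReader r = new BufferedReader(new FileReader(path))) {
            int n = 0;
            while (r.readLine() != null) n++;
            return n;
        } catch (IOException e) {
            return -1;   // input error handled within the unit
        }
    }

    // The test driver: prepares inputs, calls the unit, checks results.
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("unit", ".txt");
        Files.write(tmp, "a\nb\nc\n".getBytes());
        assert countLines(tmp.toString()) == 3;        // normal case
        assert countLines("no-such-file.txt") == -1;   // error case
        Files.delete(tmp);
        System.out.println("unit tests passed");
    }
}
```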
Integration Testing
Bottom-up integration is used in this phase. It begins construction and testing with atomic modules, and the strategy is implemented with the following steps:
1. Low-level modules are combined into clusters that each perform a specific software sub-function.
2. Each cluster is tested.
3. Clusters are combined, moving upward in the program structure.
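The steps above can be sketched in miniature. The module names here (`EventStore`, `DurationCalc`, `DowntimeSummary`) are illustrative, not the project's actual modules: two atomic modules are verified first, then combined into a cluster that is tested as a unit before integration moves upward.

```java
import java.util.*;

// Bottom-up integration sketch with hypothetical module names:
// atomic modules are tested, combined into a cluster, and the
// cluster is tested before moving up the program structure.
public class BottomUpDemo {

    // Atomic module 1: stores downtime durations in minutes.
    static class EventStore {
        private final List<Integer> durations = new ArrayList<>();
        void record(int minutes) { durations.add(minutes); }
        List<Integer> all() { return durations; }
    }

    // Atomic module 2: pure computation over a list of durations.
    static class DurationCalc {
        static int total(List<Integer> durations) {
            int sum = 0;
            for (int d : durations) sum += d;
            return sum;
        }
    }

    // Cluster: the two atomic modules combined into one sub-function.
    static class DowntimeSummary {
        private final EventStore store = new EventStore();
        void log(int minutes) { store.record(minutes); }
        int totalDowntime() { return DurationCalc.total(store.all()); }
    }

    public static void main(String[] args) {
        // Step 1: an atomic module verified in isolation.
        assert DurationCalc.total(Arrays.asList(5, 10)) == 15;
        // Step 2: the cluster is tested as a unit.
        DowntimeSummary summary = new DowntimeSummary();
        summary.log(5);
        summary.log(10);
        assert summary.totalDowntime() == 15;
        System.out.println("cluster tests passed");
    }
}
```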
Alpha Testing
A series of acceptance tests was conducted by the Tejas administrator. The suggestions, along with the additional requirements of the end user, were incorporated into the project.
Beta Testing
Beta testing is to be conducted by the end users without the presence of the developer, and it can run over a period of weeks or months. Since it is a long, time-consuming activity, its results fall outside the scope of this project report, but they will help to enhance the product at a later time.

MAINTENANCE
Maintenance is an unavoidable part of software engineering. It is the process of managing the changes made to the software. Once installed, an application is used for many years; however, user requirements change over time, so the application will undoubtedly have to be maintained. Modifications and changes will be made to the software, files, or procedures to meet emerging user requirements.
BENEFITS AT A GLANCE
Delivers the information needed to improve efficiency, optimize uptime and increase profit.
Breaks down complex issues into common causes allowing managers to proactively address problems.
Identifies the cause and effects of the production process allowing managers to prioritize maintenance and optimize uptime.
Increases visibility into the operating process and equipment dynamics enabling continuous, incremental improvements to reduce costs and streamline operations.
Automatically captures downtime events, removing the inaccuracies and inconsistencies of manual systems.

CONCLUSION
The software we aimed to develop, if implemented, can work successfully in any client-server architecture. Downtime Analyzer is highly relevant in today's environment. Any number of products may be installed on a client system; Downtime Analyzer allows software to be installed on the client only with prior permission from the server. Besides this, it keeps track of the software life cycle and alerts the server with appropriate messages or mails when a product reaches its expiry. Downtime Analyzer finds extensive application, particularly in the production and manufacturing areas.