Electrical Seminar Abstract And Report 7
#1

Lightning Protection Using LFA-M

Introduction
A new, simple, effective and inexpensive method for lightning protection of medium-voltage overhead distribution lines uses long flashover arresters (LFAs). A new long flashover arrester model, designated LFA-M, has been developed. It offers a great number of technical and economic advantages.

The important feature of this modular long flashover arrester (LFA-M) is that it can be applied for lightning protection of overhead distribution lines against both induced overvoltages and direct lightning strokes. Induced overvoltages can be counteracted by installing a single arrester on an overhead line support (pole). For protection against direct lightning strokes, arresters are connected between the poles and all of the phase conductors, in parallel with the insulators.

Lightning is an electrical discharge between a cloud and the earth, between clouds, or between the charge centers of the same cloud. It is a huge spark that takes place when a cloud is charged to such a high potential with respect to an earthed object (e.g. an overhead line) or a neighboring cloud that the dielectric strength of the intervening medium (air) breaks down.

TYPES OF LIGHTNING STROKES

There are two main ways in which lightning may strike the power system. They are:
1. Direct stroke
2. Indirect stroke

Direct Stroke
In a direct stroke, the lightning discharge passes directly from the cloud to the overhead line. From the line, the current path may be over the insulators and down the pole to the ground. The overvoltage set up by the stroke may be large enough to flash over this path directly to the ground. Direct strokes are of two types:

1. Stroke A
2. Stroke B

In stroke A, the lightning discharge is from the cloud to the subject equipment (e.g. an overhead line). The cloud induces a charge of opposite sign on the tall object. When the potential between the cloud and the line exceeds the breakdown value of air, the lightning discharge occurs between the cloud and the line.

In stroke B, the lightning discharge occurs on the overhead line as the result of a discharge between clouds. Consider three clouds P, Q and R having positive, negative and positive charge respectively, with the charge on cloud Q bound by cloud R. If cloud P shifts too near to cloud Q, a lightning discharge occurs between them and the charges on both clouds disappear quickly. As a result, the charge on cloud R suddenly becomes free and discharges rapidly to earth, ignoring tall objects.
Wideband Sigma Delta PLL Modulator

Introduction
The proliferation of wireless products over the past few years has been rapid. New wireless standards such as GPRS and HSCSD have brought new challenges to wireless transceiver design. One pivotal component of a transceiver is the frequency synthesizer. Two major requirements in mobile applications are efficient utilization of the frequency spectrum by narrowing the channel spacing, and fast switching for high data rates. Both can be addressed with a fractional-N PLL architecture, which is capable of synthesizing frequencies at channel spacings less than the reference frequency. This allows a higher reference frequency and also reduces the PLL's lock time.

A fractional-N PLL has the disadvantage that it generates spurious tones at multiples of the channel spacing. Using digital sigma-delta modulation techniques, we can randomize the frequency division ratio so that the quantization noise of the divider is transferred to high frequencies, thereby eliminating the spurs.
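
As a rough illustration of how a sigma-delta modulator dithers the division ratio, here is a minimal sketch using a first-order (accumulator) modulator in Python. The integer divide value and fractional word are made-up example parameters, and a first-order loop still produces a periodic pattern (one reason higher-order MASH modulators are preferred in practice), but it shows the basic idea of switching between N and N+1 so that the average ratio is fractional.

# Sketch: first-order sigma-delta modulator driving a fractional-N divider.
# Each reference cycle the divider uses either N or N+1; the long-run
# average equals N + frac, while the instantaneous choice is noise-shaped.
def sigma_delta_division(n_int, frac, cycles):
    acc = 0.0
    ratios = []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:                # accumulator overflow -> divide by N+1
            acc -= 1.0
            ratios.append(n_int + 1)
        else:                         # no overflow -> divide by N
            ratios.append(n_int)
    return ratios

seq = sigma_delta_division(n_int=64, frac=0.3, cycles=20)
print(seq)
print("average division ratio:", sum(seq) / len(seq))   # 64.3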

Conventional PLL

The advantage of the conventional PLL modulator is that it offers fine frequency resolution, wide tuning bandwidth and fast switching speed. However, it has insufficient bandwidth for current wireless standards such as GSM, so it cannot be used as a closed-loop modulator for the Digital Enhanced Cordless Telecommunications (DECT) standard. It efficiently filters out quantization noise and reference feedthrough only for a sufficiently small loop bandwidth.

Wide Band PLL

For wider-bandwidth applications the loop bandwidth is increased, but this results in residual spurs. This is because the requirement that the quantization noise be uniformly distributed is violated: since the modulator is used for frequency synthesis, its input is a dc value, which produces tones even when higher-order modulators are used. With a single-bit output the level of quantization noise is lower, but with multi-bit outputs the quantization noise increases.

The range of stability of the modulator is therefore reduced, which reduces the tuning range. Moreover, the hardware complexity of this modulator is higher than that of a MASH modulator. In this feedback/feed-forward modulator the loop bandwidth was limited to nearly three orders of magnitude below the reference frequency, so if it is used as a closed-loop modulator the power dissipation will increase.

In order to widen the loop bandwidth, the close-in phase noise must be kept within tolerable levels and the rise of the quantization noise must be limited to meet the phase noise requirements at high frequency offsets. At low frequencies (dc) the modulator transfer function has a zero, which adds phase noise. For that reason the zero is moved away from dc to a frequency equal to some multiple of the fractional division ratio. This introduces a notch at that frequency, which reduces the total quantization noise. The quantization noise of the modified modulator is between 1.7 and 4.25 times smaller than that of a MASH modulator.

At higher frequencies the quantization noise causes distortion in the response. This is because the step size of a multi-bit modulator is the same as that of a single-bit modulator, so more phase distortion occurs in multi-bit PLLs. To reduce quantization noise at high frequencies, the step size is reduced by producing fractional division ratios. This is achieved by using a phase-selection divider instead of the control logic in the conventional modulator. This divider produces phase-shifted versions of the VCO signal and changes the division ratio by selecting different phases of the VCO, yielding quarter-step division ratios.
Bioinformatics

Introduction

Rapid advances in bioinformatics are providing new hope to patients with life-threatening diseases. Gene chips will be able to screen for heart disease and diabetes years before patients develop symptoms. In the near future, patients will go to a doctor's clinic with lab-on-a-chip devices. The device will inform the doctor in real time whether the patient's ailment will respond to a drug, based on his DNA.

These will help doctors diagnose life-threatening illness faster, eliminating expensive, time-consuming ordeals like biopsies and sigmoidoscopies. Gene chips reclassify diseases based on their underlying molecular signals, rather than misleading surface symptoms. The chip would also confirm the patient's identity and even establish paternity.

Bioinformatics is an interdisciplinary research area. It is a fusion of computing, biotechnology and the biological sciences. Bioinformatics is poised to be one of the most prodigious growth areas in the next two decades. Being the interface between the most rapidly advancing fields of biological and computational sciences, it is immense in scope and vast in applications.

Bioinformatics is the study of biological information as it passes from its storage site in the genome to the various gene products in the cell. Bioinformatics involves the creation of computational technologies for problems in molecular biology. As such, it deals with methods for storing, retrieving and analyzing biological data, such as nucleic acid (DNA/RNA) and protein sequences, structures, functions, pathways and interactions. The science of bioinformatics, which is the melding of molecular biology with computer science, is essential to the use of genomic information in understanding human diseases and in the identification of new molecular targets for drug discovery.

New discoveries are being made in the field of genomics, an area of study which looks at the DNA sequence of an organism in order to determine which genes code for beneficial traits and which genes are involved in inherited diseases. If you are not tall enough, your stature could be altered accordingly. If you are weak and not strong enough, your physique could be improved. If you think this is the script for a science fiction movie, you are mistaken. It is the future reality.

Evolution Of Bioinformatics

DNA is the genetic material of an organism. It contains all the information needed for the development and existence of the organism. The DNA molecule is formed of two long polynucleotide chains which are spirally coiled on each other, forming a double helix; it thus has the form of a spirally twisted ladder. DNA is a molecule made from sugar, phosphate and bases.
The bases are guanine (G), cytosine (C), adenine (A) and thymine (T). Adenine pairs only with thymine, and guanine pairs only with cytosine. The various combinations of these bases make up the DNA, for example AAGCT, CCAGT, TACGGT and so on; an enormous number of combinations of these bases is possible. A gene is a sequence of DNA that represents a fundamental unit of heredity. The human genome consists of approximately 30,000 genes, containing approximately 3 billion base pairs.
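
The base-pairing rule described above (adenine with thymine, guanine with cytosine) can be written down in a few lines of Python; this is only an illustrative sketch, and the input strings are arbitrary examples.

# Complementary strand of a DNA sequence using the pairing rules
# A<->T and G<->C described above.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    return "".join(PAIR[base] for base in seq.upper())

print(complement("AAGCT"))   # TTCGA
print(complement("CCAGT"))   # GGTCA
print(4 ** 10)               # number of distinct 10-base sequences (4^n grows very fast)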
Extreme Ultraviolet Lithography

Introduction
Silicon has been the heart of the world's technology boom for nearly half a century, but microprocessor manufacturers have all but squeezed the life out of it. The current technology used to make microprocessors will begin to reach its limit around 2005. At that time, chipmakers will have to look to other technologies to cram more transistors onto silicon to create more powerful chips. Many are already looking at extreme-ultraviolet lithography (EUVL) as a way to extend the life of silicon at least until the end of the decade.

Potential successors to optical projection lithography are being aggressively developed. These are known as "Next-Generation Lithographies" (NGL's). EUV lithography (EUVL) is one of the leading NGL technologies; others include x-ray lithography, ion-beam projection lithography, and electron-beam projection lithography. Using extreme-ultraviolet (EUV) light to carve transistors in silicon wafers will lead to microprocessors that are up to 100 times faster than today's most powerful chips, and to memory chips with similar increases in storage capacity.

Extreme ultraviolet lithography (EUVL) is an advanced technology for making microprocessors a hundred times more powerful than those made today.

EUVL is one technology vying to replace the optical lithography used to make today's microcircuits. It works by reflecting intense beams of extreme ultraviolet light from a circuit design pattern onto a silicon wafer. EUVL is similar to optical lithography, in which light is refracted through camera lenses onto the wafer. However, extreme ultraviolet light, operating at a different wavelength, has different properties and must be reflected from mirrors rather than refracted through lenses. The challenge is to build mirrors perfect enough to reflect the light with sufficient precision.

EUV RADIATION

Ultraviolet radiation has a very short wavelength and correspondingly high photon energy. If the wavelength is reduced further, it becomes extreme ultraviolet radiation. Current lithography techniques have been pushed just about as far as they can go. They use light in the deep ultraviolet range, at about 248-nanometer wavelengths, to print 150- to 120-nanometer-size features on a chip. (A nanometer is a billionth of a meter.) In the next half dozen years, manufacturers plan to make chips with features measuring from 100 to 70 nanometers, using deep ultraviolet light of 193- and 157-nanometer wavelengths. Beyond that point, smaller features require wavelengths in the extreme ultraviolet (EUV) range. Light at these wavelengths is absorbed, rather than transmitted, by conventional lenses.
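
A common rule of thumb relating wavelength to the smallest printable feature is the Rayleigh criterion, R = k1 x wavelength / NA. The sketch below applies it with assumed, typical values of k1 and NA and the commonly quoted 13.5 nm EUV wavelength; none of these figures come from the abstract itself.

# Rayleigh resolution estimate R = k1 * wavelength / NA (rule of thumb).
# k1 = 0.4 and NA = 0.6 are assumed, typical values for illustration only.
def min_feature(wavelength_nm, k1=0.4, na=0.6):
    return k1 * wavelength_nm / na

for wl in (248.0, 193.0, 157.0, 13.5):   # DUV lines and the 13.5 nm EUV line
    print(f"{wl:6.1f} nm light -> roughly {min_feature(wl):5.1f} nm features")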
Lithography

Introduction
Computers have become much more compact and increasingly powerful largely because of lithography, a basically photographic process that allows more and more features to be crammed onto a computer chip.

Lithography is akin to photography in that it uses light to transfer images onto a substrate. Light is directed onto a mask-a sort of stencil of an integrated circuit pattern-and the image of that pattern is then projected onto a semiconductor wafer covered with light-sensitive photoresist. Creating circuits with smaller and smaller features has required using shorter and shorter wavelengths of light.

Continuously variable transmission (CVT)

Introduction
After more than a century of research and development, the internal combustion (IC) engine is nearing both perfection and obsolescence: engineers continue to explore the outer limits of IC efficiency and performance, but advancements in fuel economy and emissions have effectively stalled. While many IC vehicles meet Low Emissions Vehicle standards, these will give way to new, stricter government regulations in the very near future. With limited room for improvement, automobile manufacturers have begun full-scale development of alternative power vehicles. Still, manufacturers are loath to scrap a century of development and billions or possibly even trillions of dollars in IC infrastructure, especially for technologies with no history of commercial success. Thus, the ideal interim solution is to further optimize the overall efficiency of IC vehicles.

One potential solution to this fuel economy dilemma is the continuously variable transmission (CVT), an old idea that has only recently become a bastion of hope to automakers. CVTs could potentially allow IC vehicles to meet the first wave of new fuel regulations while development of hybrid electric and fuel cell vehicles continues. Rather than selecting one of four or five gears, a CVT constantly changes its gear ratio to optimize engine efficiency with a perfectly smooth torque-speed curve. This improves both gas mileage and acceleration compared to traditional transmissions. The fundamental theory behind CVTs has undeniable potential, but lax fuel regulations and booming sales in recent years have given manufacturers a sense of complacency: if consumers are buying millions of cars with conventional transmissions, why spend billions to develop and manufacture CVTs?

Although CVTs have been used in automobiles for decades, limited torque capabilities and questionable reliability have inhibited their growth. Today, however, ongoing CVT research has led to ever-more robust transmissions, and thus ever-more-diverse automotive applications. As CVT development continues, manufacturing costs will be further reduced and performance will continue to increase, which will in turn increase the demand for further development. This cycle of improvement will ultimately give CVTs a solid foundation in the world's automotive infrastructure.

CVT Theory & Design

Today's automobiles almost exclusively use either a conventional manual or automatic transmission with "multiple planetary gear sets that use integral clutches and bands to achieve discrete gear ratios". A typical automatic uses four or five such gears, while a manual normally employs five or six. The continuously variable transmission replaces discrete gear ratios with infinitely adjustable gearing through one of several basic CVT designs.
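
To see why an infinitely adjustable ratio helps, the sketch below compares the engine speed obtained with a fixed set of gear ratios against an ideal CVT that can pick any ratio within its range. The wheel speed, target engine speed and all ratios are invented illustrative numbers, not data from the article.

# Engine speed for a given wheel speed: discrete 5-speed box vs. ideal CVT.
# All numbers are illustrative assumptions.
WHEEL_RPM = 800                     # wheel speed to be held
TARGET_ENGINE_RPM = 2200            # assumed most efficient engine speed
GEARS = [3.5, 2.1, 1.4, 1.0, 0.8]   # discrete overall gear ratios
CVT_RANGE = (0.5, 3.5)              # continuously variable ratio range

# Discrete box: pick the gear whose resulting engine speed is closest to target.
best = min(GEARS, key=lambda g: abs(g * WHEEL_RPM - TARGET_ENGINE_RPM))
print("5-speed:", best * WHEEL_RPM, "rpm (ratio", best, ")")

# CVT: choose the exact ratio (clamped to its range) that hits the target.
ratio = min(max(TARGET_ENGINE_RPM / WHEEL_RPM, CVT_RANGE[0]), CVT_RANGE[1])
print("CVT:   ", round(ratio * WHEEL_RPM), "rpm (ratio", round(ratio, 2), ")")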
High-availability power systems: Redundancy options

Introduction
In applications such as major computer installations, process control in chemical plants, safety monitors and the intensive care units of hospitals, even a temporary power failure may lead to large economic losses. For such critical loads, it is of paramount importance to use UPS systems. But all UPS equipment must be completely de-energized for preventive maintenance at least once per year, which limits the availability of the power system. New UPS systems are now on the market that permit concurrent maintenance.

High-Availability Power Systems

The computing industry talks in terms of "nines" of availability. This refers to the percentage of time in a year that a system is functional and available to do productive work. A system with four nines is 99.99 percent available, meaning that downtime is less than 53 minutes in a standard 365-day year. Five nines (99.999 percent availability) equates to less than 5.3 minutes of downtime per year, and six nines (99.9999 percent availability) to just 32 seconds of downtime per year. These same numbers apply when we speak of availability of conditioned power. The goal is to maximize the availability of conditioned power and minimize exposure to unconditioned utility power. The concept of continuous availability of conditioned power takes this one step further; after all, 100 percent is greater than 99.99999 percent.
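
The downtime figures quoted above follow directly from the availability percentages; a quick check in Python:

# Downtime per year implied by an availability figure ("nines").
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.9999, 0.99999, 0.999999):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.6%} available -> {downtime_min:6.2f} min/yr "
          f"({downtime_min * 60:6.1f} s/yr)")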

The Road To Continuous Availability
We determine availability by studying four key elements:

o Reliability
The individual UPS modules, static transfer switches and other power distribution equipment must be incredibly reliable, as measured by field-documented MTBF (Mean Time Between Failures). In addition, the system elements must be designed and assembled in a way that minimizes the complexity and single points of failure.

o Functionality
The UPS must be able to protect the critical load from the full range of power disturbances, and only a true double-conversion UPS can do this. Some vendors offer single-conversion (line-interactive) three-phase UPS products as a lower-cost alternative. However, these alternative UPSs do not protect against all disturbances, including power system short circuits, frequency variations, harmonics and common-mode noise. If your critical facility is truly critical, only a true double-conversion UPS is suitable.

o Maintainability
The system design must permit concurrent maintenance of all power system components, supporting the load with part of the UPS system while other parts are being serviced. As we shall see, single bus solutions do not completely support concurrent maintenance.

o Fault Tolerance
The system must have fault resiliency to cope with a failure of any power system component without affecting the operation of the critical load equipment. Furthermore, the power distribution system must have fault resiliency to survive the inevitable load faults and human error. The two factors of field-proven critical bus MTBF in excess of one million hours and double-conversion technology ensure reliability and functionality. With reliability and functionality assured, let us look at how different UPS system configurations compare for maintainability and fault tolerance.
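
For a single repairable element, a common first-order estimate of steady-state availability is MTBF / (MTBF + MTTR). The sketch below uses the one-million-hour critical-bus MTBF quoted above together with an assumed four-hour repair time; the repair time is purely illustrative.

# First-order availability of a single repairable element from MTBF and MTTR.
MTBF_H = 1_000_000   # hours, field-proven critical-bus MTBF quoted above
MTTR_H = 4           # hours, assumed mean time to repair (illustrative)

availability = MTBF_H / (MTBF_H + MTTR_H)
print(f"availability = {availability:.7f}")   # about 0.999996, roughly five nines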
IGCT

Introduction
Thyristor technology is inherently superior to transistor technology for blocking voltages above 2.5 kV, with plasma distributions equal to those of diodes offering the best trade-off between on-state and blocking voltages. Until the introduction of newer power switches, the only serious contenders for high-power transportation systems and other applications were the GTO (a thyristor), with its cumbersome snubbers, and the IGBT (a transistor), with its inherently high losses. Until now, adding the gate turn-off feature has left the GTO constrained by a variety of unsatisfactory compromises. The widely used standard GTO drive technology results in inhomogeneous turn-on and turn-off that calls for costly dv/dt and di/dt snubber circuits combined with bulky gate drive units.

Rooted in the GTO is one of the newest power switches, the Gate-Commutated Thyristor (GCT). It successfully combines the best of the thyristor and transistor characteristics while fulfilling the additional requirements of manufacturability and high reliability. The GCT is a semiconductor device based on the GTO structure whose cathode emitter can be shut off "instantaneously", thereby converting the device from a low-conduction-drop thyristor into a low-switching-loss, high-dv/dt bipolar transistor at turn-off.

The IGCT (Integrated GCT) is the combination of the GCT device and a low-inductance gate unit. This technology extends transistor switching performance to well above the MW range, with 4.5 kV devices capable of turning off 4 kA and 6 kV devices capable of turning off 3 kA without snubbers. The IGCT represents the optimum combination of low-loss thyristor technology and snubberless, gate-effected turn-off for demanding medium- and high-voltage power electronics applications. In the turn-off waveforms, the thick line shows the variation of the anode voltage and the lighter line the variation of the anode current during the IGCT turn-off process.

The GTO and the thyristor are four-layer (npnp) devices. As such, they have only two stable points in their characteristics, 'on' and 'off'. Every state in between is unstable and results in current filamentation. The inherent instability is worsened by processing imperfections. This has led to the widely accepted myth that a GTO cannot be operated without a snubber. Essentially, the GTO has to be reduced to a stable pnp device, i.e. a transistor, for the few critical microseconds during turn-off.

To stop the cathode (n) emitter from taking part in the process, the bias of the cathode n-p junction has to be reversed before voltage starts to build up at the main junction. This calls for commutation of the full load current from the cathode (n) to the gate (p) within one microsecond. Thanks to a new housing design, 4000 A/µs can be achieved with a low-cost 20 V gate unit. Current filamentation is totally suppressed, and the turn-off waveforms and safe operating area are identical to those of a transistor.
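
The 20 V gate unit and the 4000 A/µs commutation rate quoted above fix the gate-loop inductance budget through V = L·di/dt; the short calculation below just works out that consequence (and the time needed to commutate a 4 kA load current at that rate).

# Gate-loop inductance budget from V = L * di/dt, using the figures above.
V_GATE = 20.0        # V, gate unit voltage
DI_DT = 4000e6       # A/s (4000 A/us)
I_LOAD = 4000.0      # A, load current to commutate from cathode to gate

L_max_nH = V_GATE / DI_DT * 1e9
t_us = I_LOAD / (DI_DT / 1e6)
print(f"gate loop inductance must stay below ~{L_max_nH:.0f} nH")   # ~5 nH
print(f"time to commutate {I_LOAD:.0f} A: ~{t_us:.1f} us")          # ~1 us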

IGCT technology brings together the power handling device (GCT) and the device control circuitry (freewheeling diode and gate drive) in an integrated package. By offering four levels of component packaging and integration, it permits simultaneous improvement in four interrelated areas: low switching and conduction losses at medium voltage, simplified circuitry for operating the power semiconductor, reduced power system cost, and enhanced reliability and availability. Also, by providing pre-engineered switch modules, IGCT enables medium-voltage equipment designers to develop their products faster.

#2
Low Power UART Design for Serial Data Communication

Introduction
With the proliferation of portable electronic devices, power-efficient data transmission has become increasingly important. For serial data transfer, universal asynchronous receiver/transmitter (UART) circuits are often implemented because of their inherent design simplicity and application-specific versatility. Components such as laptop keyboards, Palm Pilot organizers and modems are a few examples of devices that employ UART circuits. In this work, the design and analysis of a robust UART architecture has been carried out to minimize power consumption during both idle and continuous modes of operation.

UART

A UART (universal asynchronous receiver/transmitter) is responsible for performing the main task in serial communications with computers. The device changes incoming parallel information to serial data which can be sent on a communication line; a second UART can be used to receive the information. The UART performs all the tasks (timing, parity checking, etc.) needed for the communication. The only extra devices attached are line driver chips capable of transforming the TTL-level signals to line voltages and vice versa.
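
As a sketch of what the transmitter side does with each byte, the function below builds an asynchronous serial frame: a start bit, eight data bits sent LSB first, an optional even-parity bit and a stop bit. These framing choices are common defaults assumed here for illustration, not settings prescribed by the text.

# Build one asynchronous serial frame for a byte: start bit (0),
# 8 data bits LSB first, optional even-parity bit, stop bit (1).
def uart_frame(byte, parity=True):
    bits = [0]                                   # start bit
    data = [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits += data
    if parity:
        bits.append(sum(data) % 2)               # even parity bit
    bits.append(1)                               # stop bit
    return bits

print(uart_frame(ord("A")))   # 'A' = 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]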

To use the device in different environments, registers are accessible to set or review the communication parameters. Settable parameters include the communication speed, the type of parity check, and the way incoming information is signalled to the running software.

UART types

Serial communication on PC compatibles started with the 8250 UART in the XT. In the years after, new family members were introduced, such as the 8250A and 8250B revisions and the 16450. The last of these was first implemented in the AT, because the higher bus speed of that computer could not be handled by the 8250 series. The differences between these first UART series were rather minor. The most important property changed with each new release was the maximum allowed speed at the processor bus side. The 16450 was capable of handling a communication speed of 38.4 kbit/s without problems.

The demand for higher speeds led to the development of newer series which would be able to release the main processor from some of its tasks. The main problem with the original series was the need to perform a software action for each single byte to transmit or receive. To overcome this problem, the 16550 was released which contained two on-board FIFO buffers, each capable of storing 16 bytes. One buffer for incoming, and one buffer for outgoing bytes.
Light Emitting Polymers (LEP)

Introduction
Light emitting polymers, or polymer-based light emitting diodes, discovered by Friend et al. in 1990, have been found superior to other displays such as liquid crystal displays (LCDs), vacuum fluorescence displays and electroluminescence displays. Though not yet commercialised, they have proved to be a milestone in the field of flat panel displays. Research on LEPs is under way at Cambridge Display Technology Ltd (CDT) in the UK.

In the last decade, several other display contenders such as plasma and field emission displays were hailed as the solution to the pervasive display. Like LCD they suited certain niche applications, but failed to meet broad demands of the computer industry.

Today the trend is towards non-CRT flat panel displays. As LEDs are inexpensive devices, they can be extremely handy in constructing flat panel displays. The idea was to combine the characteristics of a CRT with the performance of an LCD, and the added design benefits of formability and low power. Cambridge Display Technology Ltd is developing a display medium with exactly these characteristics.

The technology uses a light-emitting polymer (LEP) that costs much less to manufacture and run than CRTs because the active material used is plastic. An LEP is a polymer that emits light when a voltage is applied to it. The structure comprises a thin-film semiconducting polymer sandwiched between two electrodes, an anode and a cathode. When electrons and holes are injected from the electrodes, these charge carriers recombine, leading to the emission of light that escapes through the glass substrate.
Cruise Control Devices

Introduction

Every day the media bring us horrible news of road accidents. One report estimated that the damaged property and other costs may equal 3% of the world's gross domestic product. The concept of assisting the driver in longitudinal vehicle control to avoid collisions has been a major focal point of research at many automobile companies and research organizations. The idea of driver assistance started with the 'cruise control devices' that first appeared in the 1970s in the USA. When switched on, this device takes up the task of accelerating or braking to maintain a constant speed, but it cannot take account of the other vehicles on the road.

An 'Adaptive Cruise Control' (ACC) system, developed as the next generation, assists the driver in keeping a safe distance from the vehicle in front. This system is currently available only in some luxury cars such as the Mercedes S-class, Jaguar and Volvo trucks. The U.S. Department of Transportation and Japan's ACAHSR have started developing 'intelligent vehicles' that can communicate with each other with the help of a system called 'Cooperative Adaptive Cruise Control'. This paper addresses the concept of Adaptive Cruise Control and its improved versions.

ACC works by detecting the distance and speed of the vehicles ahead using either a lidar system or a radar system [1, 2]. The time between transmission and reception is the key to the distance measurement, while the shift in frequency of the reflected beam due to the Doppler effect is measured to determine the speed. Based on these measurements, the brake and throttle are controlled to keep the vehicle in a safe position with respect to the vehicle ahead.
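
Both measurements described above reduce to two short formulas: range from the round-trip time (d = c·t/2) and relative speed from the Doppler shift (v ≈ c·Δf / (2·f)). The carrier frequency, echo delay and Doppler shift used below are invented illustrative numbers.

# Range from round-trip time and closing speed from Doppler shift (sketch).
C = 3.0e8              # m/s, propagation speed of the radar/lidar beam

round_trip_s = 400e-9  # s, assumed echo delay
f_carrier = 77e9       # Hz, assumed automotive radar carrier
doppler_hz = 5.1e3     # Hz, assumed measured Doppler shift

distance = C * round_trip_s / 2
closing_speed = C * doppler_hz / (2 * f_carrier)
print(f"distance ~ {distance:.0f} m, closing speed ~ {closing_speed:.1f} m/s")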

These systems are characterized by a moderately low level of brake and throttle authority. These are predominantly designed for highway applications with rather homogenous traffic behavior. The second generation of ACC is the Stop and Go Cruise Control (SACC) [2] whose objective is to offer the customer longitudinal support on cruise control at lower speeds down to zero velocity [3]. The SACC can help a driver in situations where all lanes are occupied by vehicles or where it is not possible to set a constant speed or in a frequently stopped and congested traffic [2].

There is a clear distinction between ACC and SACC with respect to stationary targets. The ACC philosophy is that it will be operated on well-structured roads with an orderly traffic flow and vehicle speeds around 40 km/h [3]. The SACC system, on the other hand, must be able to deal with stationary targets, because within its area of operation it will encounter such objects very frequently.
Boiler Instrumentation and Controls

Introduction
Instrumentation and controls in a boiler plant encompass an enormous range of equipment, from the simple in an industrial plant to the complex in a large utility station. The boiler control system is the means by which the balance of energy and mass into and out of the boiler is achieved. The inputs are fuel, combustion air, atomizing air or steam, and feedwater. Of these, fuel is the major energy input and combustion air is the major mass input. The outputs are steam, flue gas, blowdown, radiation and soot blowing.

CONTROL LOOPS

Boiler control systems contain several variables, with interaction occurring among the control loops for fuel, combustion air and feedwater. The overall system can generally be treated as a series of basic control loops connected together. For safety purposes, fuel addition should be limited by the amount of combustion air, and it may need minimum limiting for flame stability.

Combustion controls

The amounts of fuel and air must be carefully regulated to keep excess air within close tolerances, especially across the load range. This is critical to efficient boiler operation no matter what the unit size, type of fuel fired or control system used.

Feedwater control

Industrial boilers are subject to wide load variations and require quick-responding control to maintain a constant drum level. Multiple-element feedwater control provides faster and more accurate control response.
Single Photon Emission Computed Tomography (SPECT)

Introduction
Emission computed tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single photon emission computed tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron emission tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym for single photon emission computed tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical, such as ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides administered to the patient's body.

SPECT dates from the early 1960s, when the idea of emission transverse section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first single-photon ECT (SPECT) imaging device was developed by Kuhl and Edwards, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s.

SPECT is short for single photon emission computed tomography. As its name suggests (single photon emission) gamma rays are the sources of the information rather than X-ray emission in the conventional CT scan.

Similar to X-ray, CT, MRI and other modalities, SPECT allows us to visualize functional information about a patient's specific organ or body system.

Internal radiation is administered by means of a pharmaceutical which is labelled with a radioactive isotope. This radiopharmaceutical decays, resulting in the emission of gamma rays. These gamma rays give us a picture of what is happening inside the patient's body.

The gamma rays are detected using the most essential tool in nuclear medicine, the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image, or in SPECT imaging to acquire a 3-D image.

FRAM

Introduction
Before the 1950s, ferromagnetic cores were the only type of random-access, nonvolatile memory available. A core memory is a regular array of tiny magnetic cores that can be magnetized in one of two opposite directions, making it possible to store binary data in the form of a magnetic field. The success of the core memory was due to a simple architecture that resulted in a relatively dense array of cells. This approach was emulated in the semiconductor memories of today (DRAMs, EEPROMs and FRAMs).

Ferromagnetic cores, however, were too bulky and expensive compared to the smaller, low-power semiconductor memories, and ferroelectric memories are a good substitute for them. The term "ferroelectric" indicates the similarity to ferromagnetism, despite the lack of iron in the materials themselves.

Ferroelectric memories exhibit short programming times, low power consumption and nonvolatility, making them highly suitable for applications such as contactless smart cards and digital cameras, which demand many memory write operations. In other words, FRAM has the features of both RAM and ROM. A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) process with added layers on top for the ferroelectric capacitors.

A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one or two transistors that provide access to the capacitor or amplify its content for a read operation. A ferroelectric capacitor differs from a regular capacitor in that the dielectric is replaced by a ferroelectric material (lead zirconate titanate, PZT, is commonly used). When an electric field is applied and the charges displace from their original positions, spontaneous polarization occurs and the displacement becomes evident in the crystal structure of the material.

Importantly, the displacement does not disappear in the absence of the electric field. Moreover, the direction of polarization can be reversed or reoriented by applying an appropriate electric field. A hysteresis loop for a ferroelectric capacitor displays the total charge on the capacitor as a function of the applied voltage. It behaves similarly to that of a magnetic core, except for the less sharp transitions around its coercive points, which implies that even a moderate voltage can disturb the state of the capacitor.

One remedy for this is to modify the ferroelectric memory cell by including a transistor in series with the ferroelectric capacitor. Called an access transistor, it controls access to the capacitor and eliminates the need for a square-like hysteresis loop, compensating for the softness of the hysteresis loop characteristics and blocking unwanted disturb signals from neighboring memory cells.
Wireless Fidelity

Introduction
Wi-Fi, or Wireless Fidelity, is freedom: it allows you to connect to the Internet from your couch at home, in a hotel room or in a conference room at work, without wires. Wi-Fi is a wireless technology like a cell phone. Wi-Fi-enabled computers send and receive data indoors and out, anywhere within range of a base station. And best of all, it is fast.

However, you only have true freedom to be connected anywhere if your computer is configured with a Wi-Fi CERTIFIED radio (a PC card or similar device). Wi-Fi certification means that you will be able to connect anywhere there are other Wi-Fi CERTIFIED products, whether you are at home, in the office, or in airports, coffee shops and other public areas equipped with Wi-Fi access. Wi-Fi will be a major force behind hotspots, to a much greater extent; more than 400 airports and hotels in the US are targeted as Wi-Fi hotspots.

The Wi-Fi CERTIFIED logo is your only assurance that the product has met rigorous interoperability testing requirements to assure products from different vendors will work together. The Wi-Fi CERTIFIED logo means that it is a "safe" buy.
Wi-Fi certification comes from the Wi-Fi Alliance, a non-profit international trade organisation that tests 802.11-based wireless equipment to make sure that it meets the Wi-Fi standard and works with all other manufacturers' Wi-Fi equipment on the market. The Wi-Fi Alliance (formerly WECA) also has a Wi-Fi certification program for Wi-Fi products that meet interoperability standards. It is an international organisation devoted to certifying the interoperability of 802.11 products and to promoting 802.11 as the global wireless LAN standard across all market segments.

IEEE 802.11 ARCHITECTURES

In the IEEE's proposed standard for wireless LANs (IEEE 802.11), there are two different ways to configure a network: ad-hoc and infrastructure. In the ad-hoc network, computers are brought together to form a network "on the fly." As shown in Figure 1, there is no structure to the network; there are no fixed points; and usually every node is able to communicate with every other node. A good example of this is the aforementioned meeting where employees bring laptop computers together to communicate and share design or financial information. Although it seems that order would be difficult to maintain in this type of network, algorithms such as the spokesman election algorithm (SEA) [4] have been designed to "elect" one machine as the base station (master) of the network, with the others being slaves. Another algorithm used in ad-hoc network architectures relies on broadcasting and flooding to all other nodes to establish who's who.
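
The text does not describe the internals of the spokesman election algorithm, so the sketch below uses the simplest possible stand-in rule (the node with the highest ID becomes the master, the rest become slaves) purely to illustrate the flavour of master election in an ad-hoc network.

# Toy master election for an ad-hoc network: the node with the highest ID
# becomes the base station (master); all others become slaves.
# This is a simplified stand-in, not the actual SEA algorithm.
def elect_master(node_ids):
    master = max(node_ids)
    slaves = [n for n in node_ids if n != master]
    return master, slaves

master, slaves = elect_master([17, 42, 8, 23])
print("master:", master, "slaves:", slaves)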
Power System Contingencies

Introduction
Power system voltage control has a hierarchical structure with three levels: primary, secondary and tertiary voltage control. Over the past 20 years, one of the most successful measures proposed to improve power system voltage regulation has been the application of secondary voltage control, initiated by the French electricity company EDF and followed by some other electricity utilities in European countries.

Secondary voltage control closes the control loop around the reference value settings of the controllers at the primary level. The primary objective of secondary voltage control is to achieve better voltage regulation in power systems. In addition, it brings the extra benefit of improving power system voltage stability, and for this application several methods to design secondary voltage controllers have been proposed.

The concept of secondary voltage control is explored here for a new application: the elimination of voltage violations in power system contingencies. For this particular application, the coordination of the various secondary voltage controllers is proposed to be based on a multi-agent, request-and-answer type of protocol between any two agents. The resulting secondary voltage control can only cover the locations where voltage controllers are installed. This paper presents the results of significant progress in investigating this new application of secondary voltage control to eliminate voltage violations in power system contingencies.

A collaboration protocol, expressed graphically as a finite state machine, is proposed for the coordination among multiple FACTS voltage controllers. The coordinated secondary voltage control is suggested to cover multiple locations, to eliminate voltage violations in the locations adjacent to a voltage controller. A novel scheme of learning fuzzy logic control is proposed for the design of the secondary voltage controller. A key parameter of the learning fuzzy logic controller is proposed to be trained through off-line simulation with the injection of artificial loads at the controller's adjacent locations.

FACTS (Flexible AC Transmission Systems)
Sudden changes in the power demands or changes in the system conditions in the power system are often followed by prolonged electromechanical oscillations leading to power system instability. AC transmission lines are dominantly reactive networks characterized by their per mile series inductance and shunt capacitances. Suitably changing the line impedance and thus the real and reactive power flow through the transmission line is an effective measure for controlling the power system oscillations and thereby improving the system stability.

Advances in high-power semiconductors and sophisticated electronic control technologies have led to the development of FACTS. Through FACTS, the effective line impedance can be controlled within a few milliseconds. Damping of power system oscillations is possible through effective changes in the line impedance by employing FACTS devices (SVC, STATCOM, UPFC, etc.).
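
The steady-state power transferred over a largely reactive line is commonly written P = V1·V2·sin(δ)/X, which is why changing the effective series reactance X (as series FACTS devices do) changes the power flow. The voltages, angle and reactance values below are assumed for illustration only.

# Power transfer over a reactive line: P = V1 * V2 * sin(delta) / X.
# Lowering the effective series reactance raises the transferable power.
import math

V1 = V2 = 400e3                # V, end voltages (assumed)
delta = math.radians(20)       # power angle (assumed)

for X in (100.0, 80.0, 60.0):  # ohm, effective series reactance
    P = V1 * V2 * math.sin(delta) / X
    print(f"X = {X:5.1f} ohm -> P ~ {P / 1e6:5.0f} MW")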

#3

Introduction to Bioinformatics


Secondary structure prediction
History and Context
Chou and Fasman
Garnier-Osguthorpe-Robson
Comparison of Methods
Newer Approaches

Protein Sequence Analysis

Secondary Structure Prediction
The primary structure of a protein is the sequence of amino acids from which it is constructed. There are 20 naturally occurring amino acids. All amino acids have a common chemical structure: a tetrahedral (sp3) carbon atom (C_alpha) to which four asymmetric groups are connected: an amino group (NH2), a carboxy group (COOH), an H atom and another chemical group (denoted by R) which varies from one amino acid to another. Two amino acids connect via a peptide bond to form a polypeptide structure, the protein. The peptide bond is formed by a condensation reaction between the amino and carboxy groups, which releases a water molecule and forms a covalent bond between them. Due to a partial double-bond character, the peptide unit NH-CO is planar and is always in a trans configuration. The peptide unit together with the C_alpha is termed the backbone, and the residue R is termed the side chain, which differs from one amino acid to another. As the central C_alpha atom has four different groups connected to it, it is chiral; all naturally occurring amino acids are L amino acids.

Each amino acid contains an amine group (NH2) and a carboxy group (COOH); the amino acids vary in their side chains. Eight of the amino acids are nonpolar and hydrophobic, while the others are polar and hydrophilic ("water loving"). Two amino acids are acidic (a carboxy group in the side chain) and three are basic (an amine group in the side chain).
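
The classification above can be restated compactly in code. The grouping used here (one-letter codes) is one common textbook convention chosen to match the counts given above; boundary cases such as glycine and histidine are assigned differently in some sources.

# One common grouping of the 20 amino acids (one-letter codes) that matches
# the counts above; some textbooks place borderline residues differently.
CLASSES = {
    "nonpolar / hydrophobic": "AVLIPFMW",   # eight residues
    "acidic":                 "DE",          # two residues
    "basic":                  "KRH",         # three residues
}
ALL_20 = set("ACDEFGHIKLMNPQRSTVWY")
others = ALL_20 - set("".join(CLASSES.values()))
CLASSES["polar / hydrophilic (others)"] = "".join(sorted(others))

for name, residues in CLASSES.items():
    print(f"{name:30s}: {' '.join(residues)}")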

For more information about this article, please follow the link:

http://googleurl?sa=t&source=web&cd=1&ve...2Fsdsc.edu%2F gribskov%2Fbimm140%2Flectures%2F2003.ProtStruc.a.pdf&ei=TSu5TNqUBc-TjAfglfmpDg&usg=AFQjCNGixPIzExKkxeA5NYHaZYj4nZ6qrw