Bio-molecular Computing
#1

Bio-molecular Computing
CHAPTER 1
INTRODUCTION
Computer chip manufacturers are furiously racing to make the next microprocessor that will topple speed records. Sooner or later, though, this competition is bound to hit a wall. Microprocessors made of silicon will eventually reach their limits of speed and miniaturization. Chip makers need a new material to produce faster computing speeds.
You won't believe where scientists have found the new material they need to build the next generation of microprocessors. Millions of natural supercomputers exist inside living organisms, including your own body. They are nothing other than bio-molecules themselves, especially DNA. DNA (deoxyribonucleic acid) molecules, the material our genes are made of, have the potential to perform calculations many times faster than the world's most powerful human-built computers. Other bio-molecules, such as nucleotides, saccharides, lignin, lipids, and amino acids, are also of interest.
1.1 What is a DNA Computer?
Research in the development of DNA computers is really only at its beginning stages, so a specific answer isn't yet available. But the general sense of such a computational device is to use the DNA molecule as a model for its construction.
Although the feasibility of molecular computers remains in doubt, the field has opened new horizons and important new research problems, both for computer scientists and biologists. Computer scientists and mathematicians are looking for new models of computation in which the computation is carried out in a test tube.
The massive parallelism of DNA strands may help to deal with computational problems that are beyond the reach of ordinary digital computers -- not because the DNA strands are smarter, but because they can make many tries at once. It's the parallel nature of the beast. For the biologist, the unexpected results in DNA computing indicate that models of DNA computers could be significant for the study of important biological problems such as evolution. Also, the techniques of DNA manipulation developed for computational purposes could find applications in genetic engineering.
DNA computers can't be found at your local electronics store yet. The technology is still in development, and did not even exist as a concept a decade earlier. In 1994, LEONARD ADLEMAN introduced the idea of using DNA to solve complex mathematical problems. Adleman, a computer scientist at the University of Southern California, came to the conclusion that DNA had computational potential after reading the book MOLECULAR BIOLOGY OF THE GENE, written by JAMES WATSON, who co-discovered the structure of DNA in 1953. In fact, DNA is quite similar to a computer: it stores permanent information about your genes much as a hard drive stores data.
CHAPTER 2
HAMILTON PATH PROBLEM
Adleman is often called the inventor of the DNA computer. His article in a 1994 issue of the journal Science outlined how to use DNA to solve a well-known mathematical problem, called the Directed Hamilton Path problem, a close relative of the Traveling Salesman Problem. The goal of the problem is to find a route between a number of cities, going through each city only once. As you add more cities, the problem becomes harder.
Figure 2.1 shows a diagram of the Hamilton path problem. The objective is to find a path from start to end going through all the points only once. This problem is difficult for conventional (serial logic) computers because they must try each path one at a time. It is like having a whole bunch of keys and trying to see which one fits a lock. Conventional computers are very good at math, but poor at key-into-lock problems. DNA-based computers can try all the keys at the same time (massively parallel) and thus are very good at key-into-lock problems, but much slower at simple mathematical problems like multiplication. The Hamilton path problem was chosen because every key-into-lock problem can be recast as a Hamilton path problem.
The following algorithm solves the Hamilton Path Problem, regardless of the type of computer used.
1. Generate random paths through the graph.
2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).
3. Because the graph has 7 cities, keep only those paths with 7 cities.
4. Keep only those paths that enter all cities at least once.
5. Any remaining paths are solutions.
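On a conventional computer, the five steps above amount to a brute-force filter over candidate paths. The sketch below illustrates this on a hypothetical seven-city graph (the edge set is made up for illustration; the actual edges of Figure 2.1 are not reproduced in this report):

```python
import itertools

# Hypothetical directed edges on seven cities A..G (illustrative only;
# not the exact edges of Figure 2.1).
edges = {
    ("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"),
    ("C", "E"), ("D", "E"), ("D", "F"), ("E", "F"), ("E", "G"), ("F", "G"),
}
cities = "ABCDEFG"

def hamilton_paths(edges, cities, start="A", end="G"):
    solutions = []
    # Step 1: generate candidate paths (here, every ordering of the cities,
    # which also guarantees Step 3: exactly 7 cities, and Step 4: each
    # city entered exactly once).
    for perm in itertools.permutations(cities):
        # Step 2: keep paths that begin at the start city and end at the end city.
        if perm[0] != start or perm[-1] != end:
            continue
        # Keep only orderings whose consecutive pairs are actual edges,
        # i.e. genuine paths through the graph.
        if all((a, b) in edges for a, b in zip(perm, perm[1:])):
            # Step 5: anything remaining is a solution.
            solutions.append("".join(perm))
    return solutions

print(hamilton_paths(edges, cities))  # ['ABCDEFG']
```

The serial machine inspects the candidate orderings one after another, which is exactly the one-key-at-a-time behavior the text describes; a DNA computer would, in effect, test all of them at once.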
2.1 Solving the problem using DNA
The key to solving the problem was using DNA to perform the five steps of the above algorithm.
These interconnecting blocks can be used to model DNA:
DNA likes to form long double helices:
The two helices are joined by bases, which will be represented by colored blocks. Each base binds only to one other specific base. In our example, we will say that each colored block will bind only with a block of the same color. For example, if we only had red blocks, they would form a long chain like this:
Any other color will not bind with red.
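In real DNA the "colors" are the four bases, and binding follows Watson-Crick pairing: A binds only T, and G binds only C. A minimal sketch of this matching rule (ignoring strand orientation for simplicity):

```python
# Watson-Crick pairing: A binds T, G binds C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the strand that would bind base-by-base to `strand`."""
    return "".join(PAIR[base] for base in strand)

def binds(strand_a: str, strand_b: str) -> bool:
    """Two strands of equal length bind if every base pairs up.
    (A toy rule: real binding is antiparallel and tolerates mismatches.)"""
    return len(strand_a) == len(strand_b) and complement(strand_a) == strand_b

print(binds("ACGT", "TGCA"))  # True: every base pairs with its partner
print(binds("ACGT", "TGCT"))  # False: the last base clashes
```

This all-or-nothing matching is the mechanism that lets only the intended pieces snap together in the test tube.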
CHAPTER 3
PROGRAMMING OF THE PROBLEM USING DNA
STEP 1: Create a unique DNA sequence for each city A through G. For each path, for example from A to B, create a linking piece of DNA that matches the last half of A and the first half of B:
Here the red block represents city A, while the orange block represents city B. The half-red, half-orange block connecting the two other blocks represents the path from A to B.
In a test tube, all the different pieces of DNA will randomly link with each other, forming paths through the graph.
STEP 2: Because it is difficult to "remove" DNA from solution, the target DNA (the DNA which started at A and ended at G) was copied over and over again until the test tube contained a lot of it relative to the other random sequences. This is essentially the same as removing all the other pieces. Imagine a sock drawer which initially contains one or two colored socks. If you put in a hundred black socks, the chances are that all you will get when you reach in is a black sock.
STEP 3: Going by weight, the DNA sequences which were 7 "cities" long were separated from the rest. A "sieve" was used which allows smaller pieces of DNA to pass quickly, while larger segments are slowed down. The procedure actually allows you to isolate the pieces which are precisely 7 cities long from any shorter or longer paths.
STEP 4: To ensure that the remaining sequences went through each of the cities, sticky pieces of DNA attached to magnets were used to separate the DNA. The magnets ensured that the target DNA remained in the test tube while the unwanted DNA was washed away. First, the magnets kept all the DNA which went through city A in the test tube, then B, then C, then D, and so on. In the end, the only DNA which remained in the tube was that which went through all seven cities.
STEP 5: All that was left was to sequence the DNA, revealing the path from A to B to C to D to E to F to G.
CHAPTER 4
WORKING OF DNA
DNA is the major information storage molecule in living cells, and billions of years of evolution have tested and refined both this wonderful informational molecule and highly specific enzymes that can either duplicate the information in DNA molecules or transmit this information to other DNA molecules.
Instead of using electrical impulses to represent bits of information, the DNA computer uses the chemical properties of these molecules, examining the patterns of combination or growth of the molecules or strands. DNA can do this through the manufacture of enzymes, which are biological catalysts that could be called the 'software' used to execute the desired calculation.
DNA computers use deoxyribonucleic acids--A (adenine), C (cytosine), G (guanine) and T (thymine)--as the memory units, and recombinant DNA techniques already in existence carry out the fundamental operations. In a DNA computer, computation takes place in test tubes or on a glass slide coated in 24K gold. The input and output are both strands of DNA, whose genetic sequences encode certain information. A program on a DNA computer is executed as a series of biochemical operations, which have the effect of synthesizing, extracting, modifying and cloning the DNA strands.
The only fundamental difference between conventional computers and DNA computers is the capacity of the memory units: electronic computers have two positions (on or off), whereas DNA has four (C, G, A or T). The study of bacteria has shown that restriction enzymes can be employed to cut DNA at a specific word (W). Many restriction enzymes cut the two strands of double-stranded DNA at different positions, leaving overhangs of single-stranded DNA. Two pieces of DNA may be rejoined if their terminal overhangs are complementary. These complementary overhangs are referred to as 'sticky ends'. Using these operations, fragments of DNA may be inserted into or deleted from the DNA.
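The sticky-end rejoining rule can be sketched as a string check: two overhangs match if one reads as the complement of the other in reverse, since the two strands run antiparallel. The EcoRI example below is a real restriction enzyme; the helper function itself is only an illustrative abstraction:

```python
# Watson-Crick pairing table.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def sticky_ends_match(overhang_a: str, overhang_b: str) -> bool:
    """Two fragments can be ligated if their single-stranded overhangs
    are complementary, reading one of them in reverse (the strands are
    antiparallel)."""
    return overhang_a == "".join(PAIR[b] for b in reversed(overhang_b))

# EcoRI cuts the site GAATTC asymmetrically, leaving 5'-AATT overhangs.
# The overhang is palindromic, so two EcoRI-cut ends match each other.
print(sticky_ends_match("AATT", "AATT"))  # True
print(sticky_ends_match("AATT", "GGCC"))  # False: incompatible overhangs
```

This is the matching step that decides which fragments may be inserted or rejoined in the operations described above.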
DNA microarrays, or DNA chips, are fabricated by high-speed robotics, generally on glass but sometimes on nylon substrates. Probes with known identity are used to determine complementary binding, thus allowing massively parallel gene expression and gene discovery studies. An experiment with a single DNA chip can provide researchers information on thousands of genes simultaneously - a dramatic increase in throughput.
There are two major application forms for DNA microarray technology: 1) identification of sequence (gene / gene mutation); and 2) determination of expression level (abundance) of genes.
5.3 EFFICIENCY
In both the solid-surface glass-plate approach and the test tube approach, each DNA strand represents one possible answer to the problem that the computer is trying to solve. The strands have been synthesized by combining the building blocks of DNA, called nucleotides, with one another, using techniques developed for biotechnology. The set of DNA strands is manufactured so that all conceivable answers are included. Because a set of strands is tailored to a specific problem, a new set would have to be made for each new problem.
Most electronic computers operate linearly: they manipulate one block of data after another. Biochemical reactions, by contrast, are highly parallel: a single step of biochemical operations can be set up so that it affects trillions of DNA strands. While a DNA computer takes much longer than a normal computer to perform each individual calculation, it performs an enormous number of operations at a time and requires less energy and space than normal computers. A thousand litres of water could contain DNA with more memory, and a pound of DNA more computing power, than all the computers ever made.
The Restricted model of DNA computing in test tubes is simplified to three operations:
Separate: Isolate a subset of DNA from a sample.
Merge: Pour two test tubes into one to perform a union.
Detect: Confirm the presence or absence of DNA in a given test tube.
Despite these restrictions, this model can still solve Hamiltonian Path problems.
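The three operations of the restricted model can be sketched as operations on multisets of strings, where each test tube is a collection of DNA strands (a toy abstraction, not a lab protocol; the class and method names are our own):

```python
from collections import Counter

class TestTube:
    """Toy model of a test tube as a multiset of DNA strands."""
    def __init__(self, strands=()):
        self.strands = Counter(strands)

    def separate(self, predicate):
        """Isolate the subset of strands satisfying `predicate`:
        they are removed from this tube and returned in a new one."""
        kept = TestTube()
        for strand, count in list(self.strands.items()):
            if predicate(strand):
                kept.strands[strand] += count
                del self.strands[strand]
        return kept

    def merge(self, other):
        """Pour `other` into this tube (multiset union by addition)."""
        self.strands += other.strands

    def detect(self):
        """Report whether the tube contains any DNA at all."""
        return sum(self.strands.values()) > 0

tube = TestTube(["ACG", "ACG", "TTA"])
with_a = tube.separate(lambda s: s.startswith("A"))
print(with_a.detect(), tube.detect())  # True True ("TTA" remains behind)
```

A Hamiltonian Path computation in this model is a fixed sequence of separate/merge steps followed by a single final detect.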
Error control can also be achieved mainly through logical operations, such as running all DNA samples showing positive results a second time to reduce false positives. Some molecular proposals, such as using DNA with a peptide backbone for stability, have also been recommended.

#2
ABSTRACT
BIO MOLECULAR COMPUTING

Biomolecular computing, computation performed by biomolecules, is challenging traditional approaches to computation both theoretically and technologically. Often placed within the wider context of natural or even unconventional computing, the study of natural and artificial molecular computations is adding to our understanding both of biology and computer science, well beyond the framework of neuroscience. The papers in this special theme document only a part of an increasing involvement of Europe in this far-reaching undertaking. In this introduction, I wish to outline the current scope of the field and assemble some basic arguments that biomolecular computation is of central importance to both computer science and biology. Readers will also find arguments for not dismissing DNA Computing as limited to exhaustive search, and for a qualitatively distinctive advantage over all other types of computation, including quantum computing.
The idea that molecular systems can perform computations is not new and was indeed more natural in the pre-transistor age. Most computer scientists know of von Neumann's discussions of self-reproducing automata in the late 1940s, some of which were framed in molecular terms. Here the basic issue was that of bootstrapping: can a machine construct a machine more complex than itself?
Important was the idea, appearing less natural in the current age of dichotomy between hardware and software, that the computations of a device can alter the device itself. This vision is natural at the scale of molecular reactions, although it may appear utopian to those running huge chip production facilities. Alan Turing also looked beyond purely symbolic processing to natural bootstrapping mechanisms in his work on self-structuring in molecular and biological systems. Purely chemical computers have been proposed by Ross and Hjelmfelt, extending this approach. In biology, the idea of molecular information processing took hold starting from the unraveling of the genetic code and translation machinery, and extended to genetic regulation, cellular signaling, protein trafficking, morphogenesis and evolution - all of this independently of the developments in the neurosciences. For example, because of the fundamental role of information processing in evolution, and the ability to address these issues on laboratory time scales at the molecular level, I founded the first multi-disciplinary Department of Molecular Information Processing in 1992. In 1994 came Adleman's key experiment demonstrating that the tools of laboratory molecular biology could be used to program computations with DNA in vitro. The huge information storage capacity of DNA and the low energy dissipation of DNA processing led to an explosion of interest in massively parallel DNA Computing. For serious proponents of the field, however, there really never was a question of brute search with DNA solving the problem of an exponential growth in the number of alternative solutions indefinitely. In a new field, one starts with the simplest algorithms and proceeds from there: as a number of contributions and patents have shown, DNA Computing is not limited to simple algorithms or even, as we argue here, to a fixed hardware configuration.
After 1994, universal computation and complexity results for DNA Computing rapidly ensued (recent examples of ongoing projects here are reported in this collection by Rozenberg and Csuhaj-Varju). The laboratory procedures for manipulating populations of DNA were formalized and new sets of primitive operations proposed: the connection with recombination and so-called splicing systems was particularly interesting, as it strengthened the view of evolution as a computational process. Essentially, three classes of DNA Computing are now apparent: intramolecular, intermolecular and supramolecular. Cutting across this classification, DNA Computing approaches can be distinguished as either homogeneous (ie well stirred) or spatially structured (including multi-compartment or membrane systems, cellular DNA computing and dataflow-like architectures using microstructured flow systems), and as either in vitro (purely chemical) or in vivo (ie inside cellular life forms). Approaches differ in the level of programmability, automation, generality and parallelism (eg SIMD vs MIMD) and whether the emphasis is on achieving new basic operations, new architectures, error tolerance, evolvability or scalability. The Japanese project led by Hagiya focuses on intramolecular DNA Computing, constructing programmable state machines in single DNA molecules which operate by means of intramolecular conformational transitions. Intermolecular DNA Computing, of which Adleman's experiment is an example, is still the dominant form, focusing on the hybridization between different DNA molecules as a basic step of computations; this is common to the three projects reported here having an experimental component (McCaskill, Rozenberg and Amos). Beyond Europe, the group at Wisconsin is prominent in exploiting a surface-based approach to intermolecular DNA Computing using DNA chips.
Finally, supramolecular DNA Computing, as pioneered by Eric Winfree, harnesses the process of self-assembly of rigid DNA molecules with different sequences to perform computations. The connection with nanomachines and nanosystems is then clear and will become more pervasive in the near future.

#3
Definition
Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid state physics and computer science contribute to a large extent. It involves the encoding, manipulation and retrieval of information at a macromolecular level, in contrast to the current techniques, which accomplish the above functions via IC miniaturization of bulk devices. Biological systems have unique abilities such as pattern recognition, learning, self-assembly and self-reproduction, as well as high-speed and parallel information processing. The aim of this article is to exploit these characteristics to build computing systems which have many advantages over their inorganic (Si, Ge) counterparts.

DNA computing began in 1994, when Leonard Adleman proved that DNA computing was possible by finding a solution to a real problem, a Hamiltonian Path Problem, a close relative of the Traveling Salesman Problem, with a molecular computer. In theoretical terms, some scientists say the actual beginnings of DNA computation should be attributed to Charles Bennett's work. Adleman, now considered the father of DNA computing, is a professor at the University of Southern California and spawned the field with his paper, "Molecular Computation of Solutions to Combinatorial Problems." Since then, Adleman has demonstrated how the massive parallelism of a trillion DNA strands can simultaneously attack different aspects of a computation to crack even the toughest combinatorial problems.

Adleman's Traveling Salesman Problem:
The objective is to find a path from start to end going through all the points only once. This problem is difficult for conventional computers to solve because it is a "non-deterministic polynomial time" problem. These problems, when they involve large numbers, are intractable on conventional computers, but can be attacked using massively parallel computers like DNA computers. The Hamiltonian Path problem was chosen by Adleman because it is a well-known problem.

The following algorithm solves the Hamiltonian Path problem:
1. Generate random paths through the graph.
2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).
3. If the graph has n cities, keep only those paths with n cities. (n = 7)
4. Keep only those paths that enter all cities at least once.
5. Any remaining paths are solutions.

The key was using DNA to perform the five steps in the above algorithm. Adleman's first step was to synthesize DNA strands of known sequences, each strand 20 nucleotides long. He represented each of the six vertices of the path by a separate strand, and further represented each edge between two consecutive vertices, such as 1 to 2, by a DNA strand which consisted of the last ten nucleotides of the strand representing vertex 1 plus the first ten nucleotides of the vertex 2 strand. Then, through the sheer number of DNA molecules (3×10^13 copies for each edge in this experiment!) joining together in all possible combinations, many random paths were generated. Adleman used well-established techniques of molecular biology to weed out the Hamiltonian path, the one that entered all vertices, starting at one and ending at six. After generating the numerous random paths in the first step, he used the polymerase chain reaction (PCR) to amplify and keep only the paths that began on vertex 1 and ended at vertex 6. The next two steps kept only those strands that passed through six vertices, entering each vertex at least once. At this point, any paths that remained would code for a Hamiltonian path, thus solving the problem.
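Adleman's encoding of edges from vertex strands can be sketched as simple string manipulation. The 20-nucleotide sequences below are randomly generated for illustration only; they are not Adleman's actual sequences:

```python
import random

random.seed(0)  # reproducible illustrative sequences

# Hypothetical 20-nucleotide strands for six vertices (illustrative only).
vertices = {
    v: "".join(random.choice("ACGT") for _ in range(20))
    for v in range(1, 7)
}

def edge_strand(u: int, v: int) -> str:
    """An edge u->v is encoded as the last ten nucleotides of u's strand
    followed by the first ten nucleotides of v's strand."""
    return vertices[u][10:] + vertices[v][:10]

e12 = edge_strand(1, 2)
print(len(e12))  # 20: ten nucleotides from each endpoint
```

Because each edge strand overlaps half of each endpoint's strand, complementary "splint" strands can hybridize consecutive edges together, which is what lets random paths self-assemble in the tube.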



#6
To get information about the topic bio molecular computing (full report, ppt, and related topics), refer to the page links below:

http://seminarsprojects.net/Thread-bio-m...ting--4648

http://seminarsprojects.net/Thread-bioin...-computing

http://seminarsprojects.net/Thread-bio-m...-computing



