DNA Computing
#1

DNA Computing
Glossary
DNA
Deoxyribonucleic acid. Molecule that encodes the genetic information of cellular organisms.
Enzyme
Protein that catalyzes a biochemical reaction.
Nanotechnology
Branch of science and engineering dedicated to the construction of artifacts and devices at the nanometre scale.
RNA
Ribonucleic acid. Molecule similar to DNA, which helps in the conversion of genetic information to proteins.
Satisfiability (SAT)
Problem in complexity theory. An instance of the problem is defined by a
Boolean expression with a number of variables, and the problem is to identify
a set of variable assignments that makes the whole expression true.
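As an illustrative aside (not part of the original glossary), the SAT problem can be made concrete with a tiny brute-force checker in Python; the expression and variable names here are invented for the example:

```python
from itertools import product

def satisfiable(expr, variables):
    """Try every True/False assignment in turn; return the first
    assignment that makes expr evaluate to True, or None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if expr(assignment):
            return assignment
    return None

# (a OR b) AND (NOT a OR c) -- a small satisfiable instance
expr = lambda v: (v["a"] or v["b"]) and (not v["a"] or v["c"])
print(satisfiable(expr, ["a", "b", "c"]))  # -> {'a': False, 'b': True, 'c': False}
```

Exhaustive search like this takes time exponential in the number of variables, which is precisely why the massive parallelism of DNA chemistry looked attractive for SAT-like problems.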
Definition of the Subject and Its Importance
DNA computing (or, more generally, biomolecular computing) is a relatively
new field of study that is concerned with the use of biological molecules as
fundamental components of computing devices. It draws on concepts and expertise from fields as diverse as chemistry, computer science, molecular biology,
physics and mathematics. Although its theoretical history dates back to the
late 1950s, the notion of computing with molecules was only physically realised
in 1994, when Leonard Adleman demonstrated in the laboratory the solution of
a small instance of a well-known problem in combinatorics using standard tools
of molecular biology. Since this initial experiment, interest in DNA computing
has increased dramatically, and it is now a well-established area of research. As
we expand our understanding of how biological and chemical systems process
information, opportunities arise for new applications of molecular devices in
bioinformatics, nanotechnology, engineering, the life sciences and medicine.
Introduction
In the late 1950s, the physicist Richard Feynman first proposed the idea of using
living cells and molecular complexes to construct sub-microscopic computers.
In his famous talk 'There's Plenty of Room at the Bottom' [18], Feynman discussed the problem of 'manipulating and controlling things on a small scale',
thus founding the field of nanotechnology. Although he concentrated mainly
on information storage and molecular manipulation, Feynman highlighted the
potential for biological systems to act as small-scale information processors:
The biological example of writing information on a small scale has
inspired me to think of something that should be possible. Biology
is not simply writing information; it is doing something about it.
A biological system can be exceedingly small. Many of the cells
are very tiny, but they are very active; they manufacture various
substances; they walk around; they wiggle; and they do all kinds
of marvelous things all on a very small scale. Also, they store
information. Consider the possibility that we too can make a thing very small which does what we want, that we can manufacture an object that maneuvers at that level! [18].
Early Work
Since the presentation of Feynman's vision there has been a steady growth of interest in performing computations at a molecular level. In 1982,
Charles Bennett [8] proposed the concept of a 'Brownian computer' based around the principle of reactant molecules touching, reacting, and effecting state transitions due to their random Brownian motion. Bennett developed this idea by suggesting that a Brownian Turing Machine could be built from a macromolecule such as RNA. Hypothetical 'enzymes', one for each transition rule, catalyze reactions between the RNA and chemicals in its environment, transforming the RNA into its logical successor.
In the same year, Conrad and Liberman developed this idea further in [15],
in which the authors describe parallels between physical and computational
processes (for example, biochemical reactions being employed to implement basic switching circuits). They introduce the concept of 'molecular level word processing' by describing it in terms of transcription and translation of DNA,
RNA processing, and genetic regulation. However, the paper lacks a detailed
description of the biological mechanisms highlighted and their relationship with
traditional computing. As the authors themselves acknowledge, 'our aspiration is not to provide definitive answers ... but rather to show that a number of seemingly disparate questions must be connected to each other in a fundamental way' [15].
In [14], Conrad expanded on this work, showing how the information processing capabilities of organic molecules may, in theory, be used in place of digital switching components. Particular enzymes may alter the three-dimensional structure (or conformation) of other substrate molecules. In doing so, the enzyme switches the state of the substrate from one conformation to another. The notion of conformational computing (q.v.) suggests the possibility of a potentially rich and powerful computational architecture. Following on from the work of Conrad et al., Arkin and Ross show how various logic gates may be constructed using the computational properties of enzymatic reaction mechanisms [5] (see Dennis Bray's article [10] for a review of this work). In [10], Bray also describes work [23, 24] showing how 'chemical neurons' may be constructed to form the building blocks of logic gates.
Motivation
We have made huge advances in machine miniaturization since the days of room-sized computers, and yet the underlying computational framework (the von Neumann architecture) has remained constant. Today's supercomputers still employ the kind of sequential logic used by the mechanical dinosaurs of the 1940s [13].
There exist two main barriers to the continued development of 'traditional', silicon-based computers using the von Neumann architecture. One is inherent to the machine architecture, and the other is imposed by the nature of the underlying computational substrate. A computational substrate may be defined as a physical substance acted upon by the implementation of a computational architecture. Before the invention of silicon integrated circuits, the underlying substrates were bulky and unreliable. Of course, advances in miniaturization have led to incredible increases in processor speed and memory access time. However, there is a limit to how far this miniaturization can go. Eventually chip fabrication will hit a wall imposed by the Heisenberg Uncertainty Principle (HUP). When chips are so small that they are composed of components a few atoms across, quantum effects cause interference. The HUP states that the act of observing these components affects their behavior. As a consequence, it becomes impossible to know the exact state of a component without fundamentally changing its state.
The second limitation is known as the von Neumann bottleneck. This is imposed by the need for the central processing unit (CPU) to transfer instructions and data to and from the main memory. The route between the CPU and memory may be visualized as a two-way road connecting two towns. When the number of cars moving between towns is relatively small, traffic moves quickly. However, when the number of cars grows, the traffic slows down, and may even grind to a complete standstill. If we think of the cars as units of information passing between the CPU and memory, the analogy is complete. Most computation consists of the CPU fetching from memory and then executing one instruction after another (after also fetching any data required). Often, the execution of an instruction requires the storage of a result in memory. Thus, the speed at which data can be transferred between the CPU and memory is a limiting factor on the speed of the whole computer.
Some researchers are now looking beyond these boundaries and are investigating entirely new computational architectures and substrates. These developments include quantum computing (q.v.), optical computing (q.v.), nanocomputers (q.v.) and bio-molecular computers. In 1994, interest in molecular computing intensified with the first report of a successful non-trivial molecular computation. Leonard Adleman of the University of Southern California effectively founded the field of DNA computing by describing his technique for performing a massively parallel random search using strands of DNA [1]. In what follows we give an in-depth description of Adleman's seminal experiment, before describing how the field has evolved in the years that followed. First, though, we must examine more closely the structure of the DNA molecule in order to understand its suitability as a computational substrate.
The DNA Molecule
Ever since ancient Greek times, man has suspected that the features of one
generation are passed on to the next. It was not until Mendel's work on garden peas was recognized [39] that scientists accepted that both parents contribute
material that determines the characteristics of their offspring. In the early 20th
century, it was discovered that chromosomes make up this material. Chemical
analysis of chromosomes revealed that they are composed of both protein and
deoxyribonucleic acid, or DNA. The question was, which substance carries the
genetic information? For many years, scientists favored protein, because of its
greater complexity relative to that of DNA. Nobody believed that a molecule as
simple as DNA, composed of only four subunits (compared to 20 for protein),
could carry complex genetic information.
It was not until the early 1950s that most biologists accepted the evidence
showing that it is in fact DNA that carries the genetic code. However, the physical structure of the molecule and the hereditary mechanism were still far from clear.
In 1951, the biologist James Watson moved to Cambridge to work with a
physicist, Francis Crick. Using data collected by Rosalind Franklin and Maurice Wilkins at King's College, London, they began to decipher the structure of DNA.
They worked with models made out of wire and sheet metal in an attempt to
construct something that fitted the available data. Once satisfied with their
double helix model, they published the paper [43] (also see [42]) that would
eventually earn them (and Wilkins) the Nobel Prize for Physiology or Medicine
in 1962.

#2
DNA computing

DNA computing is a form of computing which uses DNA, biochemistry and molecular biology instead of traditional silicon-based computer technologies. DNA computing, or, more generally, molecular computing, is a fast-developing interdisciplinary area. R&D in this area concerns the theory, experiments and applications of DNA computing.

History

This field was initially developed by Leonard Adleman of the University of Southern California in 1994 [1]. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have been made, and various Turing machines have been proven to be constructible [2, 3].
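
Adleman's actual computation was chemical: DNA strands encoding candidate paths were generated in parallel, and non-solutions were filtered out in the test tube. The generate-and-filter logic can be sketched sequentially in Python (purely as an illustration; the graph and function name here are invented):

```python
from itertools import permutations

def hamiltonian_path(vertices, edges, start, end):
    """Generate every ordering of the vertices (the step DNA chemistry
    performs massively in parallel) and filter out orderings that are
    not valid start-to-end paths along the directed edges."""
    for order in permutations(vertices):
        if order[0] != start or order[-1] != end:
            continue  # filter: wrong endpoints
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            return order  # every consecutive pair is a real edge
    return None

# A small directed graph containing the Hamiltonian path 0 -> 1 -> 2 -> 3
edges = {(0, 1), (1, 2), (2, 3), (0, 2)}
print(hamiltonian_path([0, 1, 2, 3], edges, start=0, end=3))  # -> (0, 1, 2, 3)
```

Run sequentially this is exponential in the number of vertices; the promise of Adleman's approach was that the 'generate' step happens for all candidates at once.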

In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips [4]. On April 28, 2004, Ehud Shapiro, Yaakov Benenson, Binyamin Gil, Uri Ben-Dor, and Rivka Adar at the Weizmann Institute announced in the journal Nature that they had constructed a DNA computer [5]. It was coupled with an input and output module and is capable of diagnosing cancerous activity within a cell, and then releasing an anti-cancer drug upon diagnosis.

Capabilities

DNA computing is fundamentally similar to parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once.

For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. But DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation. For example, problems which grow exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines still grow exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical. (Quantum computing, on the other hand, does provide some interesting new capabilities).

DNA computing overlaps with, but is distinct from, DNA nanotechnology. The latter uses the specificity of Watson-Crick base pairing and other DNA properties to make novel structures out of DNA. These structures can be used for DNA computing, but they do not have to be. Additionally, DNA computing can be done without using the types of molecules made possible by DNA nanotechnology (as the above examples show).
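
Watson-Crick base pairing is the mechanism underlying both fields: A binds to T and C binds to G, and the complementary strand runs antiparallel. A minimal sketch of computing a strand's reverse complement (the function name is ours, not from the text):

```python
def reverse_complement(strand):
    """Return the Watson-Crick complementary strand, read in the
    conventional 5'-to-3' direction (hence the reversal)."""
    pairs = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(pairs[base] for base in reversed(strand))

print(reverse_complement("ATTACG"))  # -> CGTAAT
```

Because each strand binds only to its exact complement, short DNA sequences can act as addresses or logic inputs, which is what both DNA computing and DNA nanotechnology exploit.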

Examples

* MAYA II

* Computational Genes


#3
[attachment=193]
DNA (deoxyribonucleic acid) computing, also known as molecular computing, is a new approach to massively parallel computation based on groundbreaking work by Adleman. DNA computing was proposed as a means of solving a class of intractable computational problems in which the computing time can grow exponentially with problem size (the 'NP-complete', or non-deterministic polynomial time complete, problems). A DNA computer is basically a collection of specially selected DNA strands whose combinations will result in the solution to some problem, depending on the problem at hand. Technology is currently available both to select the initial strands and to filter the final solution.

Conventional computers use miniature electronic circuits etched on silicon chips to control information represented by electrical impulses. However, this silicon technology is starting to approach the limits of miniaturization, beyond which it will not be possible to make chips more powerful. DNA computing, on the other hand, represents information as a pattern of molecules arranged along a strand of DNA. These molecules can be manipulated, copied and changed by biochemical reactions in predictable ways through the use of enzymes.

The appeal of DNA computing lies in the fact that DNA molecules can store far more information than any existing conventional computer chip. It has been estimated that a gram of dried DNA can hold as much information as a trillion CDs. Moreover, in a biochemical reaction taking place in a tiny surface area, hundreds of trillions of DNA molecules should be able to operate in concert, creating a parallel processing system with the power of the largest current supercomputers. A highly interdisciplinary study, DNA computing is currently one of the fastest growing fields in both computer science and biology, and its future looks extremely promising.

#4
Introduction
This introductory segment of the seminar report on DNA-based computing (DNAC) will provide participants with the basic tools necessary to understand current research in DNAC, along with a discussion of potential applications to robotics and smart machines. Following a brief review of DNA structure, an overview of the basic tools from molecular biology utilized for DNAC (e.g., DNA annealing, ligation, polymerization, restriction, PCR) will be undertaken. A discussion of the major basic computational architectures of biomolecular computing (e.g., Adleman's algorithm for the Hamiltonian path problem, DNA chip-based SAT) will then be provided, in each case presenting an animation detailing the execution of a simple example. Finally, attention will turn to advanced topics related to robotics and artificial intelligence in DNAC. In particular, a new robotic smart machine will be presented which implements a DNAC-inspired semantic model. A discussion of the model and its implementation will be undertaken, with attention to both theoretical and chemical points of view.
[attachment=328]

#5
To get information about the topic 'DNA computing' (seminar full report, PPT and related topics), refer to the links below:

http://seminarsprojects.net/Thread-dna-c...t-download

http://seminarsprojects.net/Thread-dna-c...ull-report

http://seminarsprojects.net/Thread-dna-computing

http://seminarsprojects.net/Thread-dna-c...ort?page=4

http://seminarsprojects.net/Thread-dna-c...rity--6866

http://seminarsprojects.net/Thread-dna-c...ort?page=2

http://seminarsprojects.net/Thread-dna-c...ort?page=3

http://seminarsprojects.net/Thread-dna-c...ars-report

http://seminarsprojects.net/Thread-dna-c...?pid=49366

http://seminarsprojects.net/Thread-dna-computing--4455

http://seminarsprojects.net/Thread-dna-based-computing

http://seminarsprojects.net/Thread-dna-c...ort?page=5

http://seminarsprojects.net/Thread-dna-c...oad?page=6

#6
I need some more details and updates on the topic. Can you please help me?
I am also searching for a seminar topic related to biology or astronomy. Can you please help with that too?
Please send the reply to [email protected]

#7

Read these:
http://rapidsharefiles/132814375/DNA_Computing.ppt

http://rapidsharefiles/133062166/DNA_Com...HO_LPF.ppt


