I've seen a TED.com talk where the speaker suggests that modern science can create new microorganisms, like bacteria with a pre-defined new set of traits. But has anyone created a new species of plants or animals using synthetic DNA yet? If so, what are their names?
Answering your question
Well, there is a plethora of facilities that deal with the cloning of plants and animals, as well as with altering specific DNA in order to create a better plant or animal. Tests have been done with rats to try to create stronger and smarter rats, and tests have been done on countless other animals to try to develop immunity to dementia and Alzheimer's. In the agricultural industries of the world, new types of corn, tomatoes, rice, and other crops are engineered on a daily basis, and new versions of these plants are made very commonly. The problem, however, is that a vast quantity of these plants are rejected from being planted: some engineered plants either taste too bad or are less efficient than the plant they were modified from. As far as inventing a new species goes, you have to be more specific about what you are asking. If science creates a new animal from another animal, it is likely of the same family and genus, like breeding a dog from another dog. Creating a mammal, reptile, or bacterium that isn't part of an existing genus would be a breakthrough, as virtually all cloning and scientific pairing so far has been done within the same general genus and species. I am assuming, however, that it will be possible to engineer lifeforms at the molecular level within the next 50 to 100 years, as science and technology are getting more and more advanced. So, within the next few centuries, it would only seem logical to assume that new species will be created. However, the practice of creating new species, or even operating a laboratory or facility that deals with the concept, is illegal in a large number of countries and is subject to strict guidelines in others.
I am only 13, so if any of this information is to be used anywhere else, I would advise not citing me as a credible source. All of this information is from my own knowledge, which comes from school studies and personal reading. If any wording sounds similar to another source, keep in mind that one human brain functions similarly to another in many respects, and coincidences are often found; I have taken the liberty of scanning my own work for plagiarism. I hope I answered your question, and I am sorry for the long disclaimer at the end of this paragraph.
The answer is no. At this moment (2016) there are only two synthetic organisms that are being constructed or have been constructed.
The first is a synthetic Mycoplasma genitalium, made by Venter's group: https://en.wikipedia.org/wiki/Mycoplasma_laboratorium
The second is a synthetic yeast, S. cerevisiae: http://syntheticyeast.org/
There has been talk of building a synthetic human genome… but that is only talk.
Tamas Bartfai PhD , Graham V Lees PhD , in The Future of Drug Discovery , 2013
The large biotech companies, which have sales in the billions, struggle to make it into the "Big Pharma" class, that is, to have a proprietary product line in several disease areas and sales of $20 billion or more per year. Amgen, Biogen-Idec, Celgene, Gilead, and Vertex (see Table 7.2) seem to be stuck at $3–15 billion in sales and two or at most three disease areas. The progress of biologicals companies like Amgen and Biogen-Idec into small-molecule drug discovery has been slow and erratic. The much-hyped small molecules such as Biogen-Idec's BG12, a dimethyl fumaric acid (intended to match Novartis' small-molecule oral multiple sclerosis drug Gilenya), are revivals of very old drugs and represent a small increment in genuinely novel research, even though they may turn out to be successful. The majority of small-molecule drugs outside of Big Pharma have come from a handful of biotechs such as Vertex, Aurora, and Gilead, which have built a drug discovery machine identical in ability, platforms, and chemistry skills to that of Big Pharma, and even superior in some respects. If their promising trials lead to NDAs, and these companies continue on this track, they may fill some of the gaps left by Big Pharma abandoning important indications. The problem is that these companies already struggle today to finance their phase 3 studies. Nevertheless, it seems that 10–20 publicly traded large biotech companies will survive the patent expiries of beta-interferon, EPO, and so forth, and will remain independent drug developers over the next 10–15 years. If truly profitable, they will inevitably become takeover targets, as this year's takeover of Genzyme by Sanofi showed. Such takeovers have so far, without exception, led to cuts in research. While they add to the short-term financial prowess of Big Pharma, globally they represent a loss of innovation.
Successful large generic-plus-R&D companies like Teva, or the large CRO, R&D, and generics company Dr Reddy's, deserve attention because they use the income of the generics arm to fund their own R&D to create novel medicines. Their skills in medicinal chemistry, process chemistry, and development may serve them well, and their stable income base permits some R&D projects, but their scope is limited and they tread carefully. It is clear that they will continue to present strong competition in the generics arena, but it is not likely that they will become strong, multi-therapeutic-area R&D-based companies in the next 15 years.
Luigi Galvani and Giovanni Aldini: Galvanizing the Body of a Criminal
In writing Frankenstein, Mary Shelley drew on the science of her time, including the electrical experimentation of Luigi Galvani and his nephew Giovanni Aldini, whose test subjects included the corpses of executed criminals, as shown in this illustration from 1804.
The legend of Faust, which harks back to the biblical magician Simon Magus who battled Saint Peter with magic, provides the touchstone for fears about scientists overstepping the mark and unwittingly unleashing destructive forces. Faust was, of course, the blueprint for the most famous cautionary tale of science dabbling in the creation of life: Mary Shelley’s Frankenstein. The novel, first published anonymously in 1818 with a foreword from Percy Shelley (Mary’s husband, who some suspected was the author), reinvents the Faust myth for the dawning age of science, drawing on the biology of Charles Darwin’s grandfather Erasmus, the chemistry of Humphry Davy, and the electrical physiology of Italy’s Luigi Galvani. Percy Shelley wrote that Erasmus Darwin’s speculations, expounded in such works as Zoonomia, or the Laws of Organic Life (1794), supported the idea that Victor Frankenstein’s reanimation of dead matter was “not of impossible occurrence.” And the popular science lecturer Adam Walker, a friend of chemist Joseph Priestley, wrote that Galvani’s experiments on electrical physiology had demonstrated the “relationship or affinity [of electricity] to the living principle.” Making life was in the air in the early 19th century, and Frankenstein looks, in retrospect, almost inevitable.
Mary Shelley’s doctor and his monstrous creation are now invoked as a knee-jerk response to all new scientific interventions in life. They featured prominently in media coverage of in vitro fertilization (IVF) and cloning (“The Frankenstein myth becomes reality” wrote the New York Times apropos IVF), genetic modification of plant crops (“Frankenfoods”), and now the creation of “synthetic life forms” by synthetic biology (“Frankenbugs”). The message is clear: technology thus labeled is something unnatural and dangerous, and warrants our firm disapproval.
The prehistory of synthetic biology isn't all Faustian. The apparent inclination of life to spring from lifeless matter stimulated the notion of an animating principle that was pervasive in the world, ready to quicken substances when the circumstances were clement. In this view a property that became known as the "vital force" inhered in the very constituents—the corpuscles, or molecules—of matter, and life appeared by degrees when enough of it accumulated. In his strange story D'Alembert's Dream (1769), the French philosopher Denis Diderot compares the coherent movement of masses of molecule-like "living points" to a swarm of bees, of the kind that Virgil believed could be conjured from a dead cow. As Diderot's contemporary, the French naturalist Georges-Louis Leclerc, the Comte de Buffon, put it,
The life of the whole (animal or vegetable) would seem to be only the result of all the actions, all the separate little lives . . . of each of those active molecules whose life is primitive and apparently indestructible.
Vitalism has often been derided by scientists today as a kind of prescientific superstition, but in fact this kind of provisional hypothesis is precisely what is needed for science to make progress on a difficult problem. By supposing that life was immanent in matter, early scientists were able to naturalize it and distinguish it from a mysterious, God-given agency, and so make it a proper object of scientific study.
Put this way, we should not be surprised that Friedrich Wöhler’s synthesis of urea (a molecule that hitherto only living creatures could produce) from an ammonium salt in 1828 posed no deep threat to vitalism, despite its being often cited as the beginning of the end for the theory. The vital potential of molecules was all a matter of degree so there was no real cause for surprise that a molecule associated with living beings could be made from seemingly inanimate matter. In fact, the dawning appreciation during the 19th century that “organic chemistry”—the science of the mostly carbon-based molecules produced by and constituting living things—was contiguous with the rest of chemistry only deepened the puzzle of what life is, while at the same time reinforcing the view that life was a matter for scientists rather than theologians.
The early chemists believed that the secret of life must reside in chemical composition: animating matter was just a question of getting the right blend of ingredients. In 1835 the French anatomist Félix Dujardin claimed to have made the primordial living substance by crushing microscopic animals to a gelatinous pulp. Four years later the Czech physiologist Jan Purkinje gave this primal substance a name: protoplasm, which was thought to be some kind of protein and to be imbued with the ability to move of its own accord.
In the 1860s Charles Darwin's erstwhile champion Thomas Henry Huxley claimed to have found this primitive substance, which he alleged to be the "physical basis of life," the "one kind of matter which is common to all living beings." He identified this substance with a kind of slime in which sea floor–dwelling organisms seemed to be embedded. The slime contained only the elements carbon, hydrogen, oxygen, and nitrogen, Huxley said. (Actually his protoplasm turned out to be the product of a chemical reaction between seawater and the alcohol used to preserve Huxley's marine specimens.) Meanwhile, the leading German advocate of Darwinism, Ernst Haeckel, declared that there is a kind of vital force in all matter, right down to the level of atoms and molecules; the discovery of molecular organization in liquid crystals in the 1880s seemed to him to vindicate the hypothesis.
Haeckel was at least right to focus on organization. Ever since the German physiologist Theodor Schwann proposed in the mid-19th century that all life is composed of cells, the protoplasm concept was confronted by the need to explain the orderly structure of life: jelly wasn’t enough. Spontaneous generation was finally killed off by the experiments of Louis Pasteur and others showing that sterile mixtures remained that way if they were sealed to prevent access to the microorganisms that Pasteur had identified under the microscope. But vitalism didn’t die in the process, instead mutating into the notion of “organic organization”—the mysterious propensity of living beings to acquire structure and coordination among their component molecular parts, which biologists began to discern when they inspected cells under the microscope. In other words, the organization of life apparent at the visible scale extended not only to the cellular level but beyond. The notion of a universal protoplasm, meanwhile, became untenable once the diversity of life’s molecular components, in particular the range of protein enzymes, became apparent through chemical analysis in the early 20th century.
Primal jelly had a notable swan song. In 1899 the Boston Herald carried the headline "Creation of Life . . . Lower Animals Produced by Chemical Means." Setting aside the possibly tongue-in-cheek corollary adduced in the headline "Immaculate Conception Explained," the newspaper described the research of the German physiologist Jacques Loeb, who was working at the center for marine biology in Woods Hole, Massachusetts. Loeb had, in fact, done nothing so remarkable: he had shown that an unfertilized sea-urchin egg could be induced to undergo parthenogenesis, dividing and developing, by exposure to certain salts. Loeb's broader vision, however, set the stage for an interview in 1902 that reports him as saying,
I wanted to take life in my hands and play with it. I wanted to handle it in my laboratory as I would any other chemical reaction—to start it, stop it, vary it, study it under every condition, to direct it at my will!
Loeb's words sound almost like the deranged dream of a Hollywood mad scientist. Scientific American even dubbed Loeb "the Scientific Frankenstein." Needless to say, Loeb was never able to do anything of the sort, but his framing of controlling life through an engineering perspective proved prescient, and was most prominently put forward in his book The Mechanistic Conception of Life (1912).
In previous studies, researchers have used genetic elements to develop a variety of genetic circuits, which can be widely applied to cellular regulation processes. However, this work was constrained to the simple assembly of gene-regulatory parts or modules. Thus, recent research in this field has focused on developing predictable and quantitative modes of regulation [6, 7]. As research has deepened, more and more cellular machines have been developed for controlling gene expression, including regulatory motifs such as genetic switches, oscillators, amplifiers, promoters, and repressors.
Cellular regulatory mechanisms of genetic circuits
Cellular regulation covers a broad range of levels, including the transcriptional, post-transcriptional, and post-translational levels (Fig. 1) [9,10,11]. Approaches for controlling gene expression at the transcriptional level mainly include synthetic promoter libraries [12, 13], modular systems, transcription machinery engineering [14, 15], and transcription factors. These approaches have been widely used in theory and in applications to design and optimize biological systems. Keasling et al. held the opinion that certain structural elements for post-transcriptional control can influence protein expression based on a particular mRNA sequence. For example, riboswitches are genetic switches that regulate at the post-transcriptional level and usually reside in the untranslated region of a metabolic gene's mRNA. Riboswitches can sense small-molecule metabolites and bind to them, thereby altering the secondary structure of the RNA to regulate the expression of the corresponding metabolic genes. Hence, riboswitches can be used to design new molecular biosensors. For example, the expression of reporter genes can be regulated by riboswitches to convert enzymatic signals into more detectable ones. Furthermore, riboswitches can be integrated into more complex gene circuits to achieve regulatory effects.
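As a sketch of this sense-and-regulate behavior, an OFF-type riboswitch can be modeled as a Hill-function dose-response. The function name, dissociation constant `K`, Hill coefficient `n`, and basal leak below are hypothetical illustrations, not parameters from any cited study:

```python
def riboswitch_output(ligand, K=10.0, n=1.5, basal=0.05):
    """Fraction of maximal expression for a hypothetical OFF-riboswitch:
    ligand binding to the aptamer shifts the mRNA into a structure
    that shuts expression down toward a small basal leak."""
    bound = ligand**n / (K**n + ligand**n)        # fraction of aptamers bound
    return basal + (1.0 - basal) * (1.0 - bound)

print(round(riboswitch_output(0.0), 6))   # 1.0  (no ligand: full expression)
print(riboswitch_output(1000.0) < 0.1)    # True (saturating ligand: near basal)
```

Swapping `(1.0 - bound)` for `bound` would turn the same sketch into an ON-riboswitch, which is the behavior exploited by the biosensor applications discussed below.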
Relation between transcription regulation, post-transcription regulation, and post-translation regulation
In addition to simple genetic switches, more complex switch systems have also been developed to program and control a desired output. For instance, a genetic toggle switch was built in Escherichia coli, inspired by ideas from electronic engineering. As a synthetic, bistable gene-regulatory network, the genetic toggle switch is composed of two repressible promoters arranged in a mutually inhibitory network (P1 and P2) and two repressors (R1 and R2) (Fig. 2a). Each promoter is inhibited by the repressor transcribed from the opposing promoter. The switch presents a near-perfect switching threshold, with fast conversion between stable states under transient chemical or thermal induction. As a practical device, the toggle switch forms a synthetic, addressable cellular memory unit and has had great influence in biotechnology, biocomputing, and gene therapy.
a GFP: green fluorescent protein. The toggle switch comprises two repressors (R1 and R2) and two promoters (P1 and P2). R1 is transcribed from P2 and inhibits P1; R2 is transcribed from P1 and inhibits P2. In addition, R1 is induced by Inducer 1 and R2 is induced by Inducer 2, so the transcriptional state can be flipped by adding the inducers. b LacI inhibits the transcription of TetR, TetR inhibits the expression of CI, and CI in turn inhibits the expression of LacI, closing the loop; TetR also inhibits the expression of GFP
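The mutual-inhibition dynamics just described can be reproduced with a minimal two-ODE model in the spirit of the Gardner toggle switch. The parameter values (`alpha`, `beta`) and the Euler integration below are illustrative assumptions, not the published circuit's measured kinetics:

```python
def simulate_toggle(u0, v0, alpha=10.0, beta=2.0, dt=0.01, steps=5000):
    """Euler integration of a Gardner-style toggle switch: u and v are
    the two repressor concentrations, each synthesized from a promoter
    that the other repressor inhibits, and each degraded linearly."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**beta) - u  # P1 output, repressed by R2
        dv = alpha / (1.0 + u**beta) - v  # P2 output, repressed by R1
        u += dt * du
        v += dt * dv
    return u, v

# Two different starting points settle into opposite stable states:
u_hi, v_lo = simulate_toggle(5.0, 0.1)   # ends with u high, v low
u_lo, v_hi = simulate_toggle(0.1, 5.0)   # ends with v high, u low
print(u_hi > v_lo, v_hi > u_lo)          # True True
```

A transient pulse of inducer (modeled here simply as a change of initial condition) flips which repressor dominates, which is exactly the memory behavior the text describes.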
Downstream gene expression can be controlled by placing proteins' binding domains within promoter regions. Elowitz and Leibler constructed an oscillating network in E. coli with three transcriptional repressor proteins: LacI (from the lac operon, the earliest model of gene regulation) from E. coli, TetR (another model of gene repression) from the tetracycline-resistance transposon Tn10, and CI (a common transcription factor that functions as a switch) from λ phage (Fig. 2b). LacI inhibits the transcription of TetR, TetR then inhibits the expression of λCI, and finally λCI inhibits the expression of LacI, forming a cycle of mutual restraint. For a visual readout of the circuit's state in individual cells, green fluorescent protein (GFP) was also added to the system and is induced periodically. Because the generated oscillations, with typical periods of hours, are slower than the cell-division cycle, the state of the oscillator has to be transmitted from generation to generation. Such "rational network design" may not only lead to the engineering of new cellular behaviors but also improve our understanding of naturally occurring networks.
The application of genetic circuits
With the rapid development of synthetic biology over the last several decades, fine-tuning of gene expression has been applied to many organisms and heterologous systems in metabolic engineering and other synthetic biology contexts [23,24,25,26]. In general, to improve the production of tailored metabolites of industrial interest, such as biofuels or organic acids, designed or redesigned metabolic pathways in microbes have become a focus. Researchers have modulated various biomanufacturing-related metabolic pathways originating from different sources and assembled them in model organisms to obtain suitable biosynthetic pathways. The reconstructed microbes possess more efficient metabolic pathways, which increases the final product titer, yield, and productivity (TYP) and thus reduces the cost of large-scale production.
For instance, RNA switches have been successfully applied to regulate gene expression and modulate metabolic flux in yeasts [28, 29]. To decrease by-product synthesis, the expression levels of GPP1 (glycerol-1-phosphate phosphohydrolase 1) and PDC (pyruvate decarboxylase), which are responsible for the production of glycerol and ethanol, were fine-tuned. Chen et al. constructed two RNA switches that bind different target mRNAs: sRNA-RHR2 (a tetracycline-responsive GPP1 regulator) and sRNA-PDC6 (a theophylline-responsive PDC regulator). The final strain showed decreased enzyme activities (by 28.3 and 48.4%) and by-product production (by 91.9 and 59.5%), respectively. Furthermore, the RNA switches increased fumaric acid production from 28.6 to 33.1 g/L in Saccharomyces cerevisiae. These results demonstrated that the insertion of synthetic RNA switches can repress by-product formation without burdening the host cell. Moreover, RNA switches can be modified to recognize new small molecules with different specificities and mechanisms using other selection strategies. In other work, ligand-responsive RNA switches based on post-transcriptional control were developed in S. cerevisiae to construct a high-throughput enzyme-evolution platform.
It is well known that permanently knocking out undesired genes can improve the titer and yield of the target product. However, deleting genes related to cell growth can reduce the growth rate and may even result in cell death. An alternative approach is to let the cells grow to a certain level first and only then turn the undesired genes off. As the precursor of isopropanol, acetyl-CoA can be converted to citric acid by citrate synthase, which is encoded by the gltA gene; however, if gltA is deleted in E. coli, bacterial growth stops. Thus, Soma et al. developed a metabolic toggle switch (MTS) to inhibit gltA expression while maintaining good strain growth. After introduction of the gltA OFF switch, the expression of gltA was turned off and the carbon flux was redirected to isopropanol synthesis, resulting in a more than threefold improvement. Several years later, Soma et al. optimized the MTS approach and overexpressed pyruvate oxidase (encoded by poxB) and acetyl-CoA synthase (encoded by acs), which are responsible for acetyl-CoA synthesis. The promoter PLlacO1 controls the expression of the poxB and acs genes, while the promoter PLtetO1 controls the repression of TetR. Metabolic influx into the TCA cycle could then be interrupted, and at the same time isopropanol synthesis was enhanced. These developments illustrate that genetic circuits have tremendous potential for constructing biological systems with a broad range of practical applications.
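A back-of-the-envelope flux picture of the gltA OFF strategy: before the switch fires, most acetyl-CoA flux feeds the TCA cycle; afterwards it is redirected to product. All numbers below (fluxes, step counts, the 90% TCA split) are invented for illustration and are not the values reported by Soma et al.:

```python
def product_titer(switch_step, steps=240, dt=0.1, flux_in=1.0, f_tca_on=0.9):
    """Toy two-branch flux model: at each time step a fraction of the
    incoming acetyl-CoA flux goes to the TCA cycle (gltA ON) and the
    remainder to isopropanol; firing the OFF switch zeroes the TCA share."""
    titer = 0.0
    for k in range(steps):
        f_tca = f_tca_on if k < switch_step else 0.0  # gltA ON vs OFF
        titer += dt * flux_in * (1.0 - f_tca)         # flux reaching product
    return titer

never_off = product_titer(switch_step=10**9)  # gltA stays ON the whole run
mid_run = product_titer(switch_step=80)       # OFF switch fires partway in
print(mid_run / never_off > 3.0)              # True: large fold improvement
```

The sketch also shows why switching beats deletion: setting `switch_step=0` (gltA never on) maximizes titer in this toy but, in a real cell, would stall growth and hence total production.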
Riboswitches are considered useful tools for monitoring various metabolites because of their ability to sense specific metabolites and bind to them. As an in vivo metabolite sensor, a riboswitch acts as an RNA biosensor that regulates gene expression by changing its conformation upon binding of a specific molecule. To enhance the productivity and yield of naringenin, a riboswitch was applied to detect and monitor intracellular and extracellular naringenin. Jang et al. constructed a riboswitch plasmid library and then introduced two in vivo selection routes, which made it possible to adjust the operational range of the riboswitch. The selected naringenin riboswitches responded to their ligands faster and eliminated off-target effects. Moreover, an artificial L-tryptophan riboswitch was used to activate gene expression: when 1 g/L L-tryptophan was added, the gene was up-regulated 1.58-fold compared with no L-tryptophan.
Another application of genetic circuits is bioremediation. The environment and ecosystems suffer greatly from modernization and industrialization, and to deal with this issue properly, environmental monitoring and remediation systems are urgently needed. Based on synthetic biology technologies, some advanced biosensors are expected to break down target molecules [38,39,40]. Genetic switches can help program cells to sense multitudinous signals and mount advantageous responses in complex and uncertain environments. In particular, biosensors built with synthetic biology technologies show outstanding performance among the approaches being developed for bioremediation, owing to their ability to complement both laboratory-based and field analytical methods for environmental monitoring. For instance, mercury circulates widely in industrial processes including material processing, mining, and coal combustion, seriously damaging water sources and the food chain [42, 43]. Given this problem, an engineered strain was constructed to sense and sequester Hg2+ ions by integrating a mercury-responsive transcriptional regulator (the MerR regulator). This mercury sensor circuit also contains cell-surface-displayed heavy-metal-binding metallothioneins and an Hg2+ transport system, with the goal of remediating polluted water. When it perceives the presence of mercury, the MerR regulator changes conformation and binds Hg2+, followed by mercury sequestration. With further technical progress, more advanced engineered biosensors may enable such monitoring sensors to act as bioreactors that break down target molecules. In general, genetic circuits can be designed to enable host organisms to act as biosensors and bioreactors that sense and break down environmental pollutants. Synthetic biology is likely to become a powerful tool for reducing environmental pollution in the future.
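The sense-then-respond logic of the mercury circuit can be sketched as a dose-response plus a decision threshold. The function, `K`, Hill coefficient, and sample concentrations below are hypothetical illustrations, not measured properties of the MerR system:

```python
def merR_reporter(hg_uM, K=1.0, n=1.2, basal=0.02):
    """Hypothetical dose-response of a MerR-activated output: Hg2+-bound
    MerR switches the promoter on, driving reporter/sequestration genes."""
    act = hg_uM**n / (K**n + hg_uM**n)  # fraction of MerR in the active form
    return basal + (1.0 - basal) * act

# Classify water samples with a simple output threshold (invented values):
samples = {"tap water": 0.001, "mine runoff": 5.0}
for name, hg in samples.items():
    verdict = "polluted" if merR_reporter(hg) > 0.5 else "clean"
    print(name, verdict)
```

In the engineered strain the same "output" drives metallothionein display and Hg2+ import rather than a printout, so crossing the threshold triggers sequestration instead of a label.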
Synthetic biology currently has no generally accepted definition. Here are a few examples:
- "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms" 
- "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits" 
- "designing and constructing biological modules, biological systems, and biological machines or, re-design of existing biological systems for useful purposes" 
- "applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature" (The European Commission, 2005). This can include the possibility of a molecular assembler, based upon biomolecular systems such as the ribosome.
Synthetic biology has traditionally been divided into two different approaches: top-down and bottom-up.
- The top-down approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.
- The bottom-up approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.
Biological systems are thus assembled module by module. Cell-free protein expression systems are often employed, as is membrane-based molecular machinery. There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells and by engineering communication between living and synthetic cell populations.
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées.  He also noted this term in another publication, La Biologie Synthétique in 1912. 
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components. 
1973: First molecular cloning and amplification of DNA in a plasmid is published in PNAS by Cohen, Boyer, et al., constituting the dawn of synthetic biology.
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene:
The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated. 
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al.  This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.  
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight.  These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.
2003: Researchers engineer an artemisinin precursor pathway in E. coli. 
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.
2005: Researchers develop a light-sensing circuit in E. coli.  Another group designs circuits capable of multicellular pattern formation. 
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells. 
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically synthesized DNA using yeast recombination.
2011: Functional synthetic chromosome arms are engineered in yeast. 
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.  This technology greatly simplified and expanded eukaryotic gene editing.
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.  
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode 20 amino acids.  
Engineers view biology as a technology (in other words, a given system includes biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems (see Biomedical Engineering) and our environment.
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering, and metabolic engineering.
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genome engineering includes approaches to construct synthetic chromosomes or minimal organisms like Mycoplasma laboratorium.
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.  
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up in order to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.  Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).
DNA and gene synthesis
Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers. In 2002, researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome; the project took two years. In 2003, the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and was working on getting it functioning in a living cell.
In 2007 it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip, combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids (see George M. Church's and Anthony Forster's synthetic cell projects). This favors a synthesis-from-scratch approach.
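As a rough illustration of the synthesis-from-oligos idea described above, the toy sketch below tiles a target sequence into overlapping 60-mers and stitches them back together. The 60-base oligo length matches the 60 to 80-mers mentioned in the text; the 20-base overlap and the function names are illustrative assumptions, not any lab's actual protocol.

```python
OLIGO_LEN = 60   # within the 60-80-mer range used for chemical synthesis
OVERLAP = 20     # illustrative overlap; real designs tune this for melting temperature

def split_into_oligos(seq, oligo_len=OLIGO_LEN, overlap=OVERLAP):
    """Tile a target sequence into overlapping oligos."""
    step = oligo_len - overlap
    oligos = []
    i = 0
    while True:
        oligos.append(seq[i:i + oligo_len])
        if i + oligo_len >= len(seq):
            break
        i += step
    return oligos

def assemble(oligos, overlap=OVERLAP):
    """Stitch oligos back together via their shared overlaps."""
    construct = oligos[0]
    for oligo in oligos[1:]:
        assert construct.endswith(oligo[:overlap]), "overlap mismatch"
        construct += oligo[overlap:]
    return construct
```

In real assembly the overlaps anneal and a polymerase or ligase joins the fragments; here the string operations stand in for that chemistry.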
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR shortens that time to weeks. Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking.
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms. 
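The second use of sequencing, verifying that a fabricated system is as intended, amounts at its simplest to a strand-aware comparison of the read against the design. A minimal sketch (the function names are ours, not from any sequencing tool):

```python
# Translation table for Watson-Crick complements
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def matches_design(read, design):
    """True if the read equals the design on the forward or reverse strand."""
    return read == design or read == reverse_complement(design)
```

Real verification pipelines additionally align reads, tolerate sequencing errors, and report the positions of any discrepancies rather than a simple yes/no.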
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.   It is widely employed in screening assays. 
The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. BioBricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition.
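A BioBrick part must not contain the assembly standard's own restriction sites internally, so that digestion cuts only at the part's flanking prefix and suffix. The sketch below checks a candidate part against the standard recognition sequences of the enzymes used by BioBrick assembly (RFC 10); it is a simplified illustration, not the Registry's actual validation tooling.

```python
# Recognition sequences of the restriction enzymes in the BioBrick (RFC 10) standard
RFC10_SITES = {
    "EcoRI": "GAATTC",
    "XbaI": "TCTAGA",
    "SpeI": "ACTAGT",
    "PstI": "CTGCAG",
    "NotI": "GCGGCCGC",
}

def incompatible_sites(part_seq):
    """Return names of forbidden restriction sites found inside a part."""
    return sorted(name for name, site in RFC10_SITES.items() if site in part_seq)

def is_rfc10_compatible(part_seq):
    """A part is BioBrick-compatible only if it contains none of the sites."""
    return not incompatible_sites(part_seq)
```

A part that fails this check must be "domesticated" by introducing synonymous mutations that remove the offending sites before it can enter the standard assembly workflow.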
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and link different proteins together. The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. In addition, it is necessary to regulate protein-protein interactions in cells, for example with light (using light-oxygen-voltage-sensing domains) or with cell-permeable small molecules by chemically induced dimerization.
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module being modeled. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity the module exhibits in isolation.
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation and induction of gene regulatory networks.   
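The kind of model described here can be sketched with a pair of ordinary differential equations for mRNA and protein. The following minimal simulation (all parameter values are illustrative, not measured from any organism) integrates one gene's expression by forward Euler and converges to the analytic steady state:

```python
def simulate(alpha=2.0, beta=5.0, gm=0.2, gp=0.05, dt=0.01, t_end=400.0):
    """Simulate one gene: dm/dt = alpha - gm*m, dp/dt = beta*m - gp*p."""
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        dm = alpha - gm * m       # mRNA: constant synthesis minus degradation
        dp = beta * m - gp * p    # protein: translation minus degradation/dilution
        m += dm * dt
        p += dp * dt
    return m, p
```

At steady state m* = alpha/gm and p* = beta*m*/gp, which the simulation approaches; CAD tools for gene regulatory networks layer regulation, induction, and multi-gene coupling on top of this same building block.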
Synthetic transcription factors
Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes. Researchers were able to mutate functional regions called zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, analogous to eukaryotic transcription mechanisms.
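The affinity experiment above can be pictured with a simple binding model (illustrative, not fitted to the study's data): mutating a zinc finger raises its dissociation constant Kd, which lowers operator occupancy at a fixed transcription-factor concentration and hence lowers the sTF's regulatory output.

```python
def occupancy(tf_conc, kd, n=1.0):
    """Fractional occupancy of an operator site (simple Hill binding)."""
    return tf_conc**n / (kd**n + tf_conc**n)
```

For example, at a fixed sTF concentration, a mutant zinc finger with a twentyfold higher Kd occupies its operator a much smaller fraction of the time than the wild type, mirroring the tunable-output behavior the study reports.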
Biological computers
A biological computer refers to an engineered biological system that can perform computer-like operations, a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms and demonstrated both analog and digital computation in living cells; bacteria, for example, have been engineered to perform either analog or digital computation. In human cells, researchers demonstrated a universal logic evaluator operating in mammalian cells in 2007. Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011. Another group of researchers demonstrated in 2016 that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.
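As a cartoon of such digital computation, the following toy model (the transfer function and its parameters are our illustrative assumptions, not a published circuit) chains two Hill-type activators so that the output promoter fires only when both inputs are present:

```python
def activation(inducer, k=1.0, n=2.0):
    """Hill activation: near 0 without inducer, near 1 at saturating inducer."""
    return inducer**n / (k**n + inducer**n)

def and_gate(a, b, threshold=0.5):
    """Digital readout of a two-input transcriptional AND gate."""
    output = activation(a) * activation(b)   # both activators are required
    return output > threshold

# Truth table: only (high, high) yields a high output.
```

Thresholding an analog promoter response into a clean on/off readout is exactly the step that lets tools treat living cells as digital circuit substrates.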
A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the Lux operon of Aliivibrio fischeri, which encodes the enzyme responsible for bacterial bioluminescence and can be placed downstream of a responsive promoter to express the luminescence genes in response to a specific environmental stimulus. One such sensor consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants. When the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E. coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP).
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used. 
Cell transformation
Cells use interacting genes and proteins, called gene circuits, to implement diverse functions such as responding to environmental signals, decision making, and communication. Three key components are involved: DNA, RNA, and designer gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional, and translational levels.
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug artemisinin.
Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several methods allow the construction of synthetic DNA components and even entire synthetic genomes; once the desired genetic code is obtained, it is integrated into a living cell, which is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.
By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced curli fibers, the amyloid component of the extracellular material of biofilms, as a platform for programmable nanomaterials. These nanofibers were genetically engineered for specific functions, including adhesion to substrates, nanoparticle templating, and protein immobilization.
Designed proteins
Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with similar properties as hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities, while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but were insensitive to the native ligand acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with more than 100-fold specificity for production of longer-chain alcohols from sugar.
Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons code for amino acids, yet only 20 amino acids are generally encoded across all organisms. Certain codons can be engineered to code for alternative amino acids, including nonstandard amino acids such as O-methyltyrosine or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA/aminoacyl-tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.
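A first step in freeing a codon for reassignment is genome-wide synonymous recoding. The minimal sketch below (the function is illustrative) rewrites every in-frame amber stop codon TAG to the synonymous stop TAA, so that TAG can later be reassigned to a nonstandard amino acid; occurrences of the letters T-A-G that span a codon boundary are left untouched.

```python
def recode_amber(orf):
    """Replace in-frame TAG codons with TAA; all other codons are untouched."""
    codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
    return "".join("TAA" if codon == "TAG" else codon for codon in codons)
```

Recoding at genome scale must also respect overlapping genes and regulatory elements, which is one reason such projects still require substantial engineering.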
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins in which groups of amino acids are replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of chorismate mutase still had catalytic activity when only nine amino acids were used.
Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".
Designed nucleic acid systems
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, the Ribosome Binding Site Calculator, Cello, and the Non-Repetitive Parts Calculator enable the design of new genetic systems.
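The storage idea can be illustrated with a toy bit-per-base codec in the spirit of the Church experiment, in which each bit maps to one nucleotide (0 to A or C, 1 to G or T). We fix the choices A and G here for determinism, whereas the real encoding exploits that freedom to avoid problem sequences such as long homopolymer runs; this sketch is ours, not the published pipeline.

```python
ENCODE = {"0": "A", "1": "G"}                     # fixed choice of the allowed bases
DECODE = {"A": "0", "C": "0", "G": "1", "T": "1"}  # either base of a pair decodes to the same bit

def text_to_dna(text):
    """Encode ASCII text as one nucleotide per bit."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    return "".join(ENCODE[b] for b in bits)

def dna_to_text(dna):
    """Decode a bit-per-base DNA string back to ASCII text."""
    bits = "".join(DECODE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")
```

At one bit per base, a 5.3 Mb payload like Church's book occupies on the order of tens of millions of nucleotides, which is why the data was split across many short addressed oligos rather than one long strand.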
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including the individual artificial nucleotides in the culture media, they were able to passage the bacteria 24 times; the bacteria did not generate mRNA or proteins able to use the artificial nucleotides.
Space exploration
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.    On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.  Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops. 
Synthetic life
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, to study some of the properties of life, or, more ambitiously, to recreate life from non-living (abiotic) components. Synthetic biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.
A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate.  Nobody has been able to create such a cell. 
A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter's team, which introduced it into genomically emptied bacterial host cells. The host cells were able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome.
The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli whose genome was extracted and replaced with a chromosome containing an expanded genetic code. The nucleosides added are d5SICS and dNaM.
In May 2019, researchers reported, in a milestone effort, the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by compressing the natural 64 codons of the bacterial genome to 61 while still encoding all 20 amino acids.
In 2017, the international Build-a-Cell large-scale research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.
Drug delivery platforms
Engineered bacteria-based platform
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. More recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often, bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that specifically recognize a tumor have been expressed on the surfaces of bacteria; examples include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. Another approach is to let the bacteria sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into the bacteria. The bacteria then release the therapeutic molecules only at the tumor, through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems, as well as other strategies, can be used. The system can be induced by external signals; inducers include chemicals and electromagnetic or light waves.
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacterium, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and suits cancer therapy differently in terms of tissue colonization, interaction with the immune system, and ease of application.
Cell-based platform
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.
T cell receptors have been engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy has been approved by the FDA.
Gene switches have been designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Other mechanisms can control the system more finely, stopping it and reactivating it as needed. Since the number of T cells is important for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of therapeutics.
Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.
The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology that are actively being discussed.
Common ethical questions include:
- Is it morally right to tamper with nature?
- Is one playing God when creating new life?
- What happens if a synthetic organism accidentally escapes?
- What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?
- Who will have control of and access to the products of synthetic biology?
- Who will gain from these innovations? Investors? Medical patients? Industrial farmers?
- Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans? 
- What if a new creation is deserving of moral or legal status?
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.
Ethical issues surfaced earlier for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies ... where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."
The "creation" of life
One ethical question is whether or not it is acceptable to create new life forms, sometimes described as "playing God". Currently, the creation of new life forms not present in nature is small in scale, the potential benefits and dangers remain unknown, and most studies receive careful consideration and oversight. Many advocates point to the great potential value of creating artificial life forms for agriculture, medicine, and academic knowledge, among other fields. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to be capable of pain, sentience, and self-perception. Should such life be given moral or legal rights? If so, how?
Biosafety and biocontainment
What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety refers not only to biological containment but also to the measures taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and incapable of flourishing in the outside world due to their "unnatural" characteristics, as there has yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild.
In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA.   Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies,   however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions. 
European Union
The European Union-funded project SYNBIOSAFE  has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.   The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox).  The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity. 
COSY, another European initiative, focuses on public perception and communication.    To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009. 
The International Association Synthetic Biology has proposed self-regulation, specifying measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".
United States
In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology. 
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology". 
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology.  The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the “creation of life”.  It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. 
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact". These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation have been proposed by the President's Bioethics Commission, which, "in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".
On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment and the ETC Group issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome.   Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations". 
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.   For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.  Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms.  
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.   Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.  
What’s the beef?
In the shorter term, with more basic cultured meat products predicted to be ready by the turn of the decade, a bigger question may be whether people are ready to eat the stuff. Will consumers drink synthetic milk and eat lab-grown meat, or will they be put off? Genetically modified (GM) foods, for example, are still mistrusted by many.
Organisations such as the Modern Agriculture Foundation are already preparing the ground for the arrival of in vitro meat, educating people about why we need it. The Foundation’s director, Shaked Regev, believes that cultured meat won’t have the same problem that existing meat alternatives face because it is so similar. “It’s the real deal – you can’t differentiate this from traditional meat under a microscope,” he says.
Polls suggest there’s a willingness to give this modern meat a go. One survey of the Dutch population indicated that 63 per cent of people were in favour of the concept of cultured beef, and 52 per cent were willing to try it. Another survey by The Guardian found that 69 per cent of people wanted to try cultured meat. Whether people reach for the cultured burgers week in, week out at the supermarket is a different matter entirely, though.
People will always be extremely sensitive about what is on their plate. Despite the welfare and environmental justifications for cultured meat, the thought of your burger coming from a lab rather than a farm is a strange idea. But if artificial meat lives up to its promise and becomes the environmentally friendly, safer, cheaper, and even tastier way to eat meat, the concept of raising animals in their millions for slaughter could very quickly seem much stranger.
This is an extract from The Artificial Meat Factory in issue 298 of BBC Focus magazine – don’t miss out on the full feature by subscribing here.
The Application of Synthetic Biology to Human Health and Medicine
Synthetic biology is a new interdisciplinary field built on bioinformatics, DNA synthesis technology, genetics, and systems biology. It is the rational, systematic design and construction of biological systems with desired functionality. One of its most powerful tools is DNA synthesis technology: the cost of gene synthesis has dropped 10-fold over the past 15 years, fueling a boom in the development of synthetic biology. As we understand more about gene synthesis and its applications, the benefits of synthetic biology could reach a wide variety of fields, including medicine, agriculture, drug development, and bioengineering.
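As a quick sanity check on that figure: a 10-fold cost drop over 15 years works out to roughly a 14 percent decline per year, if we assume a smooth exponential decline. A minimal sketch (the starting price and the smooth-decline assumption are ours, purely for illustration):

```python
# Back-of-the-envelope: the annual rate implied by a 10x cost drop over
# 15 years, assuming a smooth exponential decline. The starting price
# below is hypothetical, for illustration only.

def implied_annual_factor(total_drop: float, years: int) -> float:
    """Per-year cost multiplier for a smooth exponential decline."""
    return (1.0 / total_drop) ** (1.0 / years)

factor = implied_annual_factor(10.0, 15)  # ~0.858, i.e. ~14% cheaper per year
start_price = 1.00                        # hypothetical $/base in year 0
price_after_15y = start_price * factor ** 15

print(f"annual cost multiplier: {factor:.3f}")
print(f"price after 15 years:   {price_after_15y:.3f} (~1/10 of start)")
```

The same arithmetic applies to any "N-fold over M years" claim, which is handy when comparing synthesis cost curves against, say, sequencing cost curves.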
The application of synthetic biology to human health
Synthetic biology has potential uses in clinical treatment: synthetic gene circuits can be used to screen for pathogenic gene or structural variants in diseased animal models. Scientists have already synthesized a number of gene circuits in mammalian cells, potentially leading to the treatment and outright prevention of many genetic diseases in humans. Synthetic biology can also lead to treatments for metabolic disorders. For example, Dean et al. incorporated a synthetic gene circuit encoding the glyoxylate shunt pathway into mouse liver cells, resulting in increased fatty acid oxidation.
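To give a concrete sense of what "synthesizing a gene circuit" means on paper, here is a minimal simulation of the classic two-repressor toggle switch, one of the earliest circuits built in cells. This is an illustrative textbook model, not the glyoxylate-shunt circuit from Dean et al., and all rate constants are invented for the sketch:

```python
# Minimal simulation of a two-gene toggle switch: each protein represses
# the other's promoter, so the circuit settles into one of two stable
# states and acts like a 1-bit memory. All parameters are illustrative.

def toggle_switch(u0, v0, alpha=10.0, n=2.0, steps=20000, dt=0.01):
    """Euler-integrate du/dt = alpha/(1+v^n) - u, dv/dt = alpha/(1+u^n) - v."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u
        dv = alpha / (1.0 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Starting with more of protein u drives the circuit to the "u high" state,
# and vice versa; that bistability is what makes the circuit useful.
print(toggle_switch(5.0, 0.1))  # u settles high, v settles low
print(toggle_switch(0.1, 5.0))  # v settles high, u settles low
```

Designing circuits like this in software before synthesizing the DNA is a routine part of the workflow the paragraph above describes.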
The application of synthetic biology to medicine
Synthetic biology is useful for drug screening and discovery, and can be used to identify new drug target sites. With the rapid development of synthetic biology, new bioinformatics tools can be used to analyze potential drug targets. Computational biology can rapidly identify true protein-coding sequences from DNA sequence data, providing accurate predictions of coding versus noncoding sequences.
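The simplest ingredient of that kind of coding-sequence prediction is an open-reading-frame scan. A minimal sketch follows; real gene finders layer codon-usage statistics, splice models, and comparative data on top of something like this:

```python
# Minimal open-reading-frame (ORF) scan on the forward strand: the most
# basic building block of protein-coding sequence prediction.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna: str, min_codons: int = 3):
    """Yield (start, end, frame) for each ATG...stop run on the forward strand."""
    dna = dna.upper()
    for frame in range(3):
        i = frame
        while i + 3 <= len(dna):
            if dna[i:i + 3] == "ATG":              # potential start codon
                for j in range(i + 3, len(dna) - 2, 3):
                    if dna[j:j + 3] in STOPS:      # first in-frame stop
                        if (j - i) // 3 >= min_codons:
                            yield (i, j + 3, frame)
                        i = j                      # resume after this ORF
                        break
            i += 3

# Example: one short ORF (ATG AAA TTT TAA) embedded in frame 0.
print(list(find_orfs("ATGAAATTTTAACCC")))  # -> [(0, 12, 0)]
```

A full predictor would also scan the reverse complement and score candidate ORFs statistically, but the structure of the problem is the same.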
Synbio Technologies is a DNA technology company that specializes in synthetic biology research. We can artificially design and engineer biological systems and living organisms for the purposes of improving applications for industry or biological research. Synbio Technologies has its own professional synthetic biology platform to provide integrated solutions for all of our customers’ synthetic biology research.
FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church
Progress in synthetic biology and genetic engineering promise to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?
Topics discussed in this episode include:
- Existential risk
- Computational substrates and AGI
- Genetics and aging
- Risks of synthetic biology
- Obstacles to space colonization
- Great Filters, consciousness, and eliminating suffering
3:58 What are the most important issues in the world?
12:20 Collective intelligence, AI, and the evolution of computational systems
33:06 Where we are with genetics
38:20 Timeline on progress for anti-aging technology
39:29 Synthetic biology risk
46:19 George’s thoughts on COVID-19
49:44 Obstacles to overcome for space colonization
56:36 Possibilities for “Great Filters”
59:57 Genetic engineering for combating climate change
01:02:00 George’s thoughts on the topic of “consciousness”
01:08:40 Using genetic engineering to phase out voluntary suffering
01:12:17 Where to find and follow George
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.
Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Professor George Church on existential risk, the evolution of computational systems, synthetic-bio risk, aging, space colonization, and more. We’re skipping the AI Alignment Podcast episode this month, but I intend to have it resume again on the 15th of June. Some quick announcements for those unaware, there is currently a live survey that you can take about the FLI and AI Alignment Podcasts. And that’s a great way to voice your opinion about the podcast, help direct its evolution, and provide feedback for me. You can find a link for that survey on the page for this podcast or in the description section of wherever you might be listening.
The Future of Life Institute is also in the middle of its search for the 2020 winner of the Future of Life Award. The Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients of the Future of Life Institute award were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or in the description of wherever you might be listening. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered pay outs to the person who invited the nomination winner, and so on. You can find details about that on the page.
George Church is Professor of Genetics at Harvard Medical School and Professor of Health Sciences and Technology at Harvard and MIT. He is Director of the U.S. Department of Energy Technology Center and Director of the National Institutes of Health Center of Excellence in Genomic Science. George leads Synthetic Biology at the Wyss Institute, where he oversees the directed evolution of molecules, polymers, and whole genomes to create new tools with applications in regenerative medicine and bio-production of chemicals. He helped initiate the Human Genome Project in 1984 and the Personal Genome Project in 2005. George invented the broadly applied concepts of molecular multiplexing and tags, homologous recombination methods, and array DNA synthesizers. His many innovations have been the basis for a number of companies including Editas, focused on gene therapy, Gen9bio, focused on Synthetic DNA, and Veritas Genetics, which is focused on full human genome sequencing. And with that, let’s get into our conversation with George Church.
So I just want to start off here with a little bit of a bigger picture about what you care about most and see as the most important issues today.
George Church: Well, there are two categories of importance. One is things that are very common and so affect many people. And then there are things that are very rare but very impactful nevertheless. Those are my two top categories. They weren’t when I was younger. I didn’t consider either of them that seriously. So examples of very common things are age-related diseases, infectious diseases. They can affect all 7.7 billion of us. Then on the rare end would be things that could wipe out all humans or all civilization or all living things: asteroids, supervolcanoes, solar flares, and engineered or costly natural pandemics. So those are things that I think are very important problems. Then we have the research to enhance wellness and minimize those catastrophes. The third category, somewhat related to those two, is things we can do to, say, get us off the planet: things that would be highly preventative against total failure.
Lucas Perry: So in terms of these three categories, how do you see the current allocation of resources worldwide and how would you prioritize spending resources on these issues?
George Church: Well the current allocation of resources is very different from the allocations that I would set for my own research goals and what I would set for the world if I were in charge, in that there’s a tendency to be reactive rather than preventative. And this applies to both therapeutics versus preventatives and the same thing for environmental and social issues. All of those, we feel like it somehow makes sense or is more cost-effective, but I think it’s an illusion. It’s far more cost-effective to do many things preventatively. So, for example, if we had preventatively had a system of extensive testing for pathogens, we could probably save the world trillions of dollars on one disease alone with COVID-19. I think the same thing is true for global warming. A little bit of preventative environmental engineering for example in the Arctic where relatively few people would be directly engaged, could save us disastrous outcomes down the road.
So I think we’re prioritizing a very tiny fraction for these things. Aging and preventative medicine is maybe a percent of the NIH budget, and each institute sets aside about a percent to 5% on preventative measures. Gene therapy is another one. Orphan drugs, very expensive therapies, millions of dollars per dose versus genetic counseling which is now in the low hundreds, soon will be double digit dollars per lifetime.
Lucas Perry: So in this first category of very common widespread issues, do you have any other things there that you would add on besides aging? Like aging seems to be the kind of thing in culture where it’s recognized as an inevitability so it’s not put on the list of top 10 causes of death. But lots of people who care about longevity and science and technology and are avant-garde on these things would put aging at the top because they’re ambitious about reducing it or solving aging. So are there other things that you would add to that very common widespread list, or would it just be things from the top 10 causes of mortality?
George Church: Well infection was the other one that I included in the original list of common diseases. Infectious diseases are not so common in the wealthiest parts of the world, but they are still quite common worldwide: HIV, TB, and malaria are still quite common, with millions of people dying per year. Nutrition is another one that tends to be more common in the poorer parts of the world and still results in death. So the top three would be aging-related diseases, infectious diseases, and nutrition.
And even if you’re not interested in longevity and even if you believe that aging is natural, in fact some people think that infectious diseases and nutritional deficiencies are natural. But putting that aside, if we’re attacking age-related diseases, we can use preventative medicine and aging insights into reducing those. So even if you want to neglect longevity that’s unnatural, if you want to address heart disease, strokes, lung disease, falling down, infectious disease, all of those things might be more easily addressed by aging studies and therapies and preventions than by a frontal assault on each micro disease one at a time.
Lucas Perry: And in terms of the second category, existential risk, if you were to rank order the likelihood and importance of these existential and global catastrophic risks, how would you do so?
George Church: Well you can rank their probability based on past records. So, we have some records of supervolcanoes, solar activity, and asteroids. So that’s one way of calculating probability. And then you can also calculate the impact. So it’s a product, the probability and impact for the various kinds of recorded events. I mean I think they’re similar enough that I’m not sure I would rank order those three.
And then pandemics, whether natural or human-influenced, probably a little more common than those first three. And then climate change. There are historic records but it’s not clear that they’re predictive. The probability of an asteroid hitting probably is not influenced by human presence, but climate change probably is and so you’d need a different model for that. But I would say that that is maybe the most likely of the lot for having an impact.
Lucas Perry: Okay. The Future of Life Institute, the things that we’re primarily concerned about in terms of this existential risk category would be the risks from artificial general intelligence and superintelligence, also synthetic bio-risk coming up in the 21st century more and more, and then accidental nuclear war would also be very bad, maybe not an existential risk. That’s arguable. Those are sort of our central concerns in terms of the existential risk category.
Relatedly, the Future of Life Institute sees itself as part of the effective altruism community, which, when ranking global priorities, has four areas of essential consideration for impact. The first is global poverty. The second is animal suffering. And the third is long-term future and existential risk issues, having to do mainly with anthropogenic existential risks. The fourth is meta-effective altruism, so I won’t include that. They also tend to make the same ranking, with the long-term risks of advanced artificial intelligence being the key issues that they’re worried about.
How do you feel about these perspectives or would you change anything?
George Church: My feeling is that natural intelligence is ahead of artificial intelligence and will stay there for quite a while, partly because synthetic biology has a steeper slope and I’m including the enhanced natural intelligence in the synthetic biology. That has a steeper upward slope than totally inorganic computing now. But we can lump those together. We can say artificial intelligence writ large to include anything that our ancestors didn’t have in terms of intelligence, which could include enhancing our own intelligence. And I think especially should include corporate behavior. Corporate behavior is a kind of intelligence which is not natural, is widespread, and is likely to change, mutate, evolve very rapidly, faster than human generation times, probably faster than machine generation times.
Nukes I think are aging and maybe are less attractive as a defense mechanism. I think they’re being replaced by intelligence, artificial or otherwise, or collective and synthetic biology. I mean I think that if you wanted to have mutually assured destruction, it would be more cost-effective to do that with syn-bio. But I would still keep it on the list.
So I agree with that list. I’d just like nuanced changes to where the puck is likely to be going.
Lucas Perry: I see. So taking into account and reflecting on how technological change in the short to medium term will influence how one might want to rank these risks.
George Church: Yeah. I mean I just think that a collective human enhanced intelligence is going to be much more disruptive potentially than AI is. That’s just a guess. And I think that nukes will just be part of a collection of threatening things that people do. Probably it’s more threatening to cause collapse of an electric grid or a pandemic or some other economic crash than nukes.
Lucas Perry: That’s quite interesting and is very different than the story that I have in my head, and I think will also be very different than the story that many listeners have in their heads. Could you expand and unpack your timelines and beliefs about why you think that collective organic intelligence will be ahead of AI? Could you say, I guess, when you would expect AI to surpass collective bio intelligence and some of the reasons again for why?
George Church: Well, I don’t actually expect silicon-based intelligence to ever bypass in every category. I think it’s already super good at storage retrieval and math. But that’s subject to change. And I think part of the assumptions have been that we’ve been looking at a Moore’s law projection while most people haven’t been looking at the synthetic biology equivalent and haven’t noticed that the Moore’s law might finally be plateauing, at least as it was originally defined. So that’s part of the reason I think for the excessive optimism, if you will, about artificial intelligence.
Lucas Perry: The Moore’s law thing has to do with hardware and computation, right?
George Church: Yeah.
Lucas Perry: That doesn’t say anything about how algorithmic efficiency and techniques and tools are changing, and the access to big data. Something we’ve talked about on this podcast before is that many of the biggest insights and jumps in deep learning and neural nets haven’t come from new techniques but have come from more massive and massive amounts of compute on data.
George Church: Agree, but those data are also available to humans as big data. I think maybe the compromise here is that it’s some hybrid system. I’m just saying that humans plus big data plus silicon-based computers, even if they stay flat in hardware is going to win over either one of them separately. So maybe what I’m advocating is hybrid systems. Just like in your brain you have different parts of your brain that have different capabilities and functionality. In a hybrid system we would have the wisdom of crowds, plus compute engines, plus big data, but available to all the parts of the collective brain.
Lucas Perry: I see. So it’s kind of like, I don’t know if this is still true, but I think at least at some point it was true, that the best teams at chess were AIs plus humans?
George Church: Correct, yeah. I think that’s still true. But I think it will become even more true if we start altering human brains, which we have a tendency to try to do already via education and caffeine and things like that. But there’s really no particular limit to that.
Lucas Perry: I think one of the things that you said was that you don’t think that AI alone will ever be better than biological intelligence in all ways.
George Church: Partly because biological intelligence is a moving target. The first assumption was that the hardware would keep improving on Moore’s law, which it isn’t. The second assumption was that we would not alter biological intelligence. There’s one moving target which was silicon and biology was not moving, when in fact biology is moving at a steeper slope both in terms of hardware and algorithms and everything else and we’re just beginning to see that. So I think that when you consider both of those, it at least sows the seed of uncertainty as to whether AI is inevitably better than a hybrid system.
Lucas Perry: Okay. So let me just share the kind of story that I have in my head and then you can say why it might be wrong. AI researchers have been super wrong about predicting how easy it would be to make progress on AI in the past. So taking predictions with many grains of salt, if you interview say the top 100 AI researchers in the world, they’ll give a 50% probability of there being artificial general intelligence by 2050. That could be very wrong. But they gave like a 90% probability of there being artificial general intelligence by the end of the century.
And the story in my head says that I expect there to be bioengineering and genetic engineering continuing. I expect there to be designer babies. I expect there to be enhancements to human beings further and further on as we get into the century in increasing capacity and quality. But there are computational and substrate differences between computers and biological intelligence like the clock speed of computers can be much higher. They can compute much faster. And then also there’s this idea about the computational architectures in biological intelligences not being privileged or only uniquely available to biological organisms such that whatever the things that we think are really good or skillful or they give biological intelligences a big edge on computers could simply be replicated in computers.
And then there is an ease of mass manufacturing compute and then emulating those systems on computers such that the dominant and preferable form of computation in the future will not be on biological wetware but will be on silicon. And for that reason at some point there’ll just be a really big competitive advantage for the dominant form of compute and intelligence and life on the planet to be silicon based rather than biological based. What is your reaction to that?
George Church: You very nicely summarized what I think is a dominant worldview of people that are thinking about the future, and I’m happy to give a counterpoint. I’m not super opinionated but I think it’s worthy of considering both because the reason we’re thinking about the future is we don’t want to be blindsided by it. And this could be happening very quickly by the way because both revolutions are ongoing as is the merger.
Now clock speed, my guess is that clock speed may not be quite as important as energy economy. And that’s not to say that both systems, let’s call them bio and non-bio, can’t optimize energy. But if you look back at sort of the history of evolution on earth, the fastest clock speeds, like bacteria and fruit flies, aren’t necessarily more successful in any sense than humans. They might have more bio mass, but I think humans are the only species with our slow clock speed relative to bacteria that are capable of protecting all of the species by taking us to a new planet.
And clock speed is only important if you’re in a direct competition in a fairly stable environment where the fastest bacteria win. But worldwide most of the bacteria are actually very slow growers. If you look at energy consumption right now, which both of them can improve, there are biological compute systems that are arguably a million times more energy-efficient at even tasks where the biological system wasn’t designed or evolved for that task, but it can kind of match. Now there are other things where it’s hard to compare, either because of the intrinsic advantage that either the bio or the non-bio system has, but where they are sort of on the same framework, it takes 100 kilowatts of power to run say Jeopardy! and Go on a computer and the humans that are competing are using considerably less than that, depending on how you calculate all the things that is required to support the 20 watt brain.
Lucas Perry: What do you think the order of efficiency difference is?
George Church: I think it’s a million fold right now. And this is largely a hardware thing. I mean there are algorithmic components that will be important. But I think that one of the advantages that biochemical systems have is that they are intrinsically atomically precise. While Moore’s law seems to be plateauing somewhere around 3 nanometer fabrication resolution, that’s off by maybe a thousand fold from atomic resolution. So that’s one thing, that as you go out many years, they will either be converging on or merging in some ways so that you get the advantages of atomic precision, the advantages of low energy and so forth. So that’s why I think that we’re moving towards a slightly more molecular future. It may not be recognizable as either our silicon von Neumann or other computers, nor totally recognizable as a society of humans.
Lucas Perry: So is your view that we won’t reach artificial general intelligence like the kind of thing which can reason about as well as about humans across all the domains that humans are able to reason? We won’t reach that on non-bio methods of computation first?
George Church: No, I think that we will have AGI in a number of different substrates: mechanical, silicon, quantum computing. Various substrates will be capable of doing artificial general intelligence. It’s just that the ones that do it in the most economic way will be the ones that we will tend to use. There’ll be some cute museum that will have a collection of all the different ways, like the tinker toy computer that did Tic-Tac-Toe. Well, that’s in a museum somewhere next to Danny Hillis, but we’re not going to be using that for AGI. And I think there’ll be a series of artifacts like that; in practice it will be a very pragmatic collection of things that make economic sense.
So just for example, it’s harder to make a copy of a biological brain. That’s one thing that appears to be an advantage of non-bio computers right now: you can make a copy of even large data sets for a fairly small expenditure of time, cost, and energy, while to educate a child takes decades and in the end you don’t have anything totally resembling the parents and teachers. I think that’s subject to change. For example, we now have storage of data in DNA form, which is about a million times denser than any comparable non-chemical, non-biological system, and you can make a copy of it for hundreds of joules of energy and pennies. So you can hold an exabyte of data in the palm of your hand and you can make a copy of it relatively easily.
Now that’s not a mature technology, but it shows where we’re going. If we’re talking 100 years, there’s no particular reason why you couldn’t have that embedded in your brain and input and output to it. And by the way, the cost of copying that is very close to the thermodynamic limit for making copies of bits, while computers are nowhere near that. They’re off by a factor of a million.
Lucas Perry: Let’s see if I get this right. Your view is that there is this computational energy economy benefit. There is this precisional element which is of benefit, and that because there are advantages to biological computation, we will want to merge the best aspects of biological computation with non-biological in order to sort of get best of both worlds. So while there may be many different AGIs on offer on different substrates, the future looks like hybrids.
George Church: Correct. And it’s even possible that silicon is not in the mix. I’m not predicting that it’s not in the mix. I’m just saying it’s possible. It’s possible that an atomically precise computer is better at quantum computing or is better at clock time or energy.
Lucas Perry: All right. So I do have a question later about this kind of thing and space exploration and reducing existential risk via further colonization which I do want to get into later. I guess I don’t have too much more to say about our different stories around here. I think that what you’re saying is super interesting and challenging in very interesting ways. I guess the only thing I would have to say is I guess I don’t know enough, but you said that the computation energy economy is like a million fold more efficient.
George Church: That’s for copying bits, for DNA. For doing complex tasks, for example Go, Jeopardy!, or Einstein’s annus mirabilis, we’re typically pitting a 20-watt brain plus support structure against a 100-kilowatt computer. And I would say at least in the case of Einstein’s 1905 we win, even though we lose at Go and Jeopardy!, which is another interesting thing: humans have a great deal more variability. And if you take the extreme values, like one person in one year, Einstein in 1905, as the representative rather than the average person and the average year for that person, well, if you make two computers, they are likely to be nearly identical, which is both a plus and a minus in this case. Now if you make Einstein in 1905 the average for humans, then you have a completely different set of goalposts for the AGI than just being able to pass a basic Turing test where you’re simulating someone of average human interest and intelligence.
Lucas Perry: Okay. So two things from my end then. First is, do you expect AGI to first come from purely non-biological silicon-based systems? And then the second thing is no matter what the system is, do you still see the AI alignment problem as the central risk from artificial general intelligence and superintelligence, which is just aligning AIs with human values and goals and intentions?
George Church: I think the further we get from human intelligence, the harder it is to convince ourselves that we can educate them, and the better they will be at fooling us. It doesn’t mean they’re more intelligent than us. It’s just that they’re alien. It’s like a wolf can fool us when we’re out in the woods.
Lucas Perry: Yeah.
George Church: So I think that even with exceptional humans it’s hard to guarantee that we really understand their ethics. If you have someone who is a sociopath or high-functioning autistic, we don’t really know after 20 years of ethics education whether they are actually thinking about it the same way we are, or even in a way compatible with the way that we are. We being in this case neurotypicals, although I’m not sure I am one. But anyway.
I think that this becomes a big problem with AGI, and it may actually put a damper on it. Part of the assumption so far is we won’t change humans because we have to get ethics approval for changing humans. But we’re increasingly getting ethics approval for changing humans. I mean gene therapies are now approved and increasing rapidly, all kinds of neuro-interfaces and so forth. So I think that that will change.
Meanwhile, the silicon-based AGI as we approach it will change in the opposite direction. It will be harder and harder to get approval to do manipulations in those systems, partly because there’s risk, and partly because there’s sympathy for the systems. Right now there’s very little sympathy for them. But once computers have an AGI level of, say, IQ of 70 or something like that for a severely mentally disabled person so it can pass the Turing test, then they should start getting the rights of a disabled person. And once they have the rights of a disabled person, that would include the right to not be unplugged and the right to vote. And then that creates a whole bunch of problems that we won’t want to address, except as academic exercises or museum specimens, so that we can say, hey, 50 years ago we created this artificial general intelligence, just like we went to the Moon once. They’d be stunts more than practical demonstrations because they will have rights and because it will represent risks that will not be true for enhanced human societies.
So I think more and more we’re going to be investing in enhanced human societies and less and less in the uncertain silicon-based ones. That’s just a guess. It’s based not on technology but on social criteria.
Lucas Perry: I think that it depends what kind of ethics and wisdom we’ll have at that point in time. Generally I think that we may not want to take conventional human notions of personhood and apply them to things where they might not make sense. Like if you have a system that doesn’t mind being shut off, and it can be restarted, why is it so unethical to shut it off? Or if shutting it off doesn’t make it suffer. Suffering may be some sort of higher-level criterion.
George Church: By the same token you can make human beings that don’t mind being shut off. That won’t change our ethics much I don’t think. And you could also make computers that do mind being shut off, so you’ll have this continuum on both sides. And I think we will have sympathetic rules, but combined with the risk, which is the risk that they can hurt you, the risk that if you don’t treat them with respect, they will be more likely to hurt you, the risk that you’re hurting them without knowing it. For example, if you have somebody with locked-in syndrome, you could say, “Oh, they’re just a vegetable,” or you could say, “They’re actually feeling more pain than I am because they have no agency, they have no ability to control their situation.”
So I think creating computers that could have the moral equivalent of locked-in syndrome or some other pain without the ability to announce their pain could be very troubling to us. And we would only overcome it if that were a solution to an existential problem or had some gigantic economic benefit. I’ve already called that into question.
Lucas Perry: So then, in terms of the first AGI, do you have a particular substrate that you imagine it coming online on?
George Church: My guess is it will probably be very close to what we have right now. As you said, it’s going to be algorithms and databases and things like that. And it will probably at first be a stunt, in the same sense that Go and Jeopardy! are stunts. It’s not clear that those are economically important. A computer that could pass the Turing test would make nice chatbots and phone answering machines and things like that. But beyond that it may not change our world, unless we solve energy issues and so on. So I think, to answer your question, we’re so close to it now that it might be based on an extrapolation of current systems.
Quantum computing I think is maybe a more special case. Just because it’s good at encryption, and encryption has great societal utility. I haven’t yet seen encryption described as something that’s mission-critical for space flight or curing diseases, other than the social components of those. And quantum simulation may be beaten by building actual quantum systems. So for example, atomically precise systems that you can build with synthetic biology are quantum systems that are extraordinarily hard to predict, but very easy to synthesize and measure.
Lucas Perry: Is your view here that the first AGI will be on the economic and computational scale of a supercomputer? That is, suppose we’re still just leveraging really, really big amounts of data, and we haven’t made extremely efficient advancements in algorithms that jump the efficiency a lot, but rather current trends continue with more and more data and some algorithmic improvements. Then the first system would be really big and clunky and expensive, and that thing could recursively try to make itself cheaper, and the direction it would move in would be increasingly creating hardware which has synthetic bio components?
George Church: Yeah, I’d think that that already exists in a certain sense. We have a hybrid system that is self-correcting, self-improving at an alarming rate. But it is a hybrid system. In fact, it’s such a complex hybrid system that you can’t point to a room where it can make a copy of itself. You can’t even point to a building, possibly not even a state where you can make a copy of this self-modifying system because it involves humans, it involves all kinds of fab labs scattered around the globe.
We could set a goal to be able to do that, but I would argue we’re much closer to achieving that goal with a human being. You can have a room in which you can make a copy of a human, and if that is augmentable, that human can also make computers. Admittedly it would be a very primitive computer if you restricted that human to primitive supplies and a single room. But anyway, I think that’s the direction we’re going. And we’re going to have to get good at doing things in confined spaces because we’re not going to be able to easily duplicate planet Earth. We’re probably going to have to make a smaller version of it and send it off, and how big that is we can discuss later.
Lucas Perry: All right. Cool. This is quite perspective shifting and interesting, and I will want to think about this more in general going forward. I want to spend just a few minutes on this next question. I think it’ll just help give listeners a bit of overview. You’ve talked about it in other places. But I’m generally interested in getting a sense of where we currently stand with the science of genetics in terms of reading and interpreting human genomes, and what we can expect on the short to medium term horizon in human genetic and biological sciences for health and longevity?
George Church: Right. The short version is that we have gotten many factors of 10 improvement in speed, cost, accuracy, and interpretability: a 10-million-fold reduction in price from $3 billion for a poor-quality, non-clinical-quality sort of half a genome, in that each of us has two genomes, one from each parent. So we’ve gone from $3 billion to $300. It will probably be $100 by the middle of the year, and then will keep dropping. There’s no particular second law of thermodynamics or Heisenberg stopping us, at least for another million fold. That’s where we are in terms of technically being able to read, and for that matter write, DNA.
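The cost arithmetic Church cites can be checked in a couple of lines. The dollar figures below are the round numbers from the conversation, not exact sequencing prices:

```python
# Illustrative arithmetic only: the fold reductions in genome
# sequencing cost cited in the interview (round numbers).
costs = {
    "draft human genome": 3_000_000_000,  # ~$3 billion
    "per genome today": 300,              # ~$300 at the time of the interview
}

fold = costs["draft human genome"] / costs["per genome today"]
print(f"{fold:.0e}-fold reduction")  # → 1e+07, i.e. ten-million-fold

# Church notes no physical law blocks another ~million-fold drop;
# at that point a genome would cost a tiny fraction of a cent.
print(f"after another 1e6-fold: ${costs['per genome today'] / 1e6:.4f}")
```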
But on the interpretation side, there certainly are genes that we don’t know what they do, and diseases where we don’t know what causes them. There’s a great vast amount of ignorance. But that ignorance may not be as impactful as we sometimes think. It’s often said that common diseases, or so-called complex multigenic diseases, are off in the future. But I would reframe that slightly for everyone’s consideration: many of these common diseases are diseases of aging. Not all of them, but many, many of them that we care about. And it could be that attacking aging as a specific research program may be more effective than trying to list all the millions of small genetic changes that have small phenotypic effects on these complex diseases.
So that’s another aspect of the interpretation where we don’t necessarily have to get super good at so called polygenic risk scores. We will. We are getting better at it, but it could be in the end a lot of the things that we got so excited about precision medicine, and I’ve been one of the champions of precision medicine since before it was called that. But precision medicine has a potential flaw in it, which is it’s the tendency to work on the reactive cures for specific cancers and inherited diseases and so forth when the preventative form of it which could be quite generic and less personalized might be more cost-effective and humane.
So for example, taking inherited diseases, we spend a million to multiple millions of dollars per individual on people with inherited diseases, while a $100 genetic diagnosis could be used to prevent that. And generic solutions like aging reversal or aging prevention might stop cancer more effectively than trying to stop it once it gets to the metastatic stage, into which a great deal of resources are put. That’s my update on where genomics is. There’s a lot more that could be said.
Lucas Perry: Yeah. As a complete layperson in terms of biological sciences, stopping aging to me sounds like repairing and cleaning up human DNA and the human genome such that information that is lost over time is repaired. Correct me if I’m wrong, or explain a little bit about what the solution to aging might look like.
George Church: I think there’s two kind of closer related schools of thought which one is that there’s damage that you need to go in there and fix the way you would fix a pothole. And the other is that there’s regulation that informs the system how to fix itself. I believe in both. I tend to focus on the second one.
If you take a very young cell, say a fetal cell, it has a tendency to repair much better than an 80-year-old adult cell. The immune system of a toddler is much more capable than that of a 90-year-old. This isn’t necessarily due to damage. This is due to the epigenetic, so-called regulation of the system. So one cell is convinced that it’s young (I’m going to use some anthropomorphic terms here). You can take an 80-year-old cell (actually up to 100 years has now been done) and reprogram it into an embryo-like state, through for example Yamanaka factors, named after Shinya Yamanaka. And that reprogramming resets many, though not all, of the features, such that it now behaves like a young, non-senescent cell, even though the 100-year-old fibroblast you took it from would only replicate a few times before it senesced and died.
Things like that seem to convince us that aging is reversible and you don’t have to micromanage it. You don’t have to go in there and sequence the genome and find every bit of damage and repair it. The cell will repair itself.
Now there are some things like, if you delete a gene, it’s gone unless you have a copy of it, in which case you could copy it over. But those cells will probably die off. And the same thing happens in the germline when you’re passing from parent to kid. Those sorts of things can happen, and the process of weeding them out is not terribly humane right now.
Lucas Perry: Do you have a sense of timelines on progress against aging through the century?
George Church: There’s been a lot of wishful thinking for centuries on this topic. But I think we have a wildly different scenario now, partly because this exponential improvement in technologies, reading and writing DNA and the list goes on and on in cell biology and so forth. So I think we suddenly have a great deal of knowledge of causes of aging and ways to manipulate those to reverse it. And I think these are all exponentials and we’re going to act on them very shortly.
We already are seeing some aging drugs, small molecules that are in clinical trials. My lab just published a combination gene therapy that will hit five different diseases of aging in mice and now it’s in clinical trials in dogs and then hopefully in a couple of years it will be in clinical trials in humans.
We’re not talking about centuries here. We’re talking about the sort of time that it takes to get things through clinical trails, which is about a decade. And a lot of stuff going on in parallel which then after one decade of parallel trials would be merging into combined trials. So a couple of decades.
Lucas Perry: All right. So I’m going to get in trouble in here if I don’t talk to you about synthetic bio risk. So, let’s pivot into that. What are your views and perspectives on the dangers to human civilization that an increasingly widespread and more advanced science of synthetic biology will pose?
George Church: I think it’s a significant risk. Getting back to the very beginning of our conversation, I think it’s probably one of the most significant existential risks. And I think that preventing it is not as easy as nukes. Not that nukes are easy, but it’s harder. Partly because it’s becoming cheaper and the information is becoming more widespread.
But it is possible. Part of it depends on having many more positive, societally altruistic do-gooders than bad actors. It would be helpful if we could also make a big impact on poverty, diseases associated with poverty, and psychiatric disorders. The kind of thing that causes unrest and dissatisfaction is what tips the balance, where one rare individual or a small team will do something that would otherwise be unthinkable even for them. But if they’re sociopaths, or they feel they represent a disadvantaged category of people, then they feel justified.
So we have to get at some of those core things. It would also be helpful if we were more isolated. Right now we are a very well-mixed pot, which puts us at risk for natural as well as engineered diseases. So if some of us lived in sealed environments on Earth, very similar to the sealed environments that we would need in space, that would prepare us for going into space, and some of them would actually be in space. The further we are from the mayhem of our wonderful current society, the better. If a significant fraction of the population were isolated, either on Earth or elsewhere, it would lower the risk of all of us dying.
Lucas Perry: That makes sense. What are your intuitions about the offense/defense balance in synthetic bio risk? Like if we have 95% to 98% synthetic bio do-gooders and a small percentage of malevolent actors, or actors who want more power, how do you see the relative strengths and weaknesses of offense versus defense?
George Church: I think as usual it’s a little easier to do offense, though it can go back and forth. Certainly it seems easier to defend yourself from an ICBM than from something that could be spread in a cough, and we’re seeing that in spades right now. I think the fraction of white hats versus black hats is much better than 98%, and it has to be. It has to be more like a billion to one. And even then it’s very risky. But yeah, it’s not easy to protect.
Now you can do surveillance, so that you can restrict research as best you can, but it’s a numbers game. It’s a combination of removing incentives, adding strong surveillance, and whistleblowers who are not fearful of false positives. The suspicious package in the airport should be something you look at, even though most of them are not actually bombs. We should tolerate a very high rate of false positives. But surveillance is not something we’re super good at. It falls in the category of preventative medicine, and we far prefer to be reactive: to wait until somebody releases some pathogen and then say, “Oh, yeah, yeah, we can prevent that from happening again in the future.”
Lucas Perry: Is there an opportunity for beefing up the human immune system, or for public early-warning detection systems for powerful and deadly synthetic bio agents?
George Church: Well, yes is the simple answer. If we boost our immune systems in a public way, which it almost would have to be (there’d be much discussion about how to do that), then pathogens that get around those boosts might become more common. In terms of surveillance, I proposed in 2004 that we had an opportunity, and still do, of doing surveillance on all synthetic DNA. I think that really should be 100% worldwide. Right now it’s 80% or so. It is relatively inexpensive to fully implement, and the fact that we’ve done 80% already gets us closer to that.
Lucas Perry: Yeah. So, funny enough I was actually just about to ask you about that paper that I think you’re referencing. So in 2004 you wrote A Synthetic Biohazard Non-proliferation Proposal, in anticipation of a growing dual use risk of synthetic biology, which proposed in part the sale and registry of certain synthesis machines to verified researchers. If you were to write a similar proposal today, are there some base elements of it you would consider including, especially since the ability to conduct synthetic biology research has vastly proliferated since then? And just generally, are you comfortable with the current governance of dual use research?
George Church: I probably would not change that 2004 white paper very much. Amazingly the world has not changed that much. There still are a very limited number of chemistries and devices and companies, so that’s a bottleneck which you can regulate and is being regulated by the International Gene Synthesis Consortium, IGSC. I did advocate back then and I’m still advocating that we get closer to an international agreement. Two sectors generally in the United Nations have said casually that they would be in favor of that, but we need essentially every level from the UN all the way down to local governments.
There’s really very little pushback today. There was some pushback back in 2004 where the company’s lawyers felt that they would be responsible or there would be an invasion of privacy of their customers. But I think eventually the rationale of high risk avoidance won out, so now it’s just a matter of getting full compliance.
One of the unfortunate things is that the better you are at avoiding an existential risk, the less people know about it. In fact, we did so well on Y2K that it’s uncertain whether we needed to do anything about Y2K at all, and hopefully the same will be true for a number of disasters that we avoid without most of the population even knowing how close we were.
Lucas Perry: So the main surveillance intervention here would be heavy monitoring, regulation, and tracking of the synthesis machines? And then also a watchdog organization that would inspect the products of said machines?
George Church: Correct.
Lucas Perry: Okay.
George Church: Right now most of the DNA is ordered: you send your order over the internet, and they send back the DNA. Those same principles have to apply to desktop devices. There has to be some kind of approval showing that you are qualified to make a particular DNA before the machine will make it. And it has to be protected against hardware and software hacking, which is a challenge. But again, it’s a numbers game.
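The screening logic Church describes, checking an ordered sequence against a watchlist before synthesis, can be caricatured in a few lines. Everything here (the sequences, the window size, the watchlist) is invented for illustration; real biosecurity screening, such as that practiced by IGSC members, uses curated pathogen databases and far more sophisticated fuzzy matching:

```python
# Toy sketch of synthesis-order screening: flag any order that shares
# a long exact substring (k-mer) with a watchlist sequence.
# All sequences here are made up for illustration.
K = 12  # window length; real screens use longer, fuzzier matches

def kmers(seq: str, k: int = K) -> set:
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

watchlist = ["ATGCCGTTAGGCATCGATCG"]  # hypothetical flagged sequence
watch_kmers = set().union(*(kmers(s) for s in watchlist))

def screen(order: str) -> bool:
    """Return True if the order should be held for human review."""
    return not watch_kmers.isdisjoint(kmers(order))

print(screen("TTTTATGCCGTTAGGCATTTTT"))  # shares a 12-mer → True
print(screen("AAAAAAAAAAAAAAAAAAAA"))    # no overlap → False
```

The point of the exact-substring design is that a desktop synthesizer could run such a check locally before synthesizing, which is why Church emphasizes protecting the device against hardware and software tampering.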
Lucas Perry: So on the topic of biological risk, we’re currently in the context of the COVID-19 pandemic. What do you think humanity should take as lessons from COVID-19?
George Church: Well, I think the big one is testing. Testing is probably the fastest way out of it right now. The geographical locations that have pulled out of it fastest were the ones that were best at testing and isolation. If your testing is good enough, you don’t even have to have very good contact tracing, although that’s also valuable. The longer shots are cures and vaccines, and those are long-term and uncertain, though not strictly necessary. There’s no guarantee that we will come up with a cure or a vaccine. For example, HIV, TB, and malaria do not have great vaccines, and most of them don’t have great stable cures. HIV has had a whole series of treatments over time, but not even cures; they’re more maintenance, management.
I sincerely hope that coronavirus is not in that category of HIV, TB, and malaria. But we can’t do public health based on hopes alone. So, testing. I’ve been requesting a bio weather map, and working toward improving the technology to do it, since around 2002, before SARS in 2003. This bold idea of a bio weather map was part of the inspiration for the Personal Genome Project. We should be at least as interested in what biology is doing geographically as we are in what the low-pressure fronts are doing geographically. And it could be extremely inexpensive, certainly relative to the multi-trillion-dollar cost of one disease.
Lucas Perry: So given the ongoing pandemic, what has COVID-19 demonstrated about human global systems in relation to existential and global catastrophic risk?
George Church: I think it’s a dramatic demonstration that we’re more fragile than we would like to believe. It’s a demonstration that we tend to be more reactive than proactive or preventative. And it’s a demonstration that we’re heterogeneous. That there are geographical reasons and political systems that are better prepared. And I would say at this point the United States is probably among the least prepared, and that was predictable by people who thought about this in advance. Hopefully we will be adequately prepared that we will not emerge from this as a third world nation. But that is still a possibility.
I think it’s extremely important to make our human systems, especially global systems more resilient. It would be nice to take as examples the countries that did the best or even towns that did the best. For example, the towns of Vo, Italy and I think Bolinas, California, and try to spread that out to the regions that did the worst. Just by isolation and testing, you can eliminate it. That sort of thing is something that we should have worldwide. To make the human systems more resilient we can alter our bodies, but I think very effective is altering our social structures so that we are testing more frequently, we’re constantly monitoring both zoonotic sources and testing bushmeat and all the places where we’re getting too close to the animals. But also testing our cities and all the environments that humans are in so that we have a higher probability of seeing patient zero before they become a patient.
Lucas Perry: The last category that you brought up at the very beginning of this podcast was preventative measures, and part of that was not having all of our eggs in one basket. That has to do with, say, Mars colonization, or colonization of other moons which are perhaps more habitable, and then eventually Alpha Centauri and beyond. So with advanced biology and advanced artificial intelligence, we’ll have better tools and information for successful space colonization. What do you see as the main obstacles to overcome for colonizing the solar system and beyond?
George Church: So we’ll start with the solar system. Most of the solar system is not pleasant compared to Earth. It’s a vacuum and it’s cold, including Mars and many of the moons. There are moons that have more liquid water than Earth, but it typically requires some drilling to get down to it. There’s radiation. There’s low gravity. And we’re not adapted to any of that.
So we might have to do some biological changes. They aren’t necessarily germline, but they’ll be the equivalent. There are things you could do: you can simulate gravity with centrifuges, and you can simulate the radiation protection we have on Earth with magnetic fields and thick shielding, the equivalent of 10 meters of water or dirt. But there will be a tendency to try to solve those problems biologically as well. There’ll be issues of infectious disease: which ones we want to bring with us and which ones we want to quarantine away from. That’s an opportunity more than a uniquely space-related problem.
A lot of the barriers I think are biological. We need to practice building colonies. Right now we have never had a completely recycled human system. We have completely recycled plant and animal systems, but none that include humans, and that is partly to do with social issues, hygiene, eating practices, and so forth. I think it can be done, but it should be tested on Earth, because the consequences of failure on a moon or a non-Earth planet are much more severe than if you test it out on Earth. We should have thousands, possibly millions of little space colonies on Earth, and one of my pet projects is making that economically feasible on Earth. Only by heavy testing at that scale will we find the real gotchas and failure modes.
And then the final barrier, which is more in the category that people usually think about, is the economics. If you do the physics calculation of how much energy it takes to raise a kilogram into orbit or out of orbit, it’s much, much less than the current cost per kilogram, by orders of magnitude. So there’s some opportunity for improvement there. So that’s the solar system.
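A back-of-envelope version of that physics calculation, using my own round numbers for orbital speed and altitude rather than anything from the interview:

```python
# Ideal (lossless) energy to put 1 kg into low Earth orbit,
# compared against typical launch prices. Round numbers, for scale only.
m = 1.0            # payload mass, kg
v_orbit = 7800.0   # m/s, approximate LEO orbital speed
g, h = 9.8, 400e3  # surface gravity, assumed ~400 km altitude

kinetic = 0.5 * m * v_orbit**2   # ~30 MJ
potential = m * g * h            # ~4 MJ (flat-gravity approximation)
total_mj = (kinetic + potential) / 1e6
kwh = (kinetic + potential) / 3.6e6

print(f"ideal energy: ~{total_mj:.0f} MJ (~{kwh:.1f} kWh)")
# At roughly $0.10/kWh that is on the order of $1 of electricity,
# versus real launch costs of thousands of dollars per kilogram,
# which is the orders-of-magnitude gap Church is pointing at.
```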
Outside of the solar system, say Proxima b in the Alpha Centauri system and things in that range, there’s nothing particularly interesting between here and there, although there’s nothing to stop us from occupying the vacuum of space. Getting to four and a half light years away either requires a revolution in propulsion and in sustainability within a very small container, or a revolution in the size of the container that we’re sending.
So, one pet project I’m working on is trying to make a nanogram-sized object that would contain the information sufficient for building a civilization, or at least for building a communication device. It’s much easier to accelerate and decelerate a nanogram than any space probe at the scale we currently use.
Lucas Perry: Many of the issues that human beings will face within the solar system and beyond, machines or synthetic computation as they exist today seem more robust against. Then again, there are the things you’ve already talked about, like the computational efficiency and precision for self-repair and other capabilities that modern computers may not have. So I think a little perspective on that would be useful: why we might not expect machines to take the place of humans in many of these endeavors.
George Church: Well, for example, we would be hard pressed even to estimate, and I haven’t seen a good estimate yet, what it would take to build a self-contained device that could make a copy of itself from dirt, or whatever chemicals are available to it on a new planet. But we do know how to do that with humans, or hybrid systems.
Here’s a perfect example of a hybrid system: a human can’t just go out into space; it needs a spaceship. A spaceship can’t go out into space either; it needs a human. So making a replicating system seems like a good idea, both because we are replicating systems and because it lowers the size of the package you need to send. So if you want to have a million people in the Alpha Centauri system, it might be easier to send just a few people and a bunch of frozen embryos or something like that.
Sending an artificial general intelligence is not sufficient. It also has to be able to make a copy of itself, which I think is a much higher hurdle than AGI alone. AGI we will achieve before we achieve AGI plus replication. It may not be much before, but it will probably be before.
In principle, a lot of organisms, including humans, start from single cells, though mammals tend to need more support structure than most other vertebrates. But in principle, if you land a vertebrate fertilized egg in an aquatic environment, it will develop and make copies of itself, and maybe even structures.
So my speculation is that there exists a nanogram cell, about the size of a lot of vertebrate eggs; that there exists a design for a nanogram that would be capable of dealing with a wide variety of harsh environments. We have organisms that thrive everywhere between the freezing point of water and the boiling point, or 100-plus degrees at high pressure. So you have this nanogram that is adapted to a variety of different environments, can reproduce and make copies of itself, and has built into it a great deal of know-how about building things. The same way that building a nest is built into a bird’s DNA, you could program into it an ability to build computers, or a radio, or laser transmitters, so it could communicate and get more information.
So a nanogram could travel at close to the speed of light and then communicate at close to the speed of light once it replicates. I think that illustrates the value of hybrid systems, with, in this particular case, a high emphasis on the biochemical, biological component that’s capable of replicating as the core thing you need for efficient transport.
Lucas Perry: If your claim about hybrid systems is true, then extrapolating to, say, the deep future, if there are any other civilizations out there, the form in which we will meet them will likely also be hybrid systems.
And this point brings me to reflect on something that Nick Bostrom talks about: great filters, supposed points in the evolution and genesis of life throughout the cosmos that are very difficult for life to make it through, such that almost nothing makes it through the filter. This is hypothesized as a way of explaining the Fermi paradox: why is it that there are hundreds of billions of galaxies, and yet we don’t see any alien superstructures and we haven’t met anyone yet?
So, I’m curious to know if you have any thoughts or opinions on what the main great filters to reaching interstellar civilization might be?
George Church: Of all the questions you’ve asked, this is the one where I’m most uncertain. I study, among other things, how life originated, in particular how we make complex biopolymers: ribosomes making proteins, for example, and the genetic code. That strikes me as a pretty difficult thing to have arisen. That’s one filter, maybe much earlier than many people would think.
Another one might be lack of interest: once you get to a certain level of sophistication, you’re happy with your life and your civilization, and then typically you’re overrun by someone or something that is more primitive from your perspective. Then they become complacent, and the cycle repeats itself.
Or the misunderstanding of resources. We’ve seen a number of island civilizations that have gone extinct because they didn’t have a sustainable ecosystem, or they turned inward. Like Easter Island: they got very interested in making statues, and tore down trees in order to do that. So they ended up with an island that didn’t have any trees, and they hadn’t used those trees to build ships so they could populate the rest of the planet. They just miscalculated.
So all of those could be barriers. I don’t know which of them it is. There probably are many planets and moons where if we transplanted life, it would thrive there. But it could be that just making life in the first place is hard and then making intelligence and civilizations that care to grow outside of their planet. It might be hard to detect them if they’re growing in a subtle way.
Lucas Perry: I think the first thing you brought up might be earlier than some people expect, but for many people thinking about great filters, abiogenesis, if that’s the right word, does seem really hard: getting the first self-replicating things going in the ancient oceans. And there seem to be lots of potential filters from there to multicellular organisms, and then to general intelligences like people and beyond.
George Church: But many empires have just become complacent, and they’ve been overtaken by perfectly obvious technology that they could at least have kept up with by spying, if not by invention. But they became complacent. They seem to plateau at roughly the same place. We’re plateauing at more or less the same place the Easter Islanders and the Roman Empire plateaued. Today the slight difference is that we are maybe a spacefaring civilization now.
Lucas Perry: Barely.
George Church: Yeah.
Lucas Perry: So, climate change has been something that you’ve been thinking about a bunch, it seems. You have the Woolly Mammoth Project, which we don’t necessarily need to get into here. But are you considering, or are you optimistic about, other methods of using genetic engineering to combat climate change?
George Church: Yeah, I think genetic engineering has potential here. Most of the other things we talk about, putting in LEDs, slightly more efficient car engines, solar power and so forth, are slowing down the inevitable rather than reversing it. To reverse it we need to take carbon out of the air, and a really great way to do that is with photosynthesis, partly because it builds itself. So if we just allow the Arctic to do photosynthesis the way it used to, we could get a net loss of carbon dioxide from the atmosphere, putting it into the ground rather than releasing a lot.
That’s part of the reason that I’m obsessed with Arctic solutions, and the Arctic Ocean is similar. It’s a place where you get upwelling of nutrients, and so you get a natural, very high rate of carbon fixation. It’s just that you also have a high rate of carbon consumption back into carbon dioxide. So you could change that cycle a little bit. I think both Arctic land and ocean are very good places to reverse carbon accumulation in the atmosphere, and I think that is best done with synthetic biology.
Now the barriers have historically been the release of recombinant DNA into the wild. We now have salmon that are essentially in the wild, engineered humans that are in the wild, and golden rice, which is now finally, after more than a decade of tussle, being used in the Philippines.
So I think we’re going to see more and more of that. To some extent even the plants of agriculture are in the wild. This is one of the things that was controversial, was that the pollen was going all over the place. But I think there’s essentially zero examples of recombinant DNA causing human damage. And so we just need to be cautious about our environmental decision making.
Lucas Perry: All right. Now taking kind of a sharp pivot here. In the philosophy of consciousness there is a distinction between the hard problem of consciousness and the easy problems. The hard problem is: why is it that computational systems have something that it is like to be that system? Why is there a first-person phenomenal and experiential perspective filled with what one might call qualia? Some people reject the hard problem as an actual thing and prefer to say that consciousness is an illusion or is not real. Other people are realists about consciousness: they believe phenomenal consciousness is substantially real and is on the same ontological or metaphysical footing as other fundamental forces of nature, or that perhaps consciousness discloses the intrinsic nature of the physical.
And then the easy problems are: how is it that we see, how is it that light enters the eyes and gets computed, how is it that certain things are computationally related to consciousness?
David Chalmers calls another problem here the meta-problem of consciousness: why is it that we make reports about consciousness? Why is it that we even talk about consciousness, particularly if it's an illusion? Maybe it's performing some kind of weird computational efficiency. And if it is real, there seems to be some tension between the standard model of physics, which feels pretty complete, and the question of how we could be making reports about something without real causal efficacy if there's nothing to add to the standard model.
Now you have the Human Connectome Project which would seem to help a lot with the easy problems of consciousness and maybe might have something to say about the meta problem. So I’m curious to know if you have particular views on consciousness or how the Human Connectome Project might relate to that interest?
George Church: Okay. So I think that consciousness is real and it has selective advantage. Part of reality to a biologist is evolution, and I think it's somewhat coupled to free will. I think that even though they are real and hard to think about, they may be easier than we often let on, when you think of them from an evolutionary standpoint or also from a simulation standpoint.
I can really only evaluate consciousness and the qualia by observations. I can only imagine that you have something similar to what I feel by what you do. And from that standpoint it wouldn’t be that hard to make a synthetic system that displayed consciousness that would be nearly impossible to refute. And as that system replicated and took on a life of its own, let’s say it’s some hybrid biological, non-biological system that displays consciousness, to really convincingly display consciousness it would also have to have some general intelligence or at least pass the Turing test.
But it would have an evolutionary advantage in that it could think or reason about itself. It recognizes the difference between itself and something else. And this has been demonstrated already in robots, admittedly in kind of proof-of-concept demos: robots that can tell themselves apart from other people in a reflection in a mirror, that operate upon their own bodies by removing dirt from their faces (something demonstrated in only a handful of animal species), and that recognize their own voice.
So you can see how these would have evolutionary advantages, and they could be simulated to whatever level of significance is necessary to convince an objective observer that they are conscious, as far as you know, to the same extent that I know that you are.
So I think the hard problem is a worthy one. I think it is real. It has evolutionary consequences. And free will is related, in that free will, I think, is a game-theoretic strategy: if you behave in a completely deterministic, predictable way, all the organisms around you have an advantage over you. They know that you are going to do a certain thing, so they can anticipate it; they can steal your food, they can bite you, they can do whatever they want. But if you're unpredictable, which is essentially free will (in this case it could be a random number generator or dice), you now have a selective advantage. And to some extent you could have more free will than the average human, since the average human is constrained by all sorts of social mores and rules and laws, which something with more free will might not be.
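Church's game-theoretic framing of free will as unpredictability can be illustrated with a toy simulation. This is a hypothetical sketch, not anything from the interview: the frequency-based predictor and the scoring rule are invented purely for illustration. A fixed policy is anticipated on every round, while a coin-flipping policy evades prediction about half the time.

```python
import random

def play(chooser, rounds=2000, seed=1):
    """Repeated prediction game: an 'exploiter' guesses the agent's next
    move from the frequency of its past moves; the agent scores a point
    whenever the guess is wrong."""
    rng = random.Random(seed)
    counts = [0, 0]          # how often the agent has played move 0 / move 1
    wins = 0
    for _ in range(rounds):
        prediction = 0 if counts[0] >= counts[1] else 1
        move = chooser(rng)
        counts[move] += 1
        if move != prediction:
            wins += 1
    return wins / rounds     # fraction of rounds the agent evades prediction

deterministic = lambda rng: 0               # fixed policy: fully predictable
randomized = lambda rng: rng.randint(0, 1)  # coin flip: unpredictable "free will"

print(play(deterministic))  # 0.0 (exploited on every round)
print(play(randomized))     # roughly 0.5 (cannot be exploited)
```

The point mirrors mixed strategies in game theory: against any opponent that models your past behavior, randomizing is the only policy whose next move carries no information.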
Lucas Perry: I guess I would just want to tease apart self-consciousness from consciousness in general. I think that one can have a first-person perspective without having a sense of self or being able to reflect on one's own existence as a subject in the world. I also feel a little bit confused about why consciousness would provide an evolutionary advantage, where consciousness is the ability to experience things. I guess I have some intuitions about it not having causal efficacy, because the standard model doesn't seem to be missing anything, essentially.
And then your point on free will makes sense. I think that people mean very different things here. I think within common discourse, there is a much more spooky version of free will which we can call libertarian free will, which says that you could’ve done otherwise and it’s more closely related to religion and spirituality, which I reject and I think most people listening to this would reject. I just wanted to point that out. Your take on free will makes sense and is the more scientific and rational version.
George Church: Well, actually, I would say they could've done otherwise. If you consider that religious, it is totally compatible with flipping a coin: that helps you do otherwise. Given the same scenario, you could do something differently. And that ability to do otherwise is of selective advantage, as indeed religions can be of great selective advantage in certain circumstances.
So back to consciousness versus self-consciousness, I think they’re much more intertwined. I’d be cautious about trying to disentangle them too much. I think your ability to reason about your own existence as being separate from other beings is very helpful for say self-grooming, for self-protection, so forth. And I think that maybe consciousness that is not about oneself may be a byproduct of that.
The greater your ability to reason about yourself versus others (your hand versus the piece of wood in your hand), the more successful you will be. Even if you're not super intelligent, just the fact that you're aware that you're different from the entity that you're competing with is an advantage. So I find it not terribly useful to make a giant rift between consciousness and self-consciousness.
Lucas Perry: Okay. So I’m becoming increasingly mindful of your time. We have five minutes left here so I’ve just got one last question for you and I need just a little bit to set it up. You’re vegan as far as I understand.
George Church: Yes.
Lucas Perry: And the effective altruism movement is particularly concerned with animal suffering. We’ve talked a lot about genetic engineering and its possibilities. David Pearce has written something called The Hedonistic Imperative which outlines a methodology and philosophy for using genetic engineering for voluntarily editing out suffering. So that can be done both for wild animals and it could be done for the human species and our descendants.
So I’m curious to know what your view is on animal suffering generally in the world, and whether you think about, or have thoughts on, genetic engineering for wild animal suffering in places outside of human civilization. And then finally, do you see a role for genetic engineering in phasing out human suffering, making it biologically impossible by re-engineering people to operate on gradients of intelligent bliss?
George Church: So for this kind of difficult problem, a technique that I employ is to imagine what this would be like on another planet and in the future, and to ask whether, given that imagined future, we would be willing to come back to where we are now. Rather than asking whether we're willing to go forward, you ask whether you'd be willing to come back, because there's a great deal of appropriate respect for inertia and the way things have been. Sometimes it's called natural, but I think natural includes the future and everything that's manmade as well; we're all part of nature. So it's really about the way things were. Going to the future and asking whether we'd be willing to come back is a different way of looking at it.
I think in going to another planet, we might want to take a limited set of organisms with us, and we might be tempted to make them so that they don't suffer, including humans. There is a certain amount of, let's say, pain which could be a little red light going off on your dashboard. But the point of pain is to get your attention, and you could reframe that. Some people are born with congenital insensitivity to pain, CIPA, genetically, and they tend to get into trouble because they will chew their lips and other body parts and get infected, or they will jump from high places, because it doesn't hurt, and break things they shouldn't break.
So you need some kind of alarm system that gets your attention that cannot be ignored. But I think it could be something that people would complain about less. It might even be more effective because you could prioritize it.
I think there’s a lot of potential there. By studying people who have congenital insensitivity to pain, you could even make that something you could turn on and off. SCN9A, for example, encodes a channel in the human nervous system, and blocking it doesn’t cause the dopey effects of opioids. You can be pain-free without being compromised intellectually. So I think that’s a very promising direction for thinking about this problem.
Lucas Perry: Just summing that up. You do feel that it is technically feasible to replace pain with some other kind of informationally sensitive thing that could have the same function for reducing and mitigating risk and signaling damage?
George Church: We can even do better. Right now we're unaware of certain physiological states that can be quite hazardous, and we're blind to, for example, all the pathogens in the air around us. These could be new signals. It wouldn't occur to me to make every one of those painful; it would be better just to see the pathogens and have little alarms that go off. It's much more intelligent.
Lucas Perry: That makes sense. So wrapping up here, if people want to follow your work, or follow you on say Twitter or other social media, where is the best place to check out your work and to follow what you do?
George Church: My Twitter is @geochurch, and my website is easy to find just by Google, but it’s arep.med.harvard.edu. Those are the two best places.
Lucas Perry: All right. Thank you so much for this. I think that a lot of the information you provided about the skillfulness and advantages of biology and synthetic computation will challenge many of the intuitions of our usual listeners and people in general. I found this very interesting and valuable, and yeah, thanks so much for coming on.
Chitin: Structure, Function, and Uses
The hard outer shell of arthropods and insects like beetles is primarily made up of chitin, a naturally occurring biopolymer. The following BiologyWise article elaborates more on the structure, function, and uses of chitin.
Did You Know?
After cellulose, chitin is the second-most abundant natural biopolymer in the world.
If one observes a lobster closely, one cannot fail to notice its tough outer covering. This protective outer shell, referred to as the exoskeleton, is a distinguishing feature of arthropods, which include crustaceans (crabs, lobsters, shrimp), arachnids (ticks, mites, scorpions, and spiders), and insects (beetles, grasshoppers, butterflies). Chitin, a naturally occurring biopolymer, is an important component of this exoskeleton. The internal shells of cephalopods and the radulae of mollusks are also primarily composed of chitin.
Chitin is essentially a linear homopolysaccharide (long-chain polymer) consisting of repeated units of N-acetylglucosamine, a monosaccharide derivative of glucose. These units are joined by covalent β-1,4 linkages. Chitin, with the chemical formula (C8H13O5N)n, is considered a complex carbohydrate whose structure resembles that of cellulose, with one hydroxyl group on each monomer replaced by an acetyl amine group.
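As a quick sanity check on the formula above, the molar mass of one (C8H13O5N) repeat unit can be computed from standard atomic weights. This is a minimal sketch; the rounded IUPAC atomic weights are the only assumed inputs.

```python
# Molar mass of one chitin repeat unit, (C8H13O5N)n
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}  # g/mol

repeat_unit = {"C": 8, "H": 13, "O": 5, "N": 1}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in repeat_unit.items())
print(round(molar_mass, 2))  # about 203.19 g/mol per residue
```

Each residue is N-acetylglucosamine (C8H15O6N) less the water molecule lost in forming the glycosidic bond, which is why the repeat unit is C8H13O5N rather than the free monosaccharide's formula.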
- This skeleton on the outside of the body appears hard and rigid due to the presence of chitin, which is known for its tough, elastic properties. Although chitin is the dominant constituent, other compounds such as proteins and calcium carbonate also play a crucial role in the formation of the exoskeleton. The main function of this chitin-containing exoskeleton is to keep the inner soft tissue safe from injury.
- Most importantly, it prevents these delicate tissues from becoming dry. In short, it acts as a watertight barrier against dehydration, which is crucial for their survival.
- The hard chitin-containing exoskeleton of arthropods also acts as a defense against predation. This outer covering can tolerate strong compressive stresses, which provides protection because predators exert a compressive force on the exoskeleton to injure their prey.
- The fungal cell wall that protects the micro-organism from the outside environment is also made up of chitin.
Chitin is secreted by the animal’s outer skin (epidermis) to form the protective covering. After the exoskeleton fully develops, the growth of the epidermis stops. Moreover, the exoskeleton is relatively rigid, so it limits growth as the animal increases in size. When there is a mismatch between the anatomy of the arthropod and the size of the exoskeleton, it can cause suffocation. To avoid this, the animal sheds the exoskeleton and begins to form a new one. This shedding process, or molting, is repeated periodically and is necessary for proper growth.
As a Fertilizer
One of the most important benefits of chitin is its use in making fertilizers. Chitin-containing fertilizers are organic and non-toxic, and have been shown to increase crop productivity. Chitin in fertilizers helps increase soil organisms and enzyme activities, which positively affects soil health. This in turn increases crop yield.
As a Food Additive
Chitin has a long history of use as a food additive. It is commonly obtained from crabs and other shellfish such as shrimp. Sometimes the cell walls of eumycetes (a type of fungi) are used as a source for extracting chitin. Microcrystalline chitin (MCC), as a food additive, can help enhance taste and flavor.
As an Emulsifying Agent
Chitin in food can also help create stable food emulsions. It acts as an excellent emulsifying agent, helping to prevent the emulsion from breaking when exposed to other fluids. For instance, whipped dessert toppings often contain chitin, which provides uniformity and stability to the product.
This naturally occurring fiber-forming polymer may also help lower cholesterol levels, as found in animal studies. Chitin molecules tend to mop up cholesterol and fat in the digestive system, so chitin in the diet may help reduce cholesterol absorption efficiency.
As a Surgical Thread
Chitin is also used for manufacturing strong and flexible surgical threads. Quite a few dissolvable stitches used to close wounds are made from chitin. These stitches start decomposing during the wound healing process. Reports also suggest that stitches composed of chitin may help to facilitate the healing of wounds.
Chitin in its supplemental form may help to reduce cholesterol. Moreover, chitin is said to have antioxidant, anti-diabetic, anti-inflammatory, antimicrobial, anticoagulant, antihypertensive, and anticancer properties. So taking it in the supplemental form may be beneficial for overall health.
It is a serious error for an alternative practitioner to identify orthomolecular substances, such as ascorbic acid, as dangerous. Orthomolecular, or bio-identical, molecules are by definition indistinguishable from their naturally created counterparts. These molecules are transported to the cells regardless of whether they are eaten or endogenously manufactured by other animal cells. There is no experimental evidence that such molecules behave differently in the bloodstream or within cells. In fact, there may be fewer impurities than appear in our plant foods.
The naturalist arguments have a broad appeal, especially to those concerned about the unnatural and toxic nature of prescription drugs. Naturalists and orthomolecularists share a common concern with pharmaceuticals. Drug companies often change natural or orthomolecular molecules so that they may be patented and become more profitable. Such molecular changes make drugs toximolecular (not orthomolecular) and potentially dangerous.
Much of what we know about the molecules required for life comes from the study of simple organisms, such as some microbes and yeasts. These simple organisms manufacture most of the vitamins they require within their single cell. Fixed plants must also retain the genetics to manufacture more of the chemicals needed to reproduce than animals require. Higher-order organisms, including plants, have evolved into colonies of cells, each colony with a specific function, but many if not all of the molecules required by the organism are made by some colony within it.
A vitamin is one of a group of trace substances, along with certain minerals and amino acids, that an animal's cells don't make but that are required for life. As life progressed, animals emerged that began eating plants. This food contained some of the same molecules animal cells were then making. Over time, species lost the genetic instructions necessary to create these essential molecules. The essential substances are what cells require, less what all the colonies in the organism manufacture and distribute to other cells.
Plant evolution is the model for naturalists, yet plants have evolved to protect themselves from being eaten. As animals ate and became dependent upon plants, a risky relationship developed. Plant DNA has literally become an extension of animal DNA and animals that don't get a minimal amount of any one of the vitamins they require will die of the deficiency.
Comatose patients can be kept alive indefinitely on man-made products containing all the synthetic vitamins, plus the trace minerals and necessary protein, fats, and carbohydrates. One such complete nutrition product is Ross Laboratories' Ensure. When the product was originally developed, biotin was not known to be a vitamin. Patients on the early Ensure became ill and died until biotin was added to the formula.
All complete nutrition products, including Ensure, provide vitamin C as ascorbic acid. Not one product offers a vitamin C complex.
The naturalist assertion that vitamin C isn't vitamin C, that instead it consists of a complex of nutrients, raises many questions. Is ascorbic acid the substance whose deficiency leads to scurvy, or are thousands of experimental studies wrong? Why is scientific information about the vitamin C-complex hidden? What experiments have been conducted, where is the science about the C-complex published, and how could Linus Pauling, Sherry Lewin, Steve Hickey, Hilary Roberts, Irwin Stone, Thomas Levy, and others have missed this important information? What exactly is the complex? (Is the C-complex from an orange the same as a green pepper's C-complex, the same as the C-complex in a tomato? If not, which C-complex is better?) Why do almost all animals except humans produce ascorbic acid, yet not one animal has been found that produces the C-complex? Why would the ascorbic acid synthesized by plant DNA be better than the ascorbic acid that all animals synthesize? On what theory are the animals wrong and the plants right? And how do hospitals keep patients alive with complete nutrition products that contain only ascorbic acid? The naturalists are unable to satisfactorily answer any of these questions.
Today, through the science of chemistry, human beings may now dispense with the need for plant DNA. We encode the process of vitamin synthesis into large chemical manufacturing processes making these pure molecules reliably, plentifully and at low cost. Such manufacturing makes it possible for many more people to experience their benefits. Orthomolecular vitamin molecules, however, are biologically identical to the molecules synthesized by living organisms.