
6.10: Case Study Conclusion: Pharmacogenomics and Chapter Summary - Biology


Case Study Conclusion: Pharmacogenomics

Arya asked their doctor about pharmacogenomics. The doctor explained that pharmacogenomics is the tailoring of drug treatments to people's genetic makeup, a form of 'personalized medicine'.

Figure \(\PageIndex{1}\) shows a beta cell of the pancreas. As blood glucose rises, glucose enters the cell via the GLUT2 channel. Once inside the cell, it drives the production of ATP, which closes an ATP-sensitive potassium channel. As potassium stops exiting the cell, calcium channels open and, finally, insulin is released from the cell. The actual pathway is even more complicated; many enzymes and proteins are omitted from this brief description. Sulfonylurea-based drugs force the potassium channel to close by binding to it, causing the release of insulin while bypassing many of the earlier steps.

Because many enzymes and other proteins are involved in this complicated process, people respond differently to medicines. Most respond well and their health improves. Some do not gain any benefit from the treatment, and a minority suffer from side effects. After you take a drug, it is processed (metabolized) by your body. How the drug is processed and how you respond to it are determined, in part, by your genes. Understanding how genetic differences affect the way a drug is processed can help doctors determine more accurately which drug and which dose are best for individual patients.

In this chapter, you learned what the genome is and how to recognize genes in the genome. In pharmacogenomics, scientists look at the genome of an individual to identify the genetic factors that influence his or her response to a drug. By finding these genes, medical researchers hope to develop genetic tests that will predict how patients will respond to a drug. This is personalized medicine.

The reason people vary in their responses to drug treatments lies in the genetic differences, or variation, between them. Following the Human Genome Project, research has focused on comparing human genomes to understand genetic variation and to work out which genetic variants are important in health and in the way we respond to drugs. We also learned in this chapter that two types of variation are common in the human genome:

  1. Single nucleotide polymorphisms (SNPs): changes in single nucleotide bases (A, C, G, and T). This was the case in Arya's physical response to the sulfonylurea.
  2. Structural variation: changes affecting chunks of DNA that can consequently alter the structure of the entire chromosome. Structural variation can happen in a number of ways, for example:
    • Copy number variation (CNV): an increase or decrease in the amount of DNA. This can be due to deletion, where an entire block of DNA is missing; insertion, where a block of DNA is added; or duplication, where there are additional copies of a section of DNA.
    • Inversion: when a chromosome breaks in two places and the resulting piece of DNA is reversed and reinserted into the chromosome (the opposite way round).
    • Translocation: when genetic material is exchanged between two different chromosomes.

SNPs are like changing a single letter in the metaphorical 'recipe book of life', while structural variation is the equivalent of whole paragraphs or pages being lost or repeated. Scientists have been aware of SNPs for a long time, but the extent of structural variation was only revealed when it became possible to sequence and compare many genomes. Structural variation appears to be quite common, affecting around 12 percent of the genome, and it has been found to cause a variety of genetic conditions.

Finding disease variants

Humans share around 99.5 percent of their genomes; the 0.5 percent that differs between each of us affects our susceptibility to disease and our response to drugs. Although this doesn't sound like a lot, it still means that there are millions of differences between the DNA of two individuals. Because SNPs are so common in the genome, for example, it is difficult to work out which single-letter changes cause disease and which are 'passengers' that have just come along for the ride and have no effect on health.

So how is it possible to know which genetic variants cause disease and which are passengers?

The way scientists look for disease variants is to compare the genetic makeup of a large number of people who have a specific disease with that of people who do not. This allows scientists to look for genetic variants that are more common in people with the disease than in people without it. For example, if a particular genetic variant is present in 80 percent of patients with the disease but in only 20 percent of the healthy population, this suggests that the variant increases the risk of that disease. A disease caused by variants in a single gene is, however, the simplest example. In many complex diseases, variants in many different genes may be involved. In addition, the transcriptional and translational regulation of enzyme production may vary because of genetic variation in a gene's enhancers and repressors. For this type of comparison to be effective, very large groups of people need to be studied, usually in the tens of thousands, to find the variants that have subtle effects on disease risk. Researchers also try to pick individuals with similar phenotypes, in both the diseased and healthy groups, so that the disease genes are easier to identify and study.
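To make the comparison above concrete, the short R sketch below computes the association for the 80 percent versus 20 percent example (the counts are hypothetical, assuming 1,000 patients and 1,000 healthy controls):

    # Hypothetical case-control counts for one genetic variant
    counts <- matrix(c(800, 200,    # disease group: variant present / absent
                       200, 800),   # healthy group: variant present / absent
                     nrow = 2, byrow = TRUE,
                     dimnames = list(group   = c("disease", "healthy"),
                                     variant = c("present", "absent")))

    # Odds ratio: how strongly the variant is associated with the disease
    or <- (counts["disease", "present"] / counts["disease", "absent"]) /
          (counts["healthy", "present"] / counts["healthy", "absent"])
    or                   # 16 in this toy example

    chisq.test(counts)   # test of association between variant and disease status

In real genome-wide studies the same comparison is repeated across hundreds of thousands of variants, which is why very large samples and strict significance thresholds are needed.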

Challenges of pharmacogenomics

Although pharmacogenomics is likely to be an important part of future medical care, there are many obstacles to overcome before it becomes routine. It is relatively rare for a particular drug response to be affected by a single genetic variant, and a particular genetic variant may increase the likelihood of an adverse reaction but will not guarantee it.

As a result, some people with the variant may not experience an adverse reaction to a drug. Similarly, if an individual doesn’t have the gene variant, it doesn’t guarantee they won’t experience an adverse reaction. Often, a large number of interacting genetic and environmental factors may influence the response to a drug.

Even when associations between a genetic variant and a drug response have been clearly demonstrated, suitable tests still have to be developed and proven to be effective in clinical trials. A test that has succeeded in a clinical trial still has to be shown to be useful and cost-effective in a healthcare setting. Regulatory agencies will have to consider how they assess and license pharmacogenetic products. Health services will have to adjust to new ways of deciding the best drug to give to an individual.

The behavior of individual doctors will also need to change: many side effects are due to patients not taking their drugs as prescribed or to doctors prescribing the wrong dose. Some examples of pharmacogenomics that work effectively, such as abacavir for HIV, show that these challenges can be overcome. However, in most cases, implementing the findings from pharmacogenomics is likely to be a complicated process.

Chapter Summary

  • Determining that DNA is the genetic material was an important milestone in biology.
    • In the 1920s, Griffith showed that something in virulent bacteria could be transferred to nonvirulent bacteria and make them virulent as well.
    • In the 1940s, Avery and colleagues showed that the "something" Griffith found was DNA and not protein. This result was confirmed by Hershey and Chase, who demonstrated that viruses insert DNA into bacterial cells.
  • In the 1950s, Chargaff showed that in DNA, the concentration of adenine is always the same as the concentration of thymine, and the concentration of guanine is always the same as the concentration of cytosine. These observations came to be known as Chargaff's rules.
  • In the 1950s, James Watson and Francis Crick, building on the prior X-ray research of Rosalind Franklin and others, discovered the double-helix structure of the DNA molecule.
  • Knowledge of DNA's structure helped scientists understand how DNA replicates, which must occur before cell division. DNA replication is semi-conservative because each daughter molecule contains one strand from the parent molecule and one new strand that is complementary to it.
  • Genes that are located on the same chromosome are called linked genes. Linkage explains why certain characteristics are frequently inherited together.
  • The central dogma of molecular biology can be summed up as DNA → RNA → Protein. This means that the genetic instructions encoded in DNA are transcribed to RNA, and then from RNA, they are translated into a protein.
  • RNA is a nucleic acid. Unlike DNA, RNA consists of just one polynucleotide chain instead of two, contains the base uracil instead of thymine, and contains the sugar ribose instead of deoxyribose.
  • The main function of RNA is to help make proteins. There are three main types of RNA: messenger RNA (mRNA), ribosomal RNA (rRNA), and transfer RNA (tRNA).
  • According to the RNA world hypothesis, RNA was the first type of biochemical molecule to evolve, predating both DNA and proteins.
  • The genetic code was cracked in the 1960s by Marshall Nirenberg. It consists of the sequence of nitrogen bases in a polynucleotide chain of DNA or RNA. The four bases make up the "letters" of the code. The letters are combined in groups of three to form code "words," or codons, each of which codes for one amino acid or a start or stop signal.
    • AUG is the start codon, and it establishes the reading frame of the code. After the start codon, the next three bases are read as the second codon, and so on until a stop codon is reached (a toy translation sketch follows this chapter summary).
    • The genetic code is universal, unambiguous, and redundant.
  • Protein synthesis is the process in which cells make proteins. It occurs in two stages: transcription and translation.
    • Transcription is the transfer of genetic instructions in DNA to mRNA in the nucleus. It includes the steps of initiation, elongation, and termination. After the mRNA is processed, it carries the instructions to a ribosome in the cytoplasm.
    • Translation occurs at the ribosome, which consists of rRNA and proteins. In translation, the instructions in mRNA are read, and tRNA brings the correct sequence of amino acids to the ribosome. Then rRNA helps bonds form between the amino acids, producing a polypeptide chain.
    • After a polypeptide chain is synthesized, it may undergo additional processing to form the finished protein.
  • Mutations are random changes in the sequence of bases in DNA. They are the ultimate source of all new genetic variation in any species.
    • Mutations may happen spontaneously during DNA replication or transcription. Other mutations are caused by environmental factors called mutagens.
    • Germline mutations occur in gametes and may be passed on to offspring. Somatic mutations occur in cells other than gametes and cannot be passed on to offspring.
    • Chromosomal alterations are mutations that change chromosome structure or number and usually affect the organism in multiple ways. Down syndrome (trisomy 21) is an example of a chromosomal alteration.
    • Point mutations are changes in a single nucleotide. The effects of point mutations depend on how they change the genetic code and may range from no effects to very serious effects.
    • Frameshift mutations change the reading frame of the genetic code and are likely to have a drastic effect on the encoded protein.
    • Many mutations are neutral and have no effects on the organism in which they occur. Some mutations are beneficial and improve fitness, while others are harmful and decrease fitness.
  • Using a gene to make a protein is called gene expression. Gene expression is regulated to ensure that the correct proteins are made when and where they are needed. Regulation may occur at any stage of protein synthesis or processing.
  • The regulation of transcription is controlled by regulatory proteins that bind to regions of DNA called regulatory elements, which are usually located near promoters. Most regulatory proteins are either activators that promote transcription or repressors that impede transcription.
  • The regulation of gene expression is extremely important during the early development of an organism. Homeobox genes, which code for chains of amino acids called homeodomains, are important genes that regulate development.
  • Some types of cancer occur because of mutations in genes that control the cell cycle. Cancer-causing mutations most often occur in two types of regulatory genes, called tumor-suppressor genes and proto-oncogenes.
  • Biotechnology is the use of technology to change the genetic makeup of living things for human purposes.
    • Biotechnology methods include gene cloning and the polymerase chain reaction. Gene cloning is the process of isolating and making copies of a DNA segment such as a gene. The polymerase chain reaction makes many copies of a gene or other DNA segment.
    • Biotechnology can be used to transform bacteria so they are able to make human proteins, such as insulin. It can also be used to create transgenic crops, such as crops that yield more food or resist insect pests.
    • Biotechnology has raised a number of ethical, legal, and social issues including health, environmental, and privacy concerns.
  • The human genome refers to all of the DNA of the human species. It consists of more than 3.3 billion base pairs, including about 20,500 genes, on 23 pairs of chromosomes.
  • The Human Genome Project (HGP) was a multi-billion dollar international research project that began in 1990. By 2003, it had sequenced and mapped the location of all of the DNA base pairs in the human genome. It published the results as a human reference genome that is available to anyone on the Internet.
  • The sequencing of the human genome is helping researchers better understand cancer and genetic diseases. It is also helping them tailor medications to individual patients, which is the focus of the new field of pharmacogenomics. In addition, it is helping researchers better understand human evolution.
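To see how a reading frame is translated in practice, here is a toy sketch in R (only a handful of codons from the standard genetic code are included, and real translation involves far more cellular machinery):

    # A tiny subset of the standard genetic code (mRNA codons -> amino acids)
    codon_table <- c(AUG = "Met", UUU = "Phe", GGC = "Gly",
                     AAA = "Lys", UAA = "STOP", UAG = "STOP", UGA = "STOP")

    translate <- function(mrna) {
      start <- regexpr("AUG", mrna)[1]        # the start codon sets the reading frame
      if (start == -1) return(character(0))
      peptide <- character(0)
      for (i in seq(start, nchar(mrna) - 2, by = 3)) {
        aa <- unname(codon_table[substr(mrna, i, i + 2)])
        if (is.na(aa) || aa == "STOP") break  # a stop signal (or a codon missing
        peptide <- c(peptide, aa)             # from this toy table) ends translation
      }
      peptide
    }

    translate("GGAUGUUUGGCAAAUAGCC")
    # "Met" "Phe" "Gly" "Lys"  -- reading stops at the UAG stop codon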

Review:

  1. Put the following units in order from the smallest to the largest:
    1. chromosome
    2. gene
    3. nitrogen base
    4. nucleotide
    5. codon
  2. Put the following processes in the correct order of how a protein is produced, from earliest to latest:
    1. tRNA binding to mRNA
    2. transcription
    3. traveling of mRNA out of the nucleus
    4. folding of the polypeptide
  3. What are the differences between a sequence of DNA and the sequence of mature mRNA that it produces?
  4. Scientists sometimes sequence DNA that they “reverse transcribe” from the mRNA in an organism’s cells, which is called complementary DNA (cDNA). Why do you think this technique might be particularly useful for understanding an organism’s proteins versus sequencing the whole genome (i.e. nuclear DNA) of the organism?
  5. Proteins are made in the cytoplasm on small organelles. What are these organelles called?
  6. What might happen if codons encoded for more than one amino acid?
  7. Explain why a human gene can be inserted into bacteria and can still produce the correct human protein, despite being in a very different organism.
  8. True or False. All of your genes are expressed by all the cells of your body.
  9. What does the central dogma of molecular biology describe?

Exploring 4G patent and litigation informatics in the mobile telecommunications industry

Patent informatics is often analysed for IP protection, particularly in high-tech industries. This research develops a computer-supported generic methodology for discovering evolutions of, and linkages between, litigations and disputed patents. IP litigations in mobile telecommunications are used as the case study. An ontology framework representing the 4G domain knowledge is defined first. Then, a modified formal concept analysis (MFCA) approach is developed to discover the evolutionary linkages of legal cases and their disputed patents. In addition to citation-based patent analysis, this research provides a new approach to identifying legal and technical evolutions for future R&D planning and IP strategies.


Scope of the Report

The "Biopharmaceutical Contract Manufacturing Market (4th Edition) by Type of Product (API, FDF), Scale of Operations (Preclinical, Clinical and Commercial), Expression System (Mammalian, Microbial and Others), Company Size (Small, Mid-sized, Large and Very Large), Biologics (Antibody, Vaccine, Cell Therapy and Other Biologics) and Key Geographical Regions (North America, Europe, Asia-Pacific, MENA and LATAM) - Industry Trends and Global Forecast to 2030" report features an extensive study on the contract service providers within the biopharmaceutical industry. The study features in-depth analyses, highlighting the capabilities of a diverse set of biopharmaceutical CMOs and contract development and manufacturing organizations (CDMOs). Amongst other elements, the report includes:

  • A detailed review of the overall landscape of the biopharmaceutical contract manufacturing market, featuring an elaborate list of active CMOs, along with information on a number of relevant parameters, such as year of establishment, company size, location of headquarters, types of biologics manufactured (peptides / proteins, antibodies, vaccines, cell therapies, gene therapies, antibody drug conjugates, vectors, biosimilars, nucleic acids and others), scale of operation (preclinical, clinical and commercial), types of expression systems used (mammalian, microbial and others), type of bioreactor used (single-use bioreactors and stainless steel bioreactors), mode of operation of bioreactors (batch, fed-batch and perfusion), bioprocessing capacity and type of packaging.
  • A detailed landscape of the biopharmaceutical manufacturing facilities established across the key geographical regions (North America, Europe, Asia-Pacific and Rest of the World), and including an analysis based on the location of these facilities, highlighting key manufacturing hubs for biologics.
  • Elaborate profiles of key players that claim to have a diverse range of capabilities related to the development, manufacturing and packaging of biologics. Each profile provides an overview of the company, its financial performance (if available), information related to its service portfolio, manufacturing facilities, and details on partnerships, recent developments (expansions), as well as an informed future outlook.
  • A detailed discussion on the key enablers in this domain, including certain niche product classes, such as antibody drug conjugates (ADCs), bispecific antibodies, cell therapies, gene therapies and viral vectors, which are likely to have a significant impact on the growth of the contract services market.
  • A case study on the growing global biosimilars market, highlighting the associated opportunities for biopharmaceutical CMOs and CDMOs.
  • A case study comparing the key characteristics of small and large molecule drugs, along with details on the various steps involved in their respective manufacturing processes.
  • A detailed discussion on the benefits and challenges associated with in-house manufacturing, featuring a brief overview of the various parameters that a drug / therapy developer may need to take into consideration while deciding whether to manufacture its products in-house or outsource the production operations.
  • A qualitative analysis, highlighting the various factors that need to be taken into consideration by biopharmaceutical therapeutics / drug developers while deciding whether to manufacture their respective products in-house or engage the services of a CMO.
  • A review of the various biopharmaceutical based manufacturing initiatives undertaken by big pharma players (shortlisted on the basis of the revenues generated by the top 10 pharmaceutical companies in 2019), highlighting trends across various parameters, such as number of initiatives, year of initiative, purpose of initiative, type of initiative, scale of operation, types of biologics manufactured and type of product.
  • An analysis of the recent collaborations (signed since 2015) focused on the contract manufacturing of biologics, based on various parameters, such as the year the agreement was signed, type of agreement, focus area, types of biologics manufactured, therapeutic area and geographical regions.
  • A detailed analysis of the various mergers and acquisitions that have taken place in this domain, highlighting the trend in the number of companies acquired during 2015-2020, along with the geographical distribution of this activity. The analysis also depicts the relationship between important deal multiples (based on revenues), number of employees and experience of the acquired company.
  • A detailed analysis of the recent expansions undertaken (since 2015) by various service providers for augmenting their respective biopharma contract manufacturing service portfolios, based on a number of parameters, including year of expansion, purpose of expansion (capacity expansion and new facility), types of biologics manufactured, geographical location of facility, and most active players (in terms of number of instances).
  • An analysis of the recent developments within the biopharmaceutical contract manufacturing industry, highlighting information on the funding and technology advancements related to biomanufacturing.
  • A detailed capacity analysis, taking into consideration the individual development and manufacturing capacities of various stakeholders (small, mid-sized, large and very large CMOs / CDMOs) engaged in the market, using data from both secondary and primary research. The study examines the distribution of global biopharmaceutical manufacturing capacity by scale of operation (preclinical / clinical, commercial), company size (small, mid-sized, large and very large), and geography (North America (the US and Canada), Europe (Italy, Germany, France, Spain, the UK and rest of Europe), Asia-Pacific (China, India, Japan, South Korea and Australia), Latin America, Middle East and North Africa).
  • An informed estimate of the annual demand for biologics, taking into account the top 25 biologics, based on various relevant parameters, such as target patient population, dosing frequency and dose strength of the abovementioned products.
  • A discussion on affiliated trends, key drivers and challenges, under an elaborate SWOT framework, which are likely to impact the industry's evolution, including a Harvey ball analysis, highlighting the relative effect of each SWOT parameter on the overall pharmaceutical industry.
  • A survey analysis featuring inputs solicited from various experts who are directly / indirectly involved in providing contract manufacturing services to biopharmaceutical developers.

One of the key objectives of the report was to understand the primary growth drivers and estimate the future size of the market. Based on parameters, such as growth of the overall biopharmaceutical market, cost of goods sold, and direct manufacturing costs, we have provided an informed estimate of the likely evolution of the market in the short to mid-term and mid to long term, for the period 2020-2030. In order to provide a detailed future outlook, our projections have been segmented on the basis of [A] commonly outsourced business operations (active pharmaceutical ingredients (APIs), finished dosage formulations (FDFs)), [B] types of expression systems used (mammalian, microbial and others), [C] scale of operation (preclinical, clinical and commercial), [D] company size (small, mid-sized and large / very large), [E] types of biologics manufactured (antibody, vaccine, cell therapy and other biologics) and [F] key geographical regions (North America (the US, Canada), Europe (the UK, France, Germany, Italy and Spain), Asia (China, India and Australia), Latin America, Middle East and North Africa). To account for the uncertainties associated with the manufacturing of biopharmaceuticals and to add robustness to our model, we have provided three forecast scenarios, portraying the conservative, base and optimistic tracks of the market's evolution.

The opinions and insights presented in the report were influenced by discussions held with senior stakeholders in the industry. The report features detailed transcripts of interviews held with the following industry stakeholders (arranged in alphabetical order):

  • Andrea Conforto (Sales & Marketing, Bioservices Director, Olon)
  • Astrid Brammer (Senior Manager Business Development, Richter-Helm)
  • Birgit Schwab (Senior Market Intelligence Manager, Rentschler Biotechnologie)
  • Christian Bailly (Director of CDMO, Pierre Fabre)
  • Claire Otjes (Marketing Manager, Batavia Biosciences)
  • David C Cunningham (Director Corporate Development, Goodwin Biotechnology)
  • Dietmar Katinger (Chief Executive Officer, Polymun Scientific)
  • Denis Angioletti (Chief Commercial Officer, Cerbios-Pharma)
  • Jeffrey Hung (Chief Commercial Officer, Vigene Biosciences)
  • Kevin Daley (Market Director Pharmaceuticals, Novasep)
  • Mark Wright (ex-Site Head, Grangemouth, Piramal Healthcare)
  • Max Rossetto (General Manager – Business Development, Luina Bio)
  • Nicolas Grandchamp (R&D Leader, GEG Tech)
  • Raquel Fortunato (Chief Executive Officer, GenIbet Biopharmaceuticals)
  • Sebastian Schuck (Director Global Business Development, Wacker Biotech)
  • Stephen Taylor (Senior Vice President Commercial, FUJIFILM Diosynth Biotechnologies)
  • Marco Schmeer (Project Manager, PlasmidFactory) and Tatjana Buchholz (ex-Marketing Manager, PlasmidFactory)
  • Tim Oldham (ex - Chief Executive Officer, Cell Therapies)

All actual figures have been sourced and analyzed from publicly available information forums and primary research discussions. Financial figures mentioned in this report are in USD, unless otherwise specified.



Methods

Computation of association measures

In many situations a researcher is interested in providing adjusted estimates of covariate associations with the outcome. In observational studies (involving no randomization), the exposure effect has to be adjusted for known confounders. In randomized controlled trials (RCTs), too, adjusted estimates are recommended, for example to account for potential covariate imbalances or because prognostically relevant covariates were used in a stratified randomization [20–22]. In these cases, Cox regression is widely used to adjust the estimated association between the covariate of interest and the outcome for the other covariates.

For the purpose of illustration, let us consider a controlled trial, where the event of interest is death, investigating the efficacy of a new treatment (T=1) in comparison to a standard treatment (T=0). Two covariates such as age, A, and gender, G, are considered for adjustment. The multivariable proportional hazard Cox model can be specified as follows:

λ(t|T,A,G) = λ₀(t) exp(αT + βA + γG),

where t is the time to event, λ(t|T,A,G) is the hazard function conditional on the covariate values, α, β and γ are the regression coefficients, and λ₀(t) is the baseline hazard for a subject in the control group (T=0), 0 years old and female (G=0). The adjusted hazard ratio for the treatment, HR, constant through follow-up time, is simply obtained as exp(α). Using such a model, RD(t) or NNT(t) for the treatment can be obtained by specifying a covariate pattern and the baseline risk. For example, the estimated NNT(t), conditional on being a 40-year-old male, is:

NNT(t) = 1 / [Ŝ(t|T=1,A=40,G=1) − Ŝ(t|T=0,A=40,G=1)],

where Ŝ(t|T=1,A=40,G=1) is the estimated survival probability for a 40-year-old male in the experimental treatment group, given by [Ŝ₀(t)]^exp(α̂ + 40β̂ + γ̂), while Ŝ(t|T=0,A=40,G=1) is the estimated survival probability for a 40-year-old male in the control group, given by [Ŝ₀(t)]^exp(40β̂ + γ̂). Here Ŝ₀(t) is the baseline survivor function from a Cox proportional hazards model, estimated according to one of the available methods [23, 24].
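As a rough sketch of this computation in R, using the survival package (the data frame d and the variable names time, status, trt, age and male are hypothetical):

    library(survival)

    # Cox model with treatment and the two adjustment covariates
    fit <- coxph(Surv(time, status) ~ trt + age + male, data = d)

    # Predicted survival for a 40-year-old male under each treatment arm
    nd <- data.frame(trt = c(1, 0), age = 40, male = 1)
    s  <- summary(survfit(fit, newdata = nd), times = 12)$surv  # at t = 12

    nnt_12 <- 1 / (s[1] - s[2])  # NNT(t) for this covariate pattern at t = 12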

In order to obtain adjusted measures of association, different from the hazard ratio, the Cox proportional hazard model is used to estimate adjusted survival curves [25] as outlined in the following paragraphs.

Average covariate method

The simplest approach for obtaining adjusted survival curves is the average covariate method: the mean values, among the study patients, of the covariates used for adjustment are plugged into the multivariable Cox proportional hazard model. Considering the above example, if the mean age of the subjects in the study is 45 and 30% of them are male, the adjusted estimate of NNT(t) for the treatment will be 1 / [Ŝ(t|T=1,A=45,G=0.3) − Ŝ(t|T=0,A=45,G=0.3)]. The average covariate method was once popular and widely adopted, due to its simplicity, but it has been severely criticized [25, 26].

In fact, it involves the averaging of categorical covariates, such as gender, which is difficult to interpret. Moreover, the method provides an estimate of the measure of effect for a hypothetical average individual, not a population-averaged estimate.

Corrected group prognosis method and developments

An alternative idea is the corrected group prognosis method (CGPM) [14, 27, 28]. In the following, the CGPM for estimating RD(t), as described by Austin [14], is outlined:

  • A multivariable Cox (or fully parametric) regression model is fitted for the treatment and the covariates.
  • For each subject, the predicted survival probabilities at the times of interest are estimated using the multivariable model, assuming each subject is in the experimental treatment group; the predictions are then averaged.
  • The same predictions are obtained and averaged assuming each subject is in the control group.
  • The difference between the averaged predicted probabilities in the experimental and control groups is an estimate of the adjusted RD(t) for the experimental treatment at the specified times.

Pointwise confidence intervals of the obtained RD(t) estimates may be computed via bootstrap resampling [14]. For each bootstrap sample, i.e. a sample of the same size as the original one drawn at random with replacement from it, RD(t) is computed according to the procedure outlined above. A non-parametric bootstrap 95% pointwise confidence interval is obtained from the 2.5th and 97.5th percentiles of the resulting RD(t) bootstrap distribution.
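A minimal sketch of these steps in R (again with a hypothetical data frame d and variable names; the bootstrap loop for the pointwise confidence intervals is omitted for brevity):

    library(survival)

    fit <- coxph(Surv(time, status) ~ trt + age + male, data = d)

    # Adjusted RD(t): average the predicted survival curves with every subject
    # set to treated, then to control, and take the difference of the averages
    adjusted_rd <- function(fit, data, t) {
      d1 <- transform(data, trt = 1)  # everyone assigned to the experimental arm
      d0 <- transform(data, trt = 0)  # everyone assigned to the control arm
      s1 <- summary(survfit(fit, newdata = d1), times = t)$surv
      s0 <- summary(survfit(fit, newdata = d0), times = t)$surv
      mean(s1) - mean(s0)             # adjusted RD(t); NNT(t) = 1 / RD(t)
    }

    adjusted_rd(fit, d, t = 12)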

A simulated example of the estimation of RD(t) in the presence of confounding is shown in Figure 1.

Figure 1. Simulated data: RD(t) and Kaplan-Meier curves. Estimation of RD(t) associated with a hypothetical experimental treatment, using artificial data simulated from a proportional hazard model (details of the simulation are reported in the manuscript). The treatment effect is confounded by two covariates. The unadjusted Kaplan-Meier survival probabilities (- - - -) are reported together with the adjusted estimates (—–) obtained with the corrected group prognosis method (left panel). The corresponding RD(t) estimates are reported in the right panel, together with the model-based constant RD estimate (with confidence interval).

The CGPM can in principle be applied to any regression model, and an adequate model must be chosen. Considering, for example, the Cox regression model in the presence of time-dependent covariate effects, an interaction of the covariates with a pre-specified function of time should be specified in order to estimate an HR(t) that varies during follow-up time. It is important to remark that it is not always easy to specify an adequate model in the presence of time-dependent covariate effects; in fact, it is not always obvious how to model the time dependence itself. In general, simple functions of time (linear or logarithmic) or more flexible alternatives are used [29].

To allow this estimation, the data set must be augmented, as is done for true time-dependent covariates [30]. It should be remarked that, although the use of predicted values from regression models is simple from a practical point of view, the standard way to obtain summary measures of effect and their confidence intervals is to use the regression model coefficient estimates directly. The CGPM applied to the Cox model will be used for comparison with the method proposed here and described in the following section.

Laubender and Bender [13] proposed different averaging techniques to estimate relevant impact numbers in observational studies using the Cox model. For the purpose of illustration, let us consider the same example as before, simply considering an exposure (E) instead of a treatment. To obtain an estimate of NNE(t), it is possible to average the predictions considering the subjects as if they were unexposed and as if they were exposed, and to take the difference. As the distributions of the covariates used for adjusting are in general different in the exposed and unexposed groups, two different measures should be considered. Specifically, the estimate of NNE(t) is obtained considering the unexposed subjects only, while EIN(t) is obtained considering the exposed subjects only. A comparison of the model-based estimated RD(t) with that obtained through different averaging techniques, namely NNE(t) and EIN(t) [13, 15], is provided in the second example. However, the focus of the paper is not the comparison of different averaging techniques, which are provided only for illustrative purposes. In particular, only the estimates obtained through averaging over the whole population are compared with those based on transformation model methods.

Model-based estimates of association measures

Adjusted model-based estimates of measures of association can be obtained by resorting to a general class of regression models used in survival analysis called transformation models [31].

Pseudo values

Considering the previous example, the transformation model can be written as g(S(t|T,A,G)) = g(S₀(t)) + αT + βA + γG.

A possibility for estimating transformation models using standard available software is through pseudo-values [32]. The pseudo-value is defined for each subject i at any time t and is given by

θ̂ᵢ(t) = n Ŝ(t) − (n − 1) Ŝ₋ᵢ(t),

where n is the sample size, Ŝ(t) is the survival probability based on the Kaplan-Meier estimator using the whole sample, and Ŝ₋ᵢ(t) is the survival probability obtained by deleting subject i from the sample. When no censoring is present in the data, the pseudo-value for subject i at time t is simply 1 if the subject is alive at t, while it is 0 if the event has happened by t. Suppose we have an exposed male, 40 years old, who dies after 30 months of follow-up. The pseudo-values computed at 12, 24 and 36 months are equal to 1, 1 and 0 respectively. The times at which the pseudo-values are computed are called pseudo-times.

When censoring is present in the data, pseudo-values are still defined for each subject (even those censored) and for each time, but the values may also be less than 0 or greater than 1 (See [33] page 5310–11 for further details on the properties of pseudo-values).

In general, to allow inference on the entire survival curve, M (greater than 5) pseudo times are used, considering, for example, the quantiles of the unique failure time distribution. As M pseudo values are computed for each subject, an augmented data set is created with M observations for each subject.
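In R, the pseudo-values and the augmented data set can be computed along the following lines (a sketch based on the usage documented in [32]; the data frame d with variables time and status is hypothetical):

    library(pseudo)

    # M pseudo-times at quantiles of the failure time distribution
    tt <- quantile(d$time[d$status == 1], probs = c(0.1, 0.3, 0.5, 0.7, 0.9))

    # Pseudo-values: one row per subject, one column per pseudo-time
    ps <- pseudosurv(time = d$time, event = d$status, tmax = tt)

    # Augmented data set: M rows per subject, identified by id for the GEE step
    dAug <- NULL
    for (j in seq_along(ps$time)) {
      dAug <- rbind(dAug,
                    cbind(d, pseudo = ps$pseudo[, j],
                          tpseudo = ps$time[j], id = seq_len(nrow(d))))
    }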

Transformation models and association measures

The pseudo-values are then used as responses in a regression model for longitudinal data, where time is a covariate. As no explicit likelihood is available for pseudo-values, generalized estimating equations (GEE) [34] are used, accounting for the correlation of the pseudo-values within each subject. The cluster-robust variance-covariance matrix is used for hypothesis testing with Wald tests. In general, an independence working variance-covariance matrix can conveniently be used in the estimation process [32].

In order to model g(S₀(t)), the transformed baseline survival function, the standard procedure is to insert into the regression model indicator functions for each pseudo-time. If all event times were used to compute the pseudo-values, the insertion of indicator functions would result in a non-parametric representation of the (transformed) baseline survival, as in the Cox model. In general, only a small number of pseudo-times are used, giving a parametric baseline representation. As an alternative, spline functions can be inserted into the regression model, as done in [35] in a non-pseudo-values framework.

Considering for simplicity only two spline bases, the regression model of the example can be written as follows:

g(S(t|T,A,G)) = φ₀ + φ₁B₁(t) + φ₂B₂(t) + αT + βA + γG,   (3)

where B₁(t) and B₂(t) represent the first and second spline bases for time t. For example, if a restricted cubic spline basis is used with three knots k₁, k₂, k₃, then B₁(t) = t and

B₂(t) = (t − k₁)₊³ − (t − k₂)₊³ (k₃ − k₁)/(k₃ − k₂) + (t − k₃)₊³ (k₂ − k₁)/(k₃ − k₂),

where, for example, (t − k₁)₊³ is equal to (t − k₁)³ if t > k₁ and 0 otherwise. Knots are chosen at quantiles of the failure time distribution; in the case of 3 knots the quantiles commonly suggested are 0.1, 0.5 and 0.9 [36]. To choose the complexity of the spline, the QIC [37], an information criterion proposed for generalized estimating equations, can be used. A less formal strategy is the graphical comparison between the Kaplan-Meier marginal survival probability and the marginal probability obtained from the transformation model without covariates. Such a procedure will be used in the examples.

The first part of the model, φ₀ + φ₁B₁(t) + φ₂B₂(t), provides a parametric representation of the (transformed) baseline survival function, g(S₀(t)), during follow-up time.

The coefficients α, β and γ represent the covariate effects, expressed as differences in the survival probability transformed by g, associated with a unit increase in the covariates. Let us consider this in detail. When g is the logit link function, a proportional odds model is estimated. Accordingly, α, β and γ represent the logarithm of the ratio of the odds of surviving associated with a change of one unit in the covariates. Such an effect is constant through follow-up time, and the exponentials of the parameter estimates therefore represent ratios of the odds of surviving. Similarly, the logarithmic link produces a proportional risks model, and exp(α), exp(β) and exp(γ) represent ratios of survival probabilities (relative risks, RR). The identity link produces a constant survival difference model: α, β and γ represent the adjusted differences in survival probabilities (risk differences, RD). A constant difference model may seem implausible, since at the beginning of follow-up the survival curves all start at 1 and only later, eventually, become different; it should be noted, however, that the first pseudo-time is never placed at time 0, but later on the follow-up time scale. In Figure 1, an example of the model-based RD estimate with pointwise confidence intervals, constant through time, is reported in the right panel. The constant estimated RD can be used to obtain a constant estimate of NNT by inversion; in the case of treatment T, NNT = [α̂]⁻¹. The value of 1 indicates the largest possible beneficial effect of NNT, while in correspondence with no covariate effect (RD = 0) the NNT value is ±∞. The largest possible harmful effect is −1. Positive and negative values of NNT represent the expected number of patients needed to be treated for one additional patient to benefit or to be harmed, respectively.

In the case of the log-log link, g = log(−log(•)), exp(α), exp(β) and exp(γ) are the ratios between cumulative hazard functions associated with a change of one unit in the covariates. This ratio is equal to that of the hazard functions only in the proportional hazard case.

The method also allows the measures of effect to be estimated for continuous covariates. For example, the evaluation of the effect of a biomarker measured on a continuous scale, without cutoffs, is still possible with this methodology.

The use of different link functions to obtain a particular measure of effect is an established technique in binomial regression, where non-canonical links, such as the logarithm, allow adjusted measures of impact other than the odds ratio to be obtained [1, 38]. Wacholder [39] is an excellent reference for deepening these aspects in the framework of logistic regression.

When there is evidence for time-dependent effects of the covariates, the interactions between the covariates and the spline bases B₁(t) and B₂(t) are included in model (3).

In such a case, the estimated g-transformed survival probability differences change during follow-up time. In order to show the time-varying effect of a dichotomous covariate, for example treatment T, it is useful to adopt a graphical display where time is on the horizontal axis and the function exp(α + γ₁B₁(t) + γ₂B₂(t)) is on the vertical axis (exponentiation is not used with the identity link: RD(t) = α + γ₁B₁(t) + γ₂B₂(t)). In this case the estimated NNT(t) naturally varies through follow-up time and is again obtained by inversion: NNT(t) = RD(t)⁻¹.

For a continuous covariate, such as age A in the example, it is possible to use a surface plot, where age and time are on the x and y axes, while the z axis reports the covariate effect with respect to a reference value. It would also be possible to model the age effect with spline bases; in this case, the interaction between age and time is obtained through tensor products of the spline bases of age and time.

When a large number of pseudo-times is used, spline functions allow the baseline risk to be modelled more parsimoniously than indicator functions. This is particularly important for the modelling of time-dependent effects in connection with the different link functions. In principle, when a covariate effect is constant under a specific link, it should be time-varying under the other links; no statistical evidence against a constant covariate effect under more than one link may simply be due to lack of power. The problem can also be exacerbated by multiple testing issues. The selection of time-dependent effects therefore depends on the link transformation used; as a consequence, the adjusted effect of a covariate may be constant under one link function but time-dependent under a different one.

Moreover, the fitted values of the different models selected for the different link functions are generally different, being equal only if the models are saturated. Traditionally, the strategy used in the application of transformation models such as (3) was to select the best-fitting g transform, i.e., the transform under which covariate effects are constant through time; see [40, 41] for examples. The approach considered here is different: the interest is in using the g-transform that is most informative for the clinical or biological counterpart. Generally, the best-fitting link function and the one selected by the researcher are not the same, and time-dependent effects should therefore be expected in the model.

Pointwise confidence intervals

Approximate pointwise 95% confidence intervals are calculated from model results as in standard GLM/GEE modelling. The computation is easy when covariate effects are constant on the g-transformed scale; the cluster-robust variance-covariance matrix must be used. Using model (3) as an example, the 95% CI for the treatment, on the transformed g scale, will be

[l_lower, l_upper] = α̂ ± 1.96 · st.error(α̂),

where st.error(α̂) is the estimated cluster-robust standard error of the model parameter α̂. When g is the log, logit or log(−log) link function, the 95% CI for the treatment effect (respectively an RR, OR or HR) is [exp(l_lower), exp(l_upper)]. With the identity link, the 95% confidence interval is [l_lower, l_upper], without additional transformations, and the corresponding interval for NNT is [1/l_upper, 1/l_lower].

A clarification is necessary for the confidence interval of the NNT.

When the estimated constant RD is not statistically significant, the confidence interval of RD includes 0, and its limits are one positive and one negative. The resulting confidence interval for NNT should then include infinity (∞) [42]; if the 95% CI for RD is [l_lower, l_upper] with l_lower < 0 < l_upper, the confidence region for NNT is the union

(−∞, 1/l_lower] ∪ [1/l_upper, +∞),

running from the harmful bound 1/l_lower through infinity to the beneficial bound 1/l_upper.

With time-varying covariate effects, the variance of a linear combination of different parameter estimates must be computed for each of the follow-up times. For example, for treatment T, the variance of interest at a specific time t is that of the linear combination α + γ₁B₁(t) + γ₂B₂(t). Written out in terms of the variances and covariances of the estimates, the variance at time t is given by:

V(α̂) + B₁(t)² V(γ̂₁) + B₂(t)² V(γ̂₂) + 2B₁(t) Cov(α̂, γ̂₁) + 2B₂(t) Cov(α̂, γ̂₂) + 2B₁(t)B₂(t) Cov(γ̂₁, γ̂₂),

where V(•) stands for the cluster-robust variance, while Cov(•,•) stands for the cluster-robust covariance of two parameter estimates. When the variances at the different times have been calculated, the pointwise 95% CIs can be computed as before.
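For illustration, the pointwise variance and confidence interval can be computed as a quadratic form; the numbers below are dummy values standing in for the cluster-robust estimates of (α̂, γ̂₁, γ̂₂) and their variance-covariance matrix:

    # Hypothetical cluster-robust estimates for (alpha, gamma1, gamma2)
    coefs <- c(alpha = -0.10, gamma1 = 0.020, gamma2 = -0.010)
    Sigma <- matrix(c(0.0100, 0.0020, 0.0010,
                      0.0020, 0.0080, 0.0020,
                      0.0010, 0.0020, 0.0090), nrow = 3)

    # RD(t) = alpha + gamma1*B1(t) + gamma2*B2(t) and its pointwise 95% CI
    rd_ci_at <- function(B1t, B2t) {
      b   <- c(1, B1t, B2t)                         # weights of the linear combination
      est <- sum(b * coefs)
      se  <- sqrt(as.numeric(t(b) %*% Sigma %*% b)) # standard error from the quadratic form
      c(lower = est - 1.96 * se, rd = est, upper = est + 1.96 * se)
    }

    rd_ci_at(B1t = 6, B2t = 2.5)  # pointwise estimate and CI at one follow-up time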

Software implementation

The approach to censored data regression based on pseudo values was applied to regression models for the cumulative incidence functions in competing risks and for multi-state modeling [43], for the restricted mean [44] and for the survival function at a fixed point in time [45]. Implementation details and software can be found in Klein et al., [32], and Andersen and Perme, [46].

Software is available to compute pseudo-values (the macro %pseudosurv in SAS and the function pseudosurv in the R package pseudo [32]). Standard GEE tools available in SAS or R can then be used for the regression. In SAS, proc genmod allows the link function to be changed using the FWDLINK and INVLINK statements. In R, the package geepack can be conveniently used; see [32] for details.

As an example of the R software implementation, the identity link is used:
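(The call below is a sketch of what this might look like, following the geepack usage documented in [32]; dAug is the augmented data set built above, with one row per subject and pseudo-time, and trt, age and gender are hypothetical covariate names.)

    library(geepack)
    library(rms)    # provides rcs() for restricted cubic spline bases

    # GEE regression of the pseudo-values with the identity link:
    # the coefficients are adjusted survival probability differences, RD(t)
    fit <- geese(pseudo ~ rcs(tpseudo, 3) + trt + age + gender,
                 data = dAug, id = id,
                 family = gaussian, mean.link = "identity",
                 corstr = "independence", scale.fix = TRUE, jack = TRUE)
    summary(fit)    # Wald tests based on the cluster-robust standard errors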

where the variable pseudo contains the pseudo-values and the variable tpseudo the pseudo-times, following the software reported in [32]. The R function rcs of the package rms [47] is used to compute the restricted cubic spline bases. Each subject is represented by multiple rows in the data, one for each pseudo-time; the records for each subject are identified by means of the variable id, which is used by the geese function to estimate the robust standard errors. Using the identity link function, the estimated coefficients can be interpreted as the adjusted RD(t) estimates.



There are interesting analogical patterns to be found in the history of inflection class change — and stability — in Frisian. It has become clear that the analogical models were better, overall, at predicting the stability of verbs than at predicting the correct direction of inflection class shift: in all cases, the proportion of correct predictions was higher for the verb systems as a whole than for the subgroup of verbs that had historically undergone shift.

This leads us to conclude that analogy by itself — as modelled by the Analogical Modeling program (AM) and the Minimal Generalization Learner (MGL) — does not possess the full conservative force needed to explain historical patterns of stability. Analogy can account for the majority of cases where verbs remained stable, but still predicted that a minority would change, when this was not the case. In other words, it is a bit too eager to reorganise the system. Another candidate for explaining diachronic stability is token frequency, particularly in that a higher token frequency is thought to make forms more resistant to morphological change, and therefore to inflection class shift. In this chapter, we present a pilot study that relates the results of analogical modelling to the token frequency of the verbs involved.


Friday, November 30, 2018

The European Union summary report on surveillance for the presence of transmissible spongiform encephalopathies (TSEs) in 2017


13 kDa (CTF13). Among ovine TSEs, classical scrapie and Nor98 were discriminated from both Norwegian moose isolates, while CH1641 samples had molecular features partially overlapping with the moose, i.e. a low MW PrPres and the presence of CTF13. In contrast, moose PrPSc did not overlap with any bovine PrPSc. Indeed, the MW of moose PrPres was lower than H-BSE and similar to C-BSE and L-BSE PrPres, but the two bovine prions lacked additional PrPres fragments.

12-kDa small C-terminal fragment in wild-type mice. This study provides new insight into the relationship between CH1641-like scrapie isolates and BSEs. In addition, interspecies transmission models such as we have demonstrated here could be a great help to investigate the origin and host range of animal prions.

The 12-kDa small C-terminal fragment of PrPres could only be detected in TgBoPrP mice infected with Sh294. Moreover, the mean survival periods, lesion profiles, and PrPd distribution patterns in the brain were distinctly different between the mice infected with Sh294 and those infected with L-BSE. Interspecies transmission of TgBo-passaged Sh294 to wild-type (ICR) mice also demonstrated the biological differences between Sh294 and L-BSE passaged in TgBoPrP mice. It is known that L-BSE from cattle or TgBoPrP mice is unable to transmit to ICR mice [20, 27]. In contrast, TgBo-passaged Sh294 can transmit to ICR mice, albeit inefficiently. Since we previously reported that L-BSE could transmit to ICR mice after cross-species transmission to sheep [20], we can now compare the characteristics of the CH1641-like scrapie isolate and L-BSE in wild-type mice. The mean survival period of ICR mice infected with sheep-passaged L-BSE was significantly extended in comparison with that of ICR mice infected with TgBo-passaged Sh294 (Table 4). Although the glycoform profiles were similar between ICR mice infected with TgBo-passaged Sh294 and those infected with sheep-passaged L-BSE, the molecular mass of PrPres was distinctly different between them. In the histopathological analysis, florid plaques were detected in the brains of ICR mice infected with sheep-passaged L-BSE [20], but not in the brains of mice infected with TgBo-passaged Sh294. Thus, all our data demonstrate that the Sh294 isolate is independent of all three BSE strains, suggesting that CH1641-like scrapie isolates could not be candidates for the origin of BSEs. Indeed, several studies have suggested that spontaneously occurring atypical BSEs in cattle may have been the origin of C-BSE [28, 29, 30, 31, 32].




Question Description

Option #1: Risk Planning

Prepare a paper in which you address each of the following in separate, titled sections framed by an introduction and a conclusion:

  1. Select a hypothetical or real project in your personal life, such as moving from one city to another or buying a house. Describe the project specifically and measurably in terms of scope, budget, schedule, and quality.
  2. Identify 3-4 logical chronological phases that comprise the entirety of the project duration and describe these phases.
  3. Brainstorm a total of eight risks facing this project, including six negative risks and two positive risks. State each of the six negative risks concisely in the form of an undesirable event and explain each risk. State each of the two positive risks concisely in the form of a desirable event and explain each risk.
  4. Prepare a table that includes the following column headings, from left to right: Project Phase, Risk, Impact, Likelihood, Composite Risk Score, Response Type, and Response Description. Populate only the first two of these columns. Populate the first column with the names of the phases identified in (2) above. Populate the second column with the eight risks identified in (3) above in such a way that each risk appears in the same row of the table as the project phase to which it pertains. Title, but do not populate, the Impact, Likelihood, Composite Risk Score, Response Type, and Response Description columns at this time. You will populate these columns as a part of your Module 6 CT Assignment.

Your paper should be 4-5 pages in length, including the table presented in (4) above. You are required to support your paper with at least three scholarly sources from the CSU-Global Library. The paper must be formatted according to the APA Requirements.

Submit only one MS Word file. You may create your table for (4) above in Excel if you would like, but you must then move it successfully to the MS Word file that you submit—and the table must be readily viewable by the reader of the MS Word document. Hyperlinks, embedded files, or any other approach to incorporating the table into the MS Word document that renders the table not immediately viewable by a reader of the document—without having to take any action—are not acceptable and will result in no credit for the table.

A simple essay format, with a proper introduction, body, and appropriate conclusion, is expected.




Rubric

This criterion is linked to a Learning Outcome: White Paper Cover. Up to 5 extra credit points are available for a relevant, well-laid-out cover, cover image, and title.

These are possible extra credit points.

This criterion is linked to a Learning Outcome: Table of Contents.

This criterion is linked to a Learning Outcome: Graphic Layout. The graphic layout is consistent and professional in appearance. (Students may upload a copy of the document they modeled theirs on, which must be from their field and intended for a similar purpose.)

The document layout was professional in appearance or consistent, but not both; or some minimal effort was made, but the document is not exceptionally well done.

The assignment was completed as an essay with no effort to make it appear like a professional document.

No effort was made to make the document professional in appearance.

This would include the entire document not being formatted to appear professional (a group grade), or a group member not formatting their section to match the rest of the group's document (a deduction for that individual).

This criterion is linked to a Learning Outcome: Consistency of Figures and Tables. Figures and tables are graphically consistent in their layout within the document, titles, numbering, citations, and fonts.

This criterion is linked to a Learning Outcome: Introduction.

Document summary is well done

Document summary is average

Document summary is poorly done

Document summary is very poorly done or missing

This criterion is linked to a Learning Outcome: Conclusion.



Module 2 - Background

Stocks and Bonds Podcast. (n.d.). Pearson Learning Solutions, New York, NY.

Part of what you learned in Module 1 was the time value of money. It’s good to have a strong understanding of it, since time-value-of-money techniques are used for both stock valuation and bond valuation.

The Dividend Discount Model can be used to value common stock. There are several varieties of the Dividend Discount Model, including the zero growth model, the constant growth model, and the differential growth model. An analyst needs to use his or her best judgment to determine which model variety should be used to value a company’s common stock. For example, if the analyst forecasts that the company’s dividends will grow at a fixed rate of 3% per year forever, then the constant growth model should be used. If, on the other hand, the analyst forecasts that the company’s dividends will grow at a 25% growth rate for the next three years and then grow at a constant rate of 5% per year, then the differential growth model should be used. The zero growth model is a special version of the constant growth model, whereby the constant growth rate is 0%. The Dividend Discount Model will result in an estimate for the intrinsic value of the common stock. An analyst would then compare the intrinsic value of the common stock to the market price of the common stock. If the intrinsic value of the common stock is greater than the market price, the stock should be bought. If the intrinsic value of the common stock is less than the market price, the stock should be sold if it’s currently owned. If the intrinsic value of the common stock is equal to or just about equal to the market price, the stock should be held if it’s currently owned or avoided if it’s not currently owned. The required rate of return can be calculated from the Capital Asset Pricing Model (CAPM).
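For illustration, here is a minimal Python sketch of the constant growth and differential (two-stage) growth varieties, with the required return taken from CAPM as described above. The function names and all numbers are illustrative assumptions, not part of the module materials.

def capm_required_return(risk_free, beta, market_return):
    """CAPM required return: r = rf + beta * (rm - rf)."""
    return risk_free + beta * (market_return - risk_free)

def constant_growth_value(d1, r, g):
    """Constant (Gordon) growth model: V0 = D1 / (r - g). Requires r > g."""
    if r <= g:
        raise ValueError("required return must exceed the growth rate")
    return d1 / (r - g)

def differential_growth_value(d0, r, g_high, high_years, g_stable):
    """Two-stage model: discount each high-growth dividend, then add the
    discounted terminal value from the constant growth model."""
    value = 0.0
    dividend = d0
    for t in range(1, high_years + 1):
        dividend *= 1 + g_high
        value += dividend / (1 + r) ** t
    # Terminal value at the end of the high-growth stage, discounted back.
    terminal = dividend * (1 + g_stable) / (r - g_stable)
    return value + terminal / (1 + r) ** high_years

r = capm_required_return(risk_free=0.03, beta=1.2, market_return=0.08)  # 9%
print(constant_growth_value(d1=2.00, r=r, g=0.03))  # ~33.33 intrinsic value
print(differential_growth_value(d0=2.00, r=r, g_high=0.25,
                                high_years=3, g_stable=0.05))

An analyst would compare each printed intrinsic value to the market price to reach the buy, sell, or hold decision described above.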

Review the following website links:

Pages.stern.nyu.edu (n.d.). Dividend discount models. Retrieved from http://pages.stern.nyu.edu/

Valuebasedmanagement.net (n.d.). Capital asset pricing model (CAPM). Retrieved from http://www.valuebasedmanagement.net/methods_capm.html

Preferred stock is a hybrid security; that is, it combines features of common stock and bonds. Preferred stock is another financing option for companies. Because it pays a fixed dividend indefinitely, preferred stock is valued as a perpetuity.
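Since a perpetuity is just a fixed payment divided by the required return, the preferred stock calculation is a one-liner. A minimal sketch with illustrative numbers:

def preferred_stock_value(dividend, required_return):
    """Perpetuity: a fixed dividend forever, so V = D / r."""
    return dividend / required_return

# e.g., a $5.00 annual preferred dividend at a 10% required return:
print(preferred_stock_value(5.00, 0.10))  # 50.0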

Corporations issue bonds and stocks to raise funds. Governments also issue bonds to raise funds. Bonds typically pay interest every six months, and then when the bond matures, the investor gets the par value of the bond. The interest every six months is calculated by multiplying the coupon rate by the par value and dividing the result by two. Since the interest paid is constant for a number of years, that’s an annuity. Since the par value is obtained only once, at the bond’s maturity date, that’s not an annuity but a lump sum. Therefore, to find the value of a bond, you add the present value of an annuity to the present value of a lump sum. A financial calculator and/or Excel can help speed up the process of determining a bond’s value. Here’s how to value a typical corporate bond:

Bond Value = C × [1 − (1 + i)^(−n)] / i + F / (1 + i)^n

C = coupon interest payment per period

i = interest rate (yield) per period

n = number of periods until maturity

F = par (face) value
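As a sketch, the formula above translates directly into Python, using the semiannual convention described earlier (coupon rate times par, divided by two, paid twice a year). The function name and example numbers are illustrative assumptions:

def bond_value(par, coupon_rate, ytm, years, payments_per_year=2):
    """Price a bond per the formula above: the present value of the
    coupon annuity plus the present value of the par value at maturity."""
    i = ytm / payments_per_year                 # yield per period
    n = years * payments_per_year               # number of periods
    c = coupon_rate * par / payments_per_year   # coupon per period
    annuity = c * (1 - (1 + i) ** -n) / i
    lump_sum = par / (1 + i) ** n
    return annuity + lump_sum

# A 10-year, 6% coupon bond priced at an 8% yield sells below par:
print(bond_value(par=1000, coupon_rate=0.06, ytm=0.08, years=10))  # ~864.10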

There are three main interest rates to know when working with bonds: the coupon rate, the yield to maturity, and the current yield. The coupon rate is typically fixed and is the interest rate that’s paid on the bond. You multiply the coupon rate by the par value to determine the annual interest paid on the bond. The par value is also known as the maturity value or the face value. The yield to maturity changes over time and indicates what rate of return an investor can expect to earn if the bond is held to maturity. There’s an inverse relationship between interest rates and bond prices: as interest rates increase, bond prices decline, and as interest rates decrease, bond prices increase. When a bond’s yield to maturity equals the bond’s coupon rate, the bond will sell at par value. When a bond’s yield to maturity exceeds the bond’s coupon rate, the bond will sell at a “discount,” or less than par value. When a bond’s yield to maturity is less than the bond’s coupon rate, the bond will sell at a “premium,” or more than par value. Investors can lose money in bonds. For example, if an investor buys a bond when interest rates are low and then sells the bond before maturity when interest rates are much higher, the investor is likely to have a large capital loss. The current yield is equal to the bond’s annual interest payment divided by the bond’s current price. It measures the interest component of a bond’s return. The coupon rate has the same numerator as the current yield, but it has the bond’s par value in the denominator instead of the bond’s current price.
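A short sketch ties these ideas together: repricing the same illustrative bond at different yields shows the inverse price-yield relationship and the premium/par/discount cases described above. All numbers here are assumptions for illustration:

def bond_price(par, coupon_rate, ytm, years, m=2):
    """Same pricing formula as above: PV of coupons plus PV of par."""
    i, n, c = ytm / m, years * m, coupon_rate * par / m
    return c * (1 - (1 + i) ** -n) / i + par / (1 + i) ** n

def current_yield(annual_coupon, price):
    """Current yield = annual interest payment / current price."""
    return annual_coupon / price

par, coupon_rate = 1000, 0.06
for ytm in (0.04, 0.06, 0.08):
    price = bond_price(par, coupon_rate, ytm, years=10)
    print(f"ytm={ytm:.0%}  price={price:8.2f}  "
          f"current yield={current_yield(coupon_rate * par, price):.2%}")
# 4% ytm -> premium (price > par); 6% -> par; 8% -> discount (price < par)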

