Editorial

HUPO Brain Proteome Project: aims and needs in proteomics

Pages 1-3 | Published online: 09 Jan 2014

Proteome and proteomics stand a good chance of being voted the buzzwords of the new decade in the life sciences. The term ‘proteome’ was introduced in 1994 by the Australian researcher Marc Wilkins to describe all the proteins expressed by a genome in a given cell at a given time. Within just a few years, proteomics became the new hope of the scientific community.

Proteomes are highly dynamic: they differ by gender, cell type and tissue, and they change with aging, medication and other external influences. The study of proteomes is referred to as ‘proteomics’ and is typically performed via 2D gel electrophoresis (protein separation) and mass spectrometry (protein identification). In a standard approach, the proteomes of two states (e.g., healthy vs. diseased tissue) are compared in a differential proteome analysis, thereby revealing differentially expressed proteins Citation[1]. Further characterization of such proteins (i.e., their function or involvement in disease) will hopefully reveal disease biomarkers, which may be used for early diagnosis or may serve as therapeutic targets Citation[2]. That, at least, is the theory.
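The comparison described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real analysis pipeline: the protein names, intensity values and the simple fold-change criterion are all assumptions chosen for clarity (real studies add normalization and statistical testing).

```python
# Hypothetical sketch of a differential proteome comparison: spot
# intensities (e.g., from 2D gel image analysis) for two conditions
# are compared by mean fold change to flag differentially expressed proteins.

def differential_proteins(healthy, diseased, fold_threshold=2.0):
    """Flag proteins whose mean intensity changes by at least
    fold_threshold between the two conditions (illustrative only)."""
    flagged = {}
    for protein in healthy:
        h = sum(healthy[protein]) / len(healthy[protein])
        d = sum(diseased[protein]) / len(diseased[protein])
        ratio = d / h if h > 0 else float("inf")
        if ratio >= fold_threshold or ratio <= 1.0 / fold_threshold:
            flagged[protein] = ratio
    return flagged

# Made-up replicate intensities for two illustrative proteins.
healthy = {"APP": [10.0, 11.0, 9.5], "GFAP": [5.0, 5.2, 4.8]}
diseased = {"APP": [24.0, 26.0, 25.0], "GFAP": [5.1, 4.9, 5.3]}
print(differential_proteins(healthy, diseased))  # APP flagged (~2.5-fold up)
```

In this toy example only APP passes the threshold; GFAP, essentially unchanged, does not.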

Why genomics was easily standardized

Scientists working in the genetics field have certain advantages (at least from today’s vantage point) over those in proteomics. For example, in genetics the object of analysis (i.e., DNA) exhibits a degree of binarity or, to be more exact, quaternarity: the researcher obtains a stream of the bases adenine, guanine, cytosine and thymine, while modifications of the nucleotides are of no interest and are often not even detected via standard sequencing procedures. In addition, DNA can be easily obtained from any cell or tissue, remains relatively stable under normal laboratory conditions and, even more importantly, can be amplified using PCR. Although the human genome consists of more than 3 billion base pairs, the task of deciphering the code of life could be divided among participating groups simply by distributing the chromosomes to different researchers. Due to the stability and accessibility of DNA, the step towards standardization and standard operating procedures was comparably fast and effective. This work was nevertheless a tremendous effort and a giant step towards understanding our genes, offering the basis for modern science.

Why proteomics is not easily standardized

Unfortunately, genomics has its limitations. First, there is no strict linearity from gene to protein: one cannot deduce from the DNA sequence whether a gene is transcribed into mRNA and, if so, in what amount and with what stability. Second, the efficiency of subsequent translation into proteins can only be estimated, while the number of possible protein species is multiplied by (alternative) splicing of the mRNA and by post-translational modifications (e.g., phosphorylation or glycosylation). As a consequence, knowing the gene complement of a given tissue does not per se reveal why that tissue has undergone, for example, cancerous transformation. To elucidate such processes, proteins have to be studied in the context of age, gender and developmental stage. The highly dynamic nature of proteomes means that several states must be analyzed simultaneously, and here a whole bundle of new challenges has to be considered. Proteins, and particularly their modifications, show low stability and a propensity to degrade, which makes, for example, certain kinds of phosphorylation difficult to detect. In contrast to DNA-based strategies, proteomics is not easily suited to high-throughput techniques, although its enormous diversity would demand exactly that, leaving proteomics to manual analysis or expensive robotic platforms. Within a relatively short timeframe, the hype of the postgenomic era has, in part, outrun the necessary elaboration of common standards for sample preparation, analysis and data formats. As a consequence, laboratories working on the same tasks obtain markedly different results, and publications are sometimes hard to reproduce in one’s own laboratory, simply because different strategies or preparations were used Citation[3,4]. Common standards, annotations and interfaces are difficult to create and are thus still missing.

The incoherent results and the enormous complexity of the tasks have led to a degree of omics weariness, particularly among commercial partners, who demand fast and reliable output. They also argue that the common proteomics approach yields long lists of protein names and immense data sets that confuse rather than lead to new insights. This situation may well have contributed to the innovation deficit currently evident in the pharmaceutical industry: while R&D costs increase exponentially, the number of new drugs approved annually steadily decreases Citation[5]. To overcome these problems, an appropriate strategy must be carefully designed.

How to solve the challenge

The most pressing task to start with is standardization Citation[6]. Although it may not be feasible to elaborate fixed standard operating procedures for every imaginable setup and question, the key parameters of each experiment have to be annotated at the very least, so that differences in results can be traced back to variable steps in the workflow. For example, when analyzing an Alzheimer’s disease mouse model it is important to:

Ensure standardized breeding and precharacterize the mice (males are normally preferred, as females exhibit proteome changes over the estrous cycle)

Handle all samples in exactly the same way (tissue removal, storage, subfractionating and processing)

Apply fixed parameters and thresholds when separating and identifying proteins

Use the same search algorithms (software) and compatible data formats

Repeat the experiment with distinct samples as often as required to demonstrate reproducibility (at least five times)

Some of these requirements cannot be met with human material: each of us carries a different set of gene variants (polymorphisms) and has a different life history, entailing varying proteomes. This can be addressed by studying numerous human samples and applying statistical methods, which in turn means that as much information as possible about each donor must be collected (age, gender, health history and medication).
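The statistical approach hinted at above can be sketched as follows. This is a hypothetical illustration under assumed data: the sample annotations, abundance values and the choice of Welch's t statistic (robust to unequal group variances) are mine, not the project's prescribed method, and a real study would also correct for multiple testing across thousands of proteins.

```python
# Illustrative sketch: inter-individual variability in human samples is
# handled by analyzing many annotated samples and testing group
# differences statistically. Here, Welch's t statistic compares one
# protein's abundance between patient and control groups.

from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances (sample variance in each group)."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)
    return (mean(group_a) - mean(group_b)) / (va / na + vb / nb) ** 0.5

# Each sample carries the metadata the text calls for (age, gender, etc.);
# all values are invented for the example.
samples = [
    {"id": "P1", "group": "patient", "age": 71, "abundance": 8.4},
    {"id": "P2", "group": "patient", "age": 68, "abundance": 9.1},
    {"id": "P3", "group": "patient", "age": 74, "abundance": 8.8},
    {"id": "C1", "group": "control", "age": 70, "abundance": 5.2},
    {"id": "C2", "group": "control", "age": 69, "abundance": 5.9},
    {"id": "C3", "group": "control", "age": 72, "abundance": 5.5},
]
patients = [s["abundance"] for s in samples if s["group"] == "patient"]
controls = [s["abundance"] for s in samples if s["group"] == "control"]
print(round(welch_t(patients, controls), 2))
```

A large |t| suggests a genuine group difference rather than inter-individual noise; with more donors and recorded covariates, the same comparison can be stratified by age, gender or medication.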

Nevertheless, it should be kept in mind that differentially expressed proteins are not necessarily involved in the disease under study; they have to be validated and characterized with subsequent methods to elucidate protein function, protein–protein interactions and affected pathways.

The HUPO Brain Proteome Project

It is clear that the aforementioned tasks cannot be performed by one group alone. Therefore, in 2001 the Human Proteome Organisation (HUPO) was founded as a counterpart to the Human Genome Organization (HUGO) Citation[7,101]. HUPO is a nonprofit organization promoting proteomic research and the analysis of human tissues. Several initiatives have been established under the roof of HUPO, each analyzing the proteome of a distinct human organ or compartment, for example, the Plasma Proteome Project (PPP), the Liver Proteome Project (LPP) Citation[8] and the Brain Proteome Project (BPP) Citation[6,102].

The HUPO BPP is chaired, structured and organized by Helmut E Meyer (Medical Proteom-Center, Bochum, Germany) and Joachim Klose (Charité, Berlin, Germany). The project started in 2002, recruiting scientists from all over the world. The HUPO BPP initiative unites some of the world’s leading laboratories, offering contact and interaction with the newest technologies and developments in the neuroproteomics field. The postulated vision of the HUPO BPP is to understand the pathological processes of the brain proteome in neurodegenerative diseases and ageing. This will be achieved by deciphering the normal brain proteome, correlating the expression patterns of brain proteins and mRNA, and identifying disease-related proteins involved in neurodegenerative diseases.

A pilot phase has started that addresses a proteome analysis of mouse brain at three different ages (all samples obtained and prepared by one source) and a differential quantitative proteome analysis of biopsy and autopsy human brain samples Citation[9]. A total of 18 groups from Austria, Belgium, China, Germany, Greece, Korea, Ireland, Switzerland, the UK and the USA are analyzing these samples. This group constitutes the initiation point of the BPP standardization efforts, which are merged with those of the HUPO Proteome Standardization Initiative (PSI) Citation[10]. Data from the pilot studies and subsequent work will be collected at a newly developed Data Collection Center, offering a close interface with ongoing scientific research. Also in agreement with HUPO PSI, thorough data reprocessing will compare the results of the different groups, the different pilot studies and even other HUPO projects. With the help of the resulting insights and annotations, new standards and proposals will be elaborated and used in the subsequent master phase. This phase will start with a Mouse Workshop aiming to evaluate suitable, available mouse models for neurodegenerative diseases. Proteins found in several independent models are more likely to be specific disease biomarkers than those found in only one model. In terms of diagnosis, it would be desirable to identify human brain-derived proteins secreted into body fluids that can easily be obtained from patients Citation[12]. Therefore, the systematic study of human body fluids is another central concern of the HUPO BPP.

At the same time, strategies concerning the next steps in characterizing candidate proteins will be elaborated, serving as the initial pool for subsequent diagnostics and drug development.

Proteomics visions

Proteomics is a powerful tool for uncovering proteins involved in or associated with (neurodegenerative) diseases. It has to be kept in mind that standardization and statistical review of the results are essential milestones in any (proteomics) approach, and that candidate proteins have to be validated and further characterized via complementary techniques. With these two points in mind, it should be possible to elaborate a suitable (high-throughput) workflow, to gain a deeper understanding of the human brain with the help of this machinery, and to develop approaches against neurodegenerative diseases (via diagnosis in body fluids or the development of new drugs).

Acknowledgements

Parts of the HUPO BPP work are funded by the German Ministry of Education and Research (BMBF).

References

  1. Marcus K, Schmidt O, Schaefer H, Hamacher M, van Hall A, Meyer HE. Proteomics – application to the brain. Neuhold L (Ed.), International Review of Neurology (2004).
  2. Wilkins MR. What do we want from proteomics in the detection and avoidance of adverse drug reactions? Toxicol. Lett. 127, 245–249 (2002).
  3. Bibl M, Esselmann H, Otto M et al. Cerebrospinal fluid amyloid β peptide patterns in Alzheimer’s disease patients and nondemented controls depend on sample pretreatment: indication of carrier-mediated epitope masking of amyloid β peptides. Electrophoresis 25, 2912–2918 (2004).
  4. Terry DE, Desiderio DM. Between-gel reproducibility of the human cerebrospinal fluid proteome. Proteomics 3, 1962–1979 (2003).
  5. Zolg JW, Langen H. How industry is approaching the search for new diagnostic markers and biomarkers. Mol. Cell. Proteomics 3, 345–354 (2004).
  6. Meyer HE, Klose J, Hamacher M. HBPP and the pursuit of standardization. Lancet Neurol. 11, 657–658 (2003).
  7. Hanash S. Building a foundation for the human proteome: the role of the Human Proteome Organization. J. Rev. Proteomics 3, 197–199 (2004).
  8. Hanash S. HUPO initiatives relevant to clinical proteomics. Mol. Cell. Proteomics 3, 298–301 (2004).
  9. Hamacher M, Klose J, Rossier J, Marcus K, Meyer HE. Does understanding the brain need proteomics and does understanding proteomics need brains? Second HUPO HBPP Workshop, Paris, France. Proteomics 7, 1932–1934 (2004).
  10. Bluggel M, Bailey S, Korting G et al. Towards data management of the HUPO Human Brain Proteome Project pilot phase. Proteomics 8, 2361–2362 (2004).
  11. Steinacker P, Mollenhauer B, Bibl M et al. Heart fatty acid binding protein as a potential diagnostic marker for neurodegenerative diseases. Neurosci. Lett. 370, 36–39 (2004).
  12. Wiltfang J, Lewczuk P, Maler M, Bleich S, Smirnov A, Kornhuber J. Neurochemical early and differential diagnostics for Alzheimer’s disease. MMW Fortschr. Med. 146, 38–40 (2004).

Websites

  101. Human Proteome Organisation www.hupo.org (Viewed January 2005)
  102. HUPO Brain Proteome Project www.hbpp.org (Viewed January 2005)
