  • 6 Dec 2018 13:55 | Melanie Rügenhagen (Administrator)

    In this video, Prof. Michael Seadle gives a short introduction to our exhibition “How Trustworthy? An Exhibition on Negligence, Fraud and Measuring Integrity”.



  • 19 Oct 2018 13:52 | Melanie Rügenhagen (Administrator)
    On the evening of 18 October 2018, we opened the exhibition HOW TRUSTWORTHY? – An Exhibition on Negligence, Fraud, and Measuring Integrity in the Jacob-und-Wilhelm-Grimm-Zentrum, the university library of Humboldt-Universität zu Berlin (HU Berlin). Short welcoming speeches by Prof. Dr. Degkwitz (director of the university library), Prof. Dr. Graßhoff (vice dean of the Faculty of Humanities) as well as Prof. Seadle, PhD (Senior Researcher at HU Berlin and Principal Investigator of the HEADT Centre) introduced the urgency of the topic.

    Impressions from setting up the exhibition. Photo credit: Thorsten Beck.

    The exhibition is a joint project of the HEADT Centre and the Berlin School of Library and Information Science at Humboldt-Universität zu Berlin and shows that both human error and deliberate manipulation can compromise the integrity of scholarly findings.

    Interested people can visit the exhibition until 17 December 2018 in the foyer of the library at no charge. The English version of the text can be accessed at any time on our website (here).

  • 10 Oct 2018 13:46 | Melanie Rügenhagen (Administrator)


    On 9 October, Elsevier Connect published the HEADT Centre article Combating image misuse in science: new Humboldt database provides “missing link” by Dr. Thorsten Beck. The article is free to access. You are welcome to read it here!



  • 21 Sep 2018 13:49 | Thorsten Beck (Administrator)

    The goal of this exhibition is to increase awareness about research integrity. The exhibition highlights areas where both human errors and intentional manipulation have resulted in damage to careers, and it serves as a learning tool.


    Cover Image: Eagle Nebula, M 16, Messier 16 // NASA, ESA / Hubble and the Hubble Heritage Team (2015) // Original picture in color, greyscale edited © HEADT Centre 2018

    The exhibition has four parts. One deals with image manipulation and falsification. Another addresses data problems, including human error and fabrication. A third is about plagiarism, fake journals and censorship. The last section covers detection and the nuanced analysis needed to distinguish the grey zones between minor problems, negligence, and outright fraud.

    October 18 – December 17, 2018
    Opening on October 18, 6 pm
    in the foyer of the Jacob-und-Wilhelm-Grimm-Zentrum of Humboldt-Universität zu Berlin

    Geschwister-Scholl-Straße 1/3, 10117 Berlin

    Admission to the exhibition is free.

    An exhibition of the HEADT Centre (Humboldt-Elsevier Advanced Data & Text Center)
    in cooperation with the Berlin School of Library and Information Science
    at Humboldt-Universität zu Berlin

    Cover Image: Eagle Nebula, M 16, Messier 16
    False colored space photography (2015)
    NASA, ESA / Hubble and the Hubble Heritage Team
    Original picture in color, greyscale edited


  • 13 Jul 2018 12:00 | Melanie Rügenhagen (Administrator)

    We have published a new HEADT Centre video that introduces how the HEADT Centre is contributing to research on similarity search at Humboldt-Universität zu Berlin. The video is available here.



  • 13 Jun 2018 11:28 | Michael Seadle (Administrator)

    NOTE: Melanie Rügenhagen was co-author of this entry.

    Replication is difficult to apply to qualitative studies in so far as it means recreating the exact conditions of the original study — a condition that is often impossible in the real world. The key question then becomes: “how close to the original must a replication be to validate an original experiment?” (Seadle, 2018)

    This question is particularly important because of the widespread belief that only quantitative research is replicable. Leppink (2017) writes:

    “Unfortunately, the heuristic of equating a qualitative–quantitative distinction with that of a multiple–single truths distinction is closely linked with the popular belief that replication research has relevance for quantitative research only. In fact, the usefulness of replication research has not rarely been narrowed down even further to repeating randomised controlled experiments.” (Leppink, 2017)

    Dennis and Valacich (2014) suggest three categories for replication studies, only one of which is “exact” (see the column from 23 May 2018). The conceptual and methodological categories are both relevant to qualitative research, because the participants and the context can vary as long as the replication tests the inherent goals and concepts, as well as the methodological framework, of the original. In other words, successful qualitative replications can provide a confirmation of the hypotheses at a higher level of generalisation, even when the specific contexts change. What matters is that the concepts and outcomes remain constant. As Polit and Beck (2010) write:

    “If concepts, relationships, patterns, and successful interventions can be confirmed in multiple contexts, varied times, and with different types of people, confidence in their validity and applicability will be strengthened.” (Polit & Beck, 2010)

    These authors support the use of replication in qualitative research, and argue that replication is the best way to confirm the results of a study:

    “Knowledge does not come simply by testing a new theory, using a new instrument, or inventing a new construct (or, worse, giving an inventive label to an old construct). Knowledge grows through confirmation. Many theses and dissertations would likely have a bigger impact on nursing practice if they were replications that yielded systematic, confirmatory evidence—or if they revealed restrictions on generalized conclusions.” (Polit & Beck, 2010)

    How can one ensure that the evidence is systematic? Leppink (2017) suggests that researchers in all kinds of studies have to decide when they no longer need more data in order to answer their research question and calls this concept saturation.

    It is important to remember that qualitative research normally does not generalise about results beyond the community involved in the samples, which sets a very limited and specific context for the research question. At some point researchers need to decide when their question is answered, stop their inquiries, and come to a conclusion. Leppink (2017) writes:

    “If saturation was achieved, one might expect that a replication of the study with a very similar group of participants would result in very similar findings. If the replication study leads to substantially different findings, this would provide evidence against the saturation assumption made by the researchers in the initial study.”

    Saturation means that the answer to a research question is complete, and becomes a core element of the “systematic, confirmatory evidence” (Polit & Beck, 2010) for analyzing validity. It can also help to provide metrics by uncovering the degree to which a study may be flawed or even intentionally manipulated.
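
    As a rough illustration of saturation as a stopping rule, the toy sketch below (the interviews, themes and threshold are invented, and this is not a procedure taken from Leppink) tracks how many new themes each additional interview contributes and stops once several consecutive interviews add nothing new.

        # Toy illustration of saturation as a stopping rule; interviews and themes are invented.
        interviews = [
            {"workload", "funding"},       # themes coded in interview 1
            {"funding", "mentoring"},      # interview 2 adds "mentoring"
            {"workload", "mentoring"},     # interview 3 adds nothing new
            {"funding"},                   # interview 4 adds nothing new
            {"workload", "funding"},       # interview 5 adds nothing new
        ]

        STOP_AFTER = 3   # assumed rule: stop after 3 consecutive interviews with no new themes
        seen, no_new_streak = set(), 0

        for i, themes in enumerate(interviews, start=1):
            new = themes - seen                  # themes not seen in any earlier interview
            seen |= themes
            no_new_streak = 0 if new else no_new_streak + 1
            print(f"Interview {i}: {len(new)} new theme(s)")
            if no_new_streak >= STOP_AFTER:
                print(f"Saturation assumed after interview {i}.")
                break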

    Nonetheless there are barriers. While a range of studies based on the same concepts and methodology can lead to insights about whether a phenomenon is true, not knowing exactly how the original researchers conducted their studies may make replication impossible (Leppink, 2017). This makes describing the methodology particularly important.

    None of this is easy. Replication studies remain a stepchild in the world of academic publishing. Gleditsch and Janz (2016) write about efforts to encourage replication in their own research area (international relations):

    “Nevertheless, progress has been slow, and many journals still have no policy on replication or fail to follow up in practice.”

    The problem is simple: there is no fame to be gained in showing that someone else’s ideas and conclusions are in fact correct. It is hardly surprising that ambitious researchers avoid doing replications, especially for qualitative research, where the risk of failure is high and success only makes readers think that the original study was done well.

    References

    Gleditsch, Nils Petter, and Nicole Janz. 2016. “Replication in International Relations.” International Studies Perspectives, ekv003. Available online.

    Polit, Denise F., and Cheryl Tatano Beck. 2010. “Generalization in Quantitative and Qualitative Research: Myths and Strategies.” International Journal of Nursing Studies 47 (11): 1451–58. Available online.

    Seadle, Michael. 2018. “Replication Testing.” Column on Information Integrity 2/2018. Published on 23 May 2018. Available online.

    Leppink, Jimmie. 2017. “Revisiting the Quantitative–Qualitative-Mixed Methods Labels: Research Questions, Developments, and the Need for Replication.” Journal of Taibah University Medical Sciences 12 (2). Elsevier B.V.: 97–101. Available online.

    Dennis, Alan R, and Joseph S Valacich. 2014. “A Replication Manifesto.” AIS Transactions on Replication Research 1 (1): 1–5.

  • 23 May 2018 11:26 | Michael Seadle (Administrator)

    Testing for Reliability

    The principle that scientists (and scholars generally) can build on past results means that past results ought to be replicable. Brownill et al. (2016) write:

    This replication by different labs and different researchers enables scientific consensus to emerge because the scientific community becomes more confident that subsequent research examining the same question will not refute the findings.

    And McMillan (2017) writes in his editorial “Replication Studies”:

    Replication studies are important as they essentially perform a check on work in order to verify the previous findings and to make sure, for example, they are not specific to one set of data or circumstance.

    Increasingly replication is also seen as a way to test for data falsification, on the presumption that unreliable results will not be replicable; but as with most forms of testing, it offers no simple answer.

    How does Replication Work?

    The ability to replicate results means that those doing the replication need exact information about how the original experiment was carried out. In physics and chemistry this means precise descriptions in lab books and in articles, and the same machines using the same calibration. In the social sciences, it can be much harder to reproduce the exact conditions, since they depend on human reactions and a variable environment. One well-known case comes from a study by Cornell social psychologist Daryl Bem, who did a word recognition test:

    “[Bem] published his findings in the Journal of Personality and Social Psychology (JPSP) along with eight other experiments providing evidence for what he refers to as “psi”, or psychic effects. There is, needless to say, no shortage of scientists sceptical about his claims. Three research teams independently tried to replicate the effect Bem had reported and, when they could not, they faced serious obstacles to publishing their results.” (Yong, 2012)

    The fact that the other research teams could not replicate the experiment successfully did not suggest to anyone that the data were fake (presumably the students could attest to that), but the failure did cast doubt on the apparent “psychic effects”. Since an exact replication using those Cornell students in that class with all the same social conditions was not possible, the question arises: how close to the original must a replication be to validate an original experiment?

    Dennis and Valacich (2014) talk about “three fundamental categories” of replication:

    • Exact Replications: These articles are exact copies of the original article in terms of method and context. All measures, treatments, statistical analyses, etc. are identical to those of the original study…
    • Methodological Replications: These articles use exactly the same methods as the original study (i.e., measures, treatments, statistics etc.) but are conducted in a different context. …
    • Conceptual Replications: These articles test exactly the same research questions or hypotheses, but use different measures, treatments, analyses and/or context….

    Since the Cornell students were not available for the replications, the replications presumably come under the “methodological” category, or perhaps even the “conceptual” one. Dennis and Valacich (2014) comment: “Conceptual replications are the strongest form of replication because they ensure that there is nothing idiosyncratic about the wording of items, the execution of treatments, or the culture of the original context that would limit the research conclusions.”

    In any case these replication types represent a significant contribution to knowledge by confirming or casting doubt on the earlier results. Why then did the research teams have trouble publishing their results?

    Publishing Replications

    Most journals do not encourage replications. A study that strikes readers as new and exciting and generates attention is a plus, whereas a study that appears to cover old ground, even if it has scholarly value, is less likely to get through the peer review process. Lucy Goodchild van Hilten (2015) writes:

    Publication bias affects the body of scientific knowledge in different ways, including skewing it towards statistically significant or “positive” results. This means that the results of thousands of experiments that fail to confirm the efficacy of a treatment or vaccine – including the outcomes of clinical trials – fail to see the light of day.

    This may be changing and the degree to which it is true depends in part on the academic discipline. David McMillan (2017) writes:

    Cogent Economics & Finance recognises the importance of replication studies. As an indicator of this importance, we now welcome research papers that focus on replication and whose ultimate acceptance depends on the accuracy and thoroughness of the work rather than seeking a ‘new’ result.

    If other journals follow this trend, there could be significantly more testing of scholarly results. Nonetheless a problem remains: except for the design time, replicating results costs almost as much as doing the original experiment, and if the results are in fact exactly the same, the replication is unlikely to be published. Some fields solve the problem with a repeat-and-extend approach, where replication is tied to new features that explicitly build on the replicated results. Much depends on the culture of the discipline.

    For all of its problems, replication remains one of the most effective and reliable tools for uncovering flaws and fake data, and should be used more widely.

    Note

    Bem (2015) did a further “meta-analysis of 90 [replication] experiments from 33 laboratories in 14 countries …” which he claims supports his hypothesis. He published this meta-analysis in an open-access journal for the life sciences that charges $1000 for an article of this length, and Bem explicitly declared that he had no grant support. If nothing else, this is a sign of how difficult it is to continue the discourse in standard academic venues.

    Acknowledgement

    Ms. Melanie Rügenhagen (MA) suggested the topic and assisted with the research. Prof. Dr. Joan Luft provided research content.

    References

    Bem, D., P. E. Tressoldi, T. Rabeyron, and M. Duggan. 2015. “Feeling the Future: A Meta-Analysis of 90 Experiments on the Anomalous Anticipation of Random Future Events.” F1000Research 4:1188. Available online.

    Brownill, Sue, Alan R. Dennis, Samuel Binny, Barney Tan, Joseph Valacich, and Edgar A. Whitley. 2016. “Replication Research: Opportunities, Experiences and Challenges.” In Thirty Seventh International Conference on Information Systems. Dublin, Ireland. Available online.

    Dennis, Alan R, and Joseph S Valacich. 2014. “A Replication Manifesto.” AIS Transactions on Replication Research 1 (1): 1–5.

    Goodchild van Hilten, Lucy. 2015. “Why It’s Time to Publish Research ‘Failures.’” Elsevier Connect. Available online.

    McMillan, David. 2017. “Replication Studies.” Cogent Economics and Finance, 2017. Available online.

    Yong, Ed. 2012. “Replication Studies: Bad Copy.” Nature 485 (7398): 298–300. Available online.


  • 16 May 2018 11:24 | Michael Seadle (Administrator)

    Burden of Proof

    Jochen Zenthöfer wrote an article in the Frankfurter Allgemeine newspaper on 18 April 2018 in which he expresses concern about the number of plagiarism cases under consideration at German universities. As he notes, the cases come largely from the VroniPlag Wiki. His article is the focus of this column.

    There is an assumption in most western legal systems that a person is innocent until proven guilty, but, as Zenthöfer (2018) notes, this principle derives from criminal law:

    “Als Grund führt die HU das Prinzip der Unschuldsvermutung an, das freilich nur im – hier nicht einschlägigen – Strafrecht gilt.” [The reason given by the HU is the principle of the presumption of innocence, which only applies in criminal law, and is not relevant here. – my translation]

    The author seems to imply that the presumption of innocence ought to be ignored in a process that could destroy a career and strip a person of the means of livelihood. When the accuser is an official body, such as a commission of a university, that has done a careful analysis and presents a well-founded conclusion, it may be reasonable to put the burden of proof of innocence on the accused, but when a self-constituted group such as the VroniPlag Wiki makes an accusation, the universities involved have an obligation to investigate thoroughly and carefully to see whether the accusation is legitimate.

    Standards

    An investigation into plagiarism needs to apply appropriate standards. In discussing “Policies and Initiatives Aimed at Addressing Research Misconduct in High-Income Countries,” Resnik and Master (2013) refer to the COPE guidelines (2018), which define plagiarism as occurring:

    “When somebody presents the work of others (data, words or theories) as if they were his/her own and without proper acknowledgment.” (COPE, 2018)

    While this definition is comprehensive, it gives no explicit measure for determining what actually constitutes plagiarism. Mere text overlap is insufficient: a short factual statement such as “Berlin is the capital of Germany” gets thousands of hits in a Google search and could not reasonably be called plagiarism. The phrase in the guidelines about “proper acknowledgment” is equally unspecific, not merely because citation styles vary, but because expectations vary about exactly how and where in the text to put the reference.

    Paraphrasing is not the same as plagiarism. Lee (2015) explains rules for paraphrasing in the American Psychological Association Style Blog:

    “A paraphrase restates someone else’s words in a new way. For example, you might put a sentence into your own words, or you might summarize what another author or set of authors found. When you include a paraphrase in a paper, you are required to include only the author and date in the citation.”

    This definition leaves latitude for understanding what “in your own words” means, which does not necessarily imply avoiding all the original words and phrases. When paraphrasing, it is almost impossible to avoid content-carrying words or phrases that have a particular meaning. Nonetheless there are plagiarism hunters who see plagiarism in every overlap.

    Metrics

    When evaluating a work for plagiarism, it is important to have rational metrics. Copying a complete paragraph word for word (without quotes) is plagiarism. Copying a complete long sentence word for word suggests plagiarism. A case in which a majority of the words in a paragraph or sentence match words in the same order in another text could be deliberate plagiarism that the author tried to obscure, or it could be a case of good verbal memory, or there may simply have been a logic to the order and the word choice. Absolute uniqueness of language is not necessarily the hallmark of good scholarship.

    Companies like iThenticate are careful to report the percentage of plagiarism only in terms of the number of words in the whole work. VroniPlag counts plagiarism in terms of how many pages have hits (“Anzahl Seiten mit Funden”), which means that even a page with a mere nine matching words (a set of four words and a set of five words) adds to the page count (see VroniPlag). This exaggerates the impression of the problem to a point that could be considered misrepresentation in any scholarly work.
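
    To make the difference between the two metrics concrete, here is a small illustrative calculation (the thesis size, page length and match counts are invented, not drawn from any real case): a handful of short matches spread over many pages barely registers in a word-percentage measure but inflates a page-based count.

        # Illustrative only: all numbers are invented, not real iThenticate or VroniPlag output.
        pages_total = 200              # a 200-page thesis
        words_per_page = 300           # rough page length
        pages_with_hits = 40           # pages containing any match at all
        matched_words_per_hit_page = 9 # e.g. a four-word and a five-word overlap per page

        total_words = pages_total * words_per_page
        matched_words = pages_with_hits * matched_words_per_hit_page

        word_share = 100 * matched_words / total_words    # word-based measure: 0.6%
        page_share = 100 * pages_with_hits / pages_total  # "pages with hits" measure: 20.0%

        print(f"Matched words: {word_share:.1f}% of the text")
        print(f"Pages with hits: {page_share:.1f}% of the pages")

    The same short overlaps look far more serious when counted by page than when counted by word.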

    In my book on “Quantifying Research Integrity” (December 2016), I suggest a grey-scale measure for plagiarism cases in which the number of contiguous words is measured within a particular unit, such as a paragraph or sentence. One can disagree with the exact numbers, but using transparent metrics as a standard matters. Exactly where copying occurs matters too: it is less surprising to have word overlap in a literature review than in the conclusions, and facts and standard phrases in an academic discipline need to be deducted.
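
    A minimal sketch of how such a contiguous-word measure could be computed appears below. It is an assumed simplification, not the scale from the book: for each unit of text it reports the longest run of words shared with a source and maps that run onto rough categories; the example texts and thresholds are hypothetical.

        from difflib import SequenceMatcher

        def longest_shared_run(unit: str, source: str) -> int:
            """Longest contiguous sequence of words that 'unit' shares with 'source'."""
            a, b = unit.lower().split(), source.lower().split()
            match = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
            return match.size

        source = ("the integrity of scholarly findings can be compromised "
                  "by both human error and deliberate manipulation")
        paragraph = ("As earlier work notes, scholarly findings can be compromised "
                     "by both human error and careless citation.")

        run = longest_shared_run(paragraph, source)   # 10 contiguous words in this example
        if run >= 12:
            verdict = "word-for-word copying of a long passage"
        elif run >= 7:
            verdict = "suggests copying; deserves a closer look"
        else:
            verdict = "ordinary overlap of standard phrasing"
        print(run, verdict)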

    Ultimately, decisions about plagiarism depend on the distinction between negligence and gross negligence. The former implies sloppiness, while the latter represents actual misconduct. For those who spend their free time doing it, hunting for plagiarism may have a game-like quality that pushes volunteers toward judgments that increase the number of hits, without regard to the distinction between negligence and gross negligence.

    Harm

    Certainly plagiarism is an ethical and copyright problem, but its actual long-term harm to modern scholarship may be modest. The real harm is to the personal integrity of the person doing the plagiarism. Integrity matters, and instances of plagiarism certainly need to be caught, but the current focus on hunting plagiarism may actually be a distraction from the more important task of identifying problems with falsified or manipulated data. False data undermine the foundations of scholarship (especially the natural sciences) in ways that plagiarism does not.

    The popularity of plagiarism hunting grows in part from tools that make it easy to compare texts word for word. Some British universities distinguish between actual plagiarism and the appearance of plagiarism by requiring students to submit their own papers to a plagiarism checker like Turnitin. King’s College London even allows students to submit their works multiple times (see “Submitting Assessments Online“). While this is a measure to prevent plagiarism, it serves also as a recognition that inadvertent copying is common and does not necessarily involve fraudulent intent.

    Acknowledgement

    Ms. Melanie Rügenhagen (MA) assisted with the research.

    References

    COPE (Committee on Publication Ethics). 2018. “Plagiarism.” Available online.

    Lee, Chelsea. 2015. “When and How to Include Page Numbers in APA Style Citations.” American Psychological Association: APA Style Blog. Available online.

    Resnik, David B., and Zubin Master. 2013. “Policies and Initiatives Aimed at Addressing Research Misconduct in High-Income Countries.” PLoS Medicine 10 (3). Public Library of Science: e1001406. Available online.

    Seadle, Michael. 2016. Quantifying Research Integrity. Synthesis Lectures on Information Concepts, Retrieval, and Services. Morgan & Claypool. Available online.

    Zenthöfer, Jochen. 2018. “Wie Universitäten Auf Plagiate in Doktorarbeiten Reagieren: Auch Mit Diebstahl Kann Man Es Weit Bringen.” Frankfurter Allgemeine, April 18, 2018. Available online.


  • 2 May 2018 11:21 | Michael Seadle (Administrator)

    Justice is often slow. Articles with integrity problems can stay in print without any warning label for years. Chen et al. (2013) wrote:

    “We found that it takes about 2 years, on average, to retract an article and another 2 years to see a substantial decrease of citations to the retracted article.”

    Two years may even underestimate the time to retraction, since an accusation often triggers formal investigations at universities and journals before either institution is ready to take action. As soon as an accusation becomes public, the press typically pushes for swift action, and university authorities typically want to make the problem go away, without much concern for the presumption of innocence that is part of democratic justice systems. One of the constant themes of this column is that integrity problems are sometimes more complex than the accusations imply. Nonetheless two years is a long time, during which ideas can easily become established.

    From a journal perspective, the commercial value of an article declines sharply two years after publication, though value over time varies greatly with the field: humanities articles generally have a longer half-life than articles in the natural sciences or medicine. Most researchers in most fields will have read an article before two years are up, if it is at all relevant to their work. This means that an article that a publisher has retracted after two years has already exhausted a significant part of its commercial value and is intellectually present in the minds of the scholarly community. Two years more for a decrease in citations is hardly surprising, since scholars who read a paper are unlikely to go back to read it again. Likely they have a digital copy or a paper copy and work from that for their own new article.

    Authors may also ignore a retraction for a variety of reasons that may depend on the reason for the retraction. As Madlock-Brown and Eichmann (2015) wrote:

    There are many reasons articles may be retracted, some more problematic than others.

    A work that was retracted for plagiarism, for example, may still contain worthwhile information, despite the ethical and copyright violations. Readers may also discount retractions for procedural or peer review issues. Self-citation plays a role too.

    “18% of authors self-cite retracted work post retraction with only 10% of those authors also citing the retraction notice.” (Madlock-Brown & Eichmann, 2015)

    What exactly authors are citing from their own retracted paper may matter. It is not quite fair to assume that everything in a paper is contaminated because of a retraction. The degree to which an integrity violation in one part of a paper affects others may depend on the field. A humanities paper may, for example, draw multiple conclusions, only one of which the retraction affects. The assumption that everything in a retracted paper is flawed is part of the black-or-white thinking that currently pervades the integrity literature.

    The interesting question is whether the flawed portions of a retracted work, especially faked or manipulated data, continue in the minds of scholars after the integrity violation is discovered and established beyond reasonable doubt. Greitemeyer (2014) writes:

    … numerous studies have shown that corrections do not work as intended, in that individuals are influenced in their later judgments by misinformation even after correction. For instance, Loftus (1979) found that after witnessing an event, exposure to misleading information makes a person often report something that was only suggested. This phenomenon has been labeled the misinformation effect…

    In some ways this is not surprising. If the original article made a clear and cogent argument that seemed on the face of it to be reasonable, a memory of and even a belief in the argument may persist.

    “Once a belief is formed, people generate explanations that fit the evidence. These explanations continue to imply that the belief is correct even after exposure to evidence that invalidates the evidence once used to support one’s belief.” (Greitemeyer, 2014)

    An interesting example can be found in the retracted study by Diederik Stapel, in which travelers were asked to choose a seat next to a Dutch-African or a Dutch-Caucasian person (Stapel & Lindenberg, 2011). The data may have been fake, but the conclusion felt so plausible that it remained in the minds of many. Indeed, this reference to a retracted work is an example of why such citations may take place.

    The good news is that researchers who are accused and exonerated may not suffer long-term damage to their reputation. Greitemeyer and Sagioglou (2015) write:

    The present research suggests that people do abandon their attitude toward an accused researcher after learning that the researcher has been exonerated. In both studies, participants in the exoneration condition had a more favorable attitude toward the researcher than participants in the uncorrected accusation condition. Moreover, in the exoneration condition, participants’ post-exoneration attitude was more favorable than their pre-exoneration attitude.

    This should be a comforting thought to those who are exonerated, but those cases seem to be rare. Interestingly enough Greitemeyer and Sagioglou (2015) begin with the example discussed in last week’s column, and note: “…it is important to keep in mind that the LOWI concluded that it cannot be determined whether Förster had manipulated the data.” Thus far he has not been exonerated and may well have given up hope. For others it may offer a grain of comfort after a time of stress.

    Acknowledgement

    Ms. Vera Hillebrand (MA) suggested the topic and the title. She also provided most of the references.

    References

    Chen, Chaomei, Zhigang Hu, Jared Milbank, and Timothy Schultz. 2013. “A Visual Analytic Study of Retracted Articles in Scientific Literature.” Journal of the American Society for Information Science and Technology 64 (2): 234–53. Available online.

    Greitemeyer, Tobias. 2014. “Article Retracted, but the Message Lives On.” Psychonomic Bulletin & Review 21 (2): 557–561. Available online.

    Greitemeyer, Tobias, and Christina Sagioglou. 2015. “Does Exonerating an Accused Researcher Restore the Researcher’s Credibility?” PLoS ONE 10 (5). Available online.

    Madlock-Brown, C. R., and D. Eichmann. 2015. “The (Lack of) Impact of Retraction on Citation Networks.” Science and Engineering Ethics 21 (127). Available online.

    Stapel, Diederik A, and Siegwart Lindenberg. 2011. “Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination.” Science 332 (6026): 251–253. Available online.

  • 1 May 2018 13:46 | Thorsten Beck (Administrator)

    This video introduces the image manipulation research carried out at the HEADT Centre. It discusses the relevance of understanding and analyzing images with existing software tools such as Photoshop, ImageJ or GIMP, and explains the importance of establishing a comprehensive database that may help teams around the globe develop and train algorithms for image manipulation detection. The overall aim is to raise awareness and to make detection tools more efficient. The centre thus aims to make a significant contribution to a more thorough understanding of the phenomenon of image manipulation and to sharpen the view of which kinds of manipulation require a closer look.
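
    As a very simple illustration of the kind of comparison such tools automate, the sketch below computes a pixel-wise difference map between two versions of an image; regions that light up in the map are candidates for closer inspection. The file names are hypothetical, the script assumes both versions have the same dimensions, and it requires the Pillow and NumPy libraries.

        # Minimal, illustrative sketch: compare two versions of the same figure pixel by pixel.
        import numpy as np
        from PIL import Image

        original = np.asarray(Image.open("figure_original.tif").convert("L"), dtype=np.int16)
        suspect = np.asarray(Image.open("figure_published.tif").convert("L"), dtype=np.int16)

        diff = np.abs(original - suspect).astype(np.uint8)  # absolute per-pixel difference
        changed = diff > 10                                 # threshold to ignore compression noise

        print(f"{changed.mean():.2%} of pixels differ noticeably")
        Image.fromarray(np.where(changed, 255, 0).astype(np.uint8)).save("difference_map.png")

    A difference map of this kind only flags candidate regions; deciding whether a change is legitimate processing or manipulation still requires the kind of expert judgement the database project aims to support.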
