

  • 20 Mar 2017 11:35 | Melanie Rügenhagen (Administrator)

    Prof. Dr. Michael Seadle led a panel discussion on “Examining Research Integrity” at the International Symposium of Information Science (ISI) 2017 in Berlin. Among the panelists were Dr. Thorsten Beck (HEADT Centre), Prof. Dr. Gerhard Dannemann (Humboldt-Universität zu Berlin), Prof. Dr. Wolfram Horstmann (Niedersächsische Staats- und Universitätsbibliothek Göttingen), and Prof. Dr. Debora Weber-Wulff (Hochschule für Technik und Wirtschaft Berlin).

    Photo - From left to right: Prof. Dr. Michael Seadle, Dr. Thorsten Beck, Prof. Dr. Gerhard Dannemann


  • 17 Mar 2017 13:06 | Thorsten Beck (Administrator)


    Watch our new infotorial on image manipulation and research integrity:



  • 2 Mar 2017 12:40 | Thorsten Beck (Administrator)

ImageJ is a public domain image-processing program developed under the auspices of the United States National Institutes of Health and especially popular among biologists. It was first published in 1997 and is designed to support a large range of image processing and analysis tasks and the detection of image manipulations in many different types of images, such as "three-dimensional live-cell imaging to radiological image processing, multiple imaging system data (and) comparisons to automated hematology systems." (Wikipedia: https://en.wikipedia.org/wiki/ImageJ)

One of its advantages is that ImageJ is able to read many different image file formats, such as TIFF, PNG, JPEG, BMP and many more. The program supports operations like edge detection, contrast manipulation, the measurement of distances and angles, sharpening, smoothing, rotation and many others (for more details see https://imagej.nih.gov/ij), and it allows the processing of image stacks.

Downloading and installing the program and the associated InspectJ plugin is fairly simple. Just follow the instructions on the ImageJ website or watch one of the many tutorials on YouTube, for example a series provided by the Zentrum für Molekulare Biologie at Universität Heidelberg:

All you then have to do is drag and drop an image for analysis onto the ImageJ toolbar to start the InspectJ plugin and follow the program's instructions. The program asks the user to decide whether an image has a dark or light background and then automatically produces an analysis image for each of the three RGB color channels (red, green, blue). The user is free to decide which of the layers should be analyzed – or whether all of the layers should be included in the analysis.

    Fig. 1: Analysis images for each of the three RGB color schemes (red, blue, green) are generated. In this example most of the information appears to be found in the red and green schemes (the analysis of the blue scheme does not generate any useful results).
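To make the per-channel step concrete, here is a minimal Python sketch (using Pillow and NumPy) that splits an image into its red, green and blue channels so that each one can be inspected separately. It only illustrates the principle – it is not InspectJ's code – and the file name is a placeholder.

```python
# Minimal sketch (not InspectJ itself): split an image into its RGB
# channels so each one can be analyzed separately, as the plugin does.
# Requires Pillow and NumPy; "gel.jpg" is a placeholder file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("gel.jpg").convert("RGB"))

for i, name in enumerate(["red", "green", "blue"]):
    channel = img[:, :, i]
    # Save each channel as a grayscale image for separate inspection.
    Image.fromarray(channel).save(f"analysis_{name}.png")
    print(name, "mean intensity:", channel.mean().round(1))
```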

     

Once a layer is chosen for analysis, the program runs a set of predefined actions and plays a video sequence that may reveal some of the otherwise hidden features of the image (for example, edges).
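For readers who want to see what such an edge-revealing step amounts to, here is a short Python sketch of a Sobel edge filter written with NumPy. It only illustrates the kind of enhancement involved; it is not the InspectJ processing pipeline, and the file names are placeholders.

```python
# Simple edge-detection sketch (a Sobel filter in NumPy) to show the
# kind of "hidden feature" enhancement such processing steps perform.
# This is not the InspectJ code; "gel_red.png" is a placeholder name.
import numpy as np
from PIL import Image

gray = np.asarray(Image.open("gel_red.png").convert("L")).astype(float)

kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
ky = kx.T

def apply_filter(img, kernel):
    # Apply a 3x3 filter over the valid region, without extra dependencies.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

magnitude = np.hypot(apply_filter(gray, kx), apply_filter(gray, ky))
edges = (magnitude / magnitude.max() * 255).astype(np.uint8)
Image.fromarray(edges).save("edges.png")
```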

     

    Example: Gel Electrophoresis Analysis

    For this evaluation, I chose an electrophoresis image that I retrieved from Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Gel_electrophoresis_2.jpg).

    This is the original (still unaltered) image:

Fig. 2: Gel electrophoresis: 6 “DNA tracks”. In the first lane (left), DNA with known fragment sizes was used as a reference. Different bands indicate different fragment sizes (the smaller the fragment, the faster it travels and the lower it sits in the image); different intensities indicate different concentrations (the brighter the band, the more DNA). DNA was made visible using ethidium bromide and ultraviolet light. Author: Mnolf, Innsbruck, Austria, 2006


    Creative Commons Attribution-ShareAlike 3.0 Unported https://creativecommons.org/licenses/by-sa/3.0/


    For the purpose of testing I decided to copy and paste some areas of the bands and to erase some unwanted areas:

    Fig. 3: Manipulated electrophoresis image.


    Figure 4 specifies the details of the manipulations:

    Fig. 4: Copied and pasted areas (rectangular shapes) and erased areas (oval shapes).


    Here is a short video in which the subsequent steps of the analysis procedure are shown:


    This is what the result of the analysis looks like once it is completed:

Fig. 5: Result of analysis (red color scheme): The ImageJ program with the InspectJ plugin produces an analysis image with highlighted ROIs (regions of interest), a log chart that presents the analysis results (in this case, for example, "identical areas") and a ROI manager that allows the user to inspect each result separately.


    Discussion of results (Red Color Scheme):

The software identifies regions of interest that are suspected of inappropriate image manipulation. The red color scheme analysis correctly reveals identical items at areas 1 and 4 (with a 2–5 % tolerance setting) – which are indeed the areas that were copied and moved – but it does not reveal the similarities between 1 and 3 or 3 and 4. Moreover, with a 10 % tolerance setting it indicates a similarity between areas 7 and 11, which was not one of the manipulations carried out.

    Fig. 6: Result Analysis: Red Scheme

    Fig. 7: Result Analysis: Green Scheme

Overview: Analysis results at different tolerance levels for the red and green color schemes:

     

    Red Color Scheme Analysis:

Tolerance Level | Identical Items
0               | none
2               | 1 & 4
5               | 1 & 4
10              | 1 & 4; 7 & 11

     

    Green Color Scheme Analysis:

Tolerance Level | Identical Items
0               | none
2               | 1 & 2; 1 & 3; 1 & 8; 2 & 3; 2 & 8; 3 & 8
5               | same as 2
10              | same as 2
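To make the role of the tolerance value concrete, here is a hypothetical Python sketch of the kind of check involved: two regions count as "identical" when their mean intensities differ by no more than the chosen percentage. This is only an illustration – the actual criteria used by InspectJ are not documented here – and the band names are made up to mirror the tables above.

```python
# Illustrative sketch of a tolerance check (not InspectJ's actual code):
# two regions of interest count as "identical" when their mean
# intensities differ by no more than the chosen tolerance in percent.
import numpy as np

def regions_match(region_a, region_b, tolerance_percent):
    mean_a, mean_b = region_a.mean(), region_b.mean()
    diff = abs(mean_a - mean_b) / max(mean_a, mean_b) * 100   # relative difference
    return diff <= tolerance_percent

rng = np.random.default_rng(0)
band_1 = rng.integers(90, 110, size=(20, 60))   # hypothetical ROI 1
band_4 = band_1.copy()                          # exact copy of ROI 1
band_7 = rng.integers(88, 98, size=(20, 60))    # unrelated but similar band

print(regions_match(band_1, band_4, 2))    # True: the copy is caught even at 2 %
print(regions_match(band_1, band_7, 2))    # False at a strict tolerance
print(regions_match(band_1, band_7, 10))   # True: a false positive at 10 %
```

A higher tolerance catches more copies but also produces more false positives – the same trade-off visible in the tables above.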

     

Here is an example of how identical areas in an image are summarized in the log file:

    Fig. 8: Log File for Red Scheme Analysis: Identical Items found at 1 & 4; 7 & 11.


    Discussion:

This evaluation showed how ImageJ helps identify identical regions within an image. The program found and indicated most of the copied regions, but some of the highlighted areas do not actually appear to be identical – false positives that might be due to inappropriate tolerance values.

It should be mentioned that the results of the green and red scheme analyses are not fully identical and that both have to be evaluated in order to reach a correct conclusion. Moreover, processing the JPEG image file with the InspectJ plugin indicated identical regions but did not identify the erased areas (the oval shapes, see Fig. 4). In one of the next blog posts, I am going to test ImageJ's options for detecting erased areas.

    More tools will be evaluated soon – please visit HEADT.EU for upcoming posts.

     

    © HEADT CENTRE 2017

  • 1 Mar 2017 11:25 | Melanie Rügenhagen (Administrator)

We set up a YouTube channel for the HEADT Centre where we make available the videos we produce to introduce our research and to share our research progress. We published our very first video yesterday: An Introduction to Research Integrity.



  • 24 Feb 2017 11:23 | Melanie Rügenhagen (Administrator)

    Our team working on Research Integrity at the HEADT Centre is currently producing short videos to introduce the core concepts that play a role in this field of research. Michael Seadle, Thorsten Beck, and I visited the recording studio at the Erwin-Schrödinger-Zentrum in Berlin-Adlershof, which belongs to Humboldt-Universität zu Berlin (HU). Both HU’s natural science library and the Computer and Media Service (CMS) are located there.


    Michael Seadle (left) and Melanie Rügenhagen (right) at the recording studio of HU's Erwin-Schrödinger-Zentrum.

The CMS has professional staff dedicated to assisting with the entire recording procedure – everything apart from providing the voice. They helped us a lot with getting the sound right, catching reading mistakes, and redoing passages that sounded unclear. We spent about half a day reading the scripts we had written, edited and proofread in advance. The first results should be ready soon, and we will publish the videos on our website along with a note on this blog.

  • 17 Feb 2017 15:25 | Thorsten Beck (Administrator)

The US Office of Research Integrity (ORI) has dealt with research integrity issues for well over two decades. The office was established in 1992 by consolidating the "Office of Scientific Integrity" and the "Office of Scientific Integrity Review." Today the Office of Research Integrity oversees many of the "Public Health Service (PHS) research integrity activities on behalf of the Secretary of Health and Human Services."

    For more information on the history and agenda of ORI, please visit: https://ori.hhs.gov/

    Among its many aims are the development of “policies, procedures and regulations related to the detection, investigation, and prevention of research misconduct and the responsible conduct of research.” Over the years the office developed a wide spectrum of activities and services, like interactive videos, case studies, web modules, and many other forms of outreach and tools.

One of the resources the office provides for the wider academic audience is the so-called "Forensic Tools" collection for image analysis and manipulation detection, which provides customized standard operations and pre-recorded routines for Adobe Photoshop. (https://ori.hhs.gov/forensic-tools)

     

    Forensic Droplets and Actions

ORI developed these tools for Photoshop CS2–CS3, which means that many of the routines cannot be used with later versions of Photoshop. The office offers droplets and actions: droplets are "small desktop applications (…) that automatically process image files that are dragged onto their icon," whereas an action "is the sequence of steps that was pre-recorded in Photoshop" and is started from inside the program.

The “Forensic Tools” package comprises routines for the enhancement of certain image qualities and features, such as droplets for brightening areas or adjusting gradients. Other droplets allow two images to be compared, or provide overlays for images with dark or light backgrounds. Basically, the aim is to enhance features and improve visibility, so that hidden details in an image (like obscured background information, artifacts or edges from copy-paste operations) may be revealed.
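As a rough analogue of what such a brightening droplet automates, here is a small Python sketch (Pillow) that applies a gamma adjustment to lift the dark areas of a grayscale image. It is not ORI's recorded Photoshop routine, only an illustration of the kind of enhancement involved; the file name is a placeholder.

```python
# Sketch of the kind of enhancement such a droplet automates (this is
# not ORI's recorded Photoshop routine): a gamma adjustment that lifts
# dark areas so obscured background detail becomes easier to inspect.
# "blot.jpg" is a placeholder file name.
from PIL import Image

img = Image.open("blot.jpg").convert("L")
gamma = 0.5   # values below 1 brighten the shadows
lut = [min(255, round((value / 255) ** gamma * 255)) for value in range(256)]
img.point(lut).save("brightened.png")
```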

Unlike the droplets, the actions for Photoshop are claimed to be compatible with later versions of the program. The office provides many operations – too many to deal with in detail in this blog post – so I am going to focus first on the "Advanced Gradient Map" and thereby demonstrate some of the underlying logic of working with such actions.

For testing purposes, I applied two different kinds of manipulation to an image of a wallpaper to demonstrate the capabilities and limitations of such standard operations.

Fig. 1: The original image: Wallpaper with decorative floral design. The photo was taken at Mirow Castle, Mecklenburg-Western Pomerania, with a Sony Alpha 6000 ILS camera.


    Manipulation 1: Copy-Move Forgery

    Fig. 2: Image detail of the original image without manipulation.


    Fig. 3: Details obscured through background cloning.


    Fig. 4: Copying and moving floral element with magic wand tool.


    Fig. 5: Resulting manipulated image with copied floral elements (highlighted).


    Manipulation 2: Cut & Paste Manipulation

    The second manipulation is less subtle and easier to spot even with the naked eye.

    Fig. 6: Copied and pasted area (highlighted).


    Fig. 7: Detail of copied and pasted area. The cutting edge is clearly visible even without further enhancements.


    Image Analysis with Forensic Photoshop Action: Advanced Gradient Map

Processing an image with the Advanced Gradient Map action simply requires dragging and dropping the image onto the action item. This automatically initiates a number of steps: first the color image is reduced to grayscale, then the user can manually adjust curves, and finally the action offers a set of color tables that may help to further enhance contrast.
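The same idea can be sketched outside of Photoshop. The following Python snippet (Pillow and NumPy) converts an image to grayscale and maps each intensity through a crude rainbow lookup table, so that small tonal differences show up as color jumps. It approximates the gradient-map principle only; it is not the ORI action itself, and the file names are placeholders.

```python
# Rough Python approximation of the idea behind a gradient map (not the
# ORI Photoshop action itself): convert to grayscale, then map each
# intensity through a color lookup table so small tonal differences
# become visible as color jumps. "wallpaper.jpg" is a placeholder name.
import numpy as np
from PIL import Image

gray = np.asarray(Image.open("wallpaper.jpg").convert("L"))

# Build a crude "rainbow" lookup table: 256 entries, one RGB triple each.
t = np.linspace(0.0, 1.0, 256)
lut = np.stack([
    np.clip(1.5 - np.abs(4 * t - 3), 0, 1),   # red rises toward the highlights
    np.clip(1.5 - np.abs(4 * t - 2), 0, 1),   # green peaks in the midtones
    np.clip(1.5 - np.abs(4 * t - 1), 0, 1),   # blue dominates the shadows
], axis=1)

mapped = (lut[gray] * 255).astype(np.uint8)
Image.fromarray(mapped).save("gradient_map.png")
```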

    Analysis: Manipulation 1

    Here are some screenshots I took while executing the action:

    Fig. 8: Adjusting curves.


    Fig. 9: Choosing a suitable color scheme for contrast enhancement


This is what the resulting image looks like:

    Fig. 10: Result of Analysis


    Result: The Advanced Gradient Map action allows for new perspectives when analyzing image content. Still, the action did not help to reveal the subtle changes that were carried out with the clone stamp and the magic wand – they simply remain invisible. Note: the result could have looked completely different with another color scheme. There is not just one option.

    Fig. 11: Detail from resulting image: No clear evidence of a manipulation.


    Analysis: Manipulation 2

Now let’s have a look at the second manipulated image, in which the pasted region appears to be easier to detect. Again, the analysis is carried out by applying the automated steps of the action, but with different curve adjustments and a different color scheme. This is what the resulting image looks like:

    Fig. 12: Advanced Gradient Map action: result with cut and paste manipulation

    Clearly in this case the Advanced Gradient Map supports the identification of the manipulation. The edges around the pasted area are enhanced through the “light rainbow” effect and thus clearly visible.


    Fig. 13: Detail from Advanced Gradient Map action


    Conclusion:

As this quick evaluation has shown, the ORI action palette opens up a wide range of opportunities for image manipulation detection – or, more precisely, for the visual examination of images and the visualization of hidden features. It should be mentioned, however, that ORI customized the action palette for analyzing specific kinds of images, especially Western blots and other images from the field of biomedicine. Nevertheless, the manipulations carried out on the wallpaper images above are in some ways comparable to manipulations of blot images: duplicating image areas or obscuring certain details is a serious concern in many biomedical research integrity cases.

Unlike the “Forensically” tools that we discussed earlier on this blog, Photoshop actions support the enhancement of certain features within an image but cannot be understood as a way of automatically detecting image manipulations. On the contrary, applying the actions requires a trial-and-error attitude, and the user has to invest time and effort to produce satisfying results. It seems there are manipulations that cannot easily be traced with the Advanced Gradient Map action, but which require other tools – tools that will be discussed on this blog soon.

     

    The HEADT Centre 2017

  • 24 Jan 2017 14:52 | Thorsten Beck (Administrator)

    The Clone Detection Tool

    Another tool available on the website “Forensically” (https://29a.ch/photo-forensics/#clone-detection) helps with the detection of clones. Like the Error Analysis Tool, which was discussed earlier on this blog, the Clone Detection Tool includes a set of levels that enable the user to identify manipulated areas in images.
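The site does not document its algorithm, but the basic idea behind copy-move (clone) detection can be sketched in a few lines of Python: cut the image into small blocks and flag any block whose pixel content occurs more than once. The sketch below is deliberately naive – it only finds exact duplicates, whereas real detectors such as the one on "Forensically" tolerate small differences – and the file name is a placeholder.

```python
# Minimal illustration of copy-move detection (not the algorithm behind
# "Forensically"): slide a window over a grayscale image, remember each
# block's pixel content, and report blocks that occur more than once.
# "boat_scene.jpg" is a placeholder file name.
from collections import defaultdict
import numpy as np
from PIL import Image

def find_duplicate_blocks(path, block=16, step=8):
    gray = np.asarray(Image.open(path).convert("L"))
    seen = defaultdict(list)
    for y in range(0, gray.shape[0] - block, step):
        for x in range(0, gray.shape[1] - block, step):
            key = gray[y:y + block, x:x + block].tobytes()
            seen[key].append((x, y))
    # Only blocks whose exact content appears at least twice are candidates.
    return [positions for positions in seen.values() if len(positions) > 1]

clones = find_duplicate_blocks("boat_scene.jpg")
print(len(clones), "duplicated block groups found")
```

Because this sketch only matches blocks byte for byte, it misses clones that were rescaled, rotated or re-saved with lossy compression – one reason real detectors work with more tolerant features and still produce the false positives and misses discussed below.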

Before discussing this tool, we need to have a closer look at another instrument – the “clone stamp” in Adobe Photoshop. This instrument makes it easy to significantly alter images in a way that the average viewer tends not to recognize. The clone stamp makes it possible to reproduce and to overwrite the “natural” texture of a certain area in an image. Detecting image manipulations in all kinds of images (e.g., artistic or scientific) requires a basic understanding not only of the tools designed for detection, but also of the tools and practices that made the manipulation possible. In other words, whoever wants to detect manipulations must be able to produce them.

    This is why I include three examples of how to work with the clone stamp:

    Erasing Artifacts with the Clone Stamp in Adobe Photoshop

    Example 1:

    Fig. 1: Blossom and Bee

    The idea of the first experiment is to make this picture a little more dramatic by erasing elements surrounding the blossom on the left and on the right.

    Example 2:

    Fig. 2: Schaffhausen Boat Scene


    The second image shows a boat near the Rheinfall in Schaffhausen. I decided to let the boat disappear completely using the clone tool.

    Example 3:

    Fig. 3: Outdoor Scene with Shadows

    The third image manipulation is a little more complicated, since it is not trivial to reproduce consistent color flows. The plan is to hide two of the shadows in the foreground (keeping only the shadow in the middle), and to erase the figure in the background.

    Before I present and discuss the final results of the manipulations, here are some screenshots that document how I altered the images.


    Overview of manipulations:

    1_Blossom and Bee

    Fig. 4: Erasing of background texture


    Fig. 5: Reproducing flower petals.


    2_Schaffhausen Boat Scene

Fig. 6: Cloning the water surface gives the impression that the boat is sinking.


    Fig. 7: Half of the boat vanished.


    3_Outdoor Scene

    Fig. 8: Image with replaced shadows in the foreground.

    Figs. 9-11: Cloning the background figure. Because the figure is located in front of different background textures (like the tree, the fence, the rocks and the path), it is crucial to copy information from each of these elements to produce a somewhat natural impression.


    THE CLONE DETECTION TOOL

    Now, here are the three images after the manipulations:

    Fig. 12: The image has a little less background texture, and duplicating the petals highlights the blossom.


Fig. 13: The boat is gone and the manipulation is not easy to trace.


Fig. 14: A closer look makes it possible to see the manipulations in the foreground. The lighting on the right appears unnatural and there are visible traces of the clone tool. The erasure of the figure in the background is harder to trace.

    Let us now see whether the Clone Detection Analysis Tool can reveal these manipulations. For the purpose of comparison, I analyzed both the cloned and the original image to give an impression of how the results differ.


    Example 1: Blossom and Bee

Fig. 15: It turns out that the clone detection tool catches most of the manipulations. The tool highlights the duplicated/cloned flower petals and reveals some other similarities in background structures (not all of which represent manipulated areas). It does not, however, highlight any of the rather drastic background manipulations (I erased parts of the background on the left and enlarged the dark area on the right – best compared with the original below).


    Here is how the Clone Detection Tool analyzes the unaltered version of the image:

Fig. 16: Although the tool traces some similarities, the difference between the original and the manipulated image remains clearly visible. Overall, the tool helps reveal many clones – except for those in the background texture.


    Example 2: Schaffhausen Boat Scene

Fig. 17: For the second image, the analysis highlights certain areas, but since the texture of the water surface is naturally repetitive, it is not easy to tell the manipulated parts from the untouched ones.


Fig. 18: The comparison is useful in this case because it shows that the manipulated areas are highlighted differently from other highlighted areas. One possible conclusion: whenever the detection tool highlights a section strongly, this may be interpreted as evidence of cloned areas.


    Example 3: Outdoor Scene with Shadows

    Fig. 19: The last example shows some of the tool’s limitations. The tool introduces a paradox: it highlights areas in the image that have not been touched, and it does not identify areas that the human eye can easily identify as suspicious.



    Fig. 20: Most of the textures and elements in the image are highly repetitive, which may be a reason why the algorithm does not detect the cloned areas. What is obvious: the clone detection tool does not work well in this case.

    Summary:

     

    The help section of the “Forensically” website says: “The clone detector highlights copied regions within an image. These can be a good indicator that a picture has been manipulated.” (https://29a.ch/photo-forensics/#help) As the above experiments have shown, there are clear limitations to the capacities of the tool. Yet none of the tools on “Forensically” claim to reveal all manipulations under all circumstances. They rather promise to make it easier to identify where to look closer.

All in all, out of the three examples discussed in this blog post, there is only one in which the tool clearly highlighted repetitive areas (Example 1), another in which it marked areas within the image as suspicious (Example 2), and one in which the algorithm remained oblivious, if not misleading (Example 3).

     

    More tools will be evaluated soon – visit headt.eu for upcoming posts.

    HEADT CENTRE 2017


  • 13 Jan 2017 14:33 | Michael Seadle (Administrator)

    Institutions typically treat research integrity violations as black and white, right or wrong. The result is that the wide range of grayscale nuances that separate accident, carelessness, and bad practice from deliberate fraud and malpractice often get lost. This lecture looks at how to quantify the grayscale range in three kinds of research integrity violations: plagiarism, data falsification, and image manipulation.

    Quantification works best with plagiarism, because the essential one-to-one matching algorithms are well known and established tools for detecting when matches exist. Questions remain, however, of how many matching words of what kind in what location in which discipline constitute reasonable suspicion of fraudulent intent. Different disciplines take different perspectives on quantity and location. Quantification is harder with data falsification, because the original data are often not available, and because experimental replication remains surprisingly difficult. The same is true with image manipulation, where tools exist for detecting certain kinds of manipulations, but where the tools are also easily defeated.
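To give a concrete sense of what such one-to-one matching looks like, here is a toy Python example that counts shared word 5-grams between two passages. It illustrates the mechanism only; it is not the method used by any particular detection tool, and the thresholds that would turn an overlap count into "reasonable suspicion" are exactly the open question the lecture addresses.

```python
# Toy illustration of one-to-one matching for plagiarism screening
# (not any specific tool's method): count shared word 5-grams between
# two texts; a high overlap is a signal worth inspecting, not proof.
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_ngrams(text_a, text_b, n=5):
    return ngrams(text_a, n) & ngrams(text_b, n)

a = "research integrity violations are rarely black and white in practice"
b = "in many cases research integrity violations are rarely black and white"
print(len(shared_ngrams(a, b)))   # number of matching 5-grams
```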

This lecture looks at how to prevent violations of research integrity from a pragmatic viewpoint, and at what steps institutions and publishers can take to discourage problems beyond the usual ethical admonitions. There are no simple answers, but two measures can help: the systematic use of detection tools and requiring original data and images. These alone do not suffice, but they represent a start.

    The scholarly community needs a better awareness of the complexity of research integrity decisions. Only an open and wide-spread international discussion can bring about a consensus on where the boundary lines are and when grayscale problems shade into black. One goal of this work is to move that discussion forward.


  • 12 Jan 2017 14:06 | Thorsten Beck (Administrator)

Over the last decade, inappropriate image manipulation has become a serious concern in a variety of sectors of society, such as news, politics and entertainment. Digital image editing programs are now very powerful and constantly change how we produce and understand images. In academia, images play a very important role, and due to a number of fraud incidents, image manipulation has gained more and more attention.

Given the number of scientific papers that contain problematic images (without necessarily representing fraudulent intent) and the fact that many retractions happen due to the inappropriate use of images, there is definitely a need to take effective measures against inappropriate manipulations. But this is far easier said than done, since it is often not trivial to

a) tell what exactly makes a manipulation inappropriate, and
b) detect the manipulations that were carried out.

This is the first of a series of blog posts that deal with the simple question of how image manipulations can be detected. The field of research concerned with the detection of image manipulation is called ‘image forensics’. Forensic experts analyze whether there is evidence that makes an image suspicious, and they gather all the clues that can be found in order to make informed judgments about the appropriateness of the image. This can include aspects like compression, metadata or lighting, and it can be done through mere observation or by applying suitable algorithms. Forensic analysis plays a practical role for insurance companies, in crime investigations, and in every field in which images possess evidential value.

There are a number of free online resources available on the web that promise to support image analysis. Collections of forensic tools are available at https://29a.ch/photo-forensics ; http://fotoforensics.com/ ; or at http://www.getghiro.org/ to name only a few.

    THE ERROR LEVEL ANALYSIS TOOL

One tool that all of these collections include is “Error Level Analysis” (hereinafter: ELA). Jonas Wagner, the developer of “Forensically”, explains the tool on his website as follows:

“This tool compares the original image to a recompressed version. This can make manipulated regions stand out in various ways. For example they can be darker or brighter than similar regions which have not been manipulated.” (https://29a.ch/photo-forensics/#help)

The tool is designed to identify those areas within an image that are on a different compression level. When manipulations have been carried out on a JPEG image (e.g., elements added or removed), the ELA tool is expected to identify and mark the manipulated regions, since resaving the image puts the original content and the added elements on different compression levels.
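The principle is simple enough to sketch in a few lines of Python with Pillow: resave the image at a known JPEG quality, subtract the resaved version from the original, and amplify the difference. This is an illustration of the ELA idea, not the implementation behind "Forensically"; the file name, quality and scaling factor are arbitrary choices.

```python
# Minimal error level analysis sketch (not the "Forensically" code):
# resave the JPEG once at a known quality, subtract the result from the
# original, and amplify the difference so regions on a different
# compression level stand out. "snapshot.jpg" is a placeholder name.
import io
from PIL import Image, ImageChops

original = Image.open("snapshot.jpg").convert("RGB")

buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=90)   # one controlled resave
buffer.seek(0)
resaved = Image.open(buffer)

diff = ImageChops.difference(original, resaved)
# The raw differences are tiny, so scale them up to make them visible.
ela = diff.point(lambda value: min(255, value * 20))
ela.save("ela_result.png")
```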

    Let us now see how this works out in practice:

    Below you see the unaltered (but downsized) version of a random snapshot I took last summer near the river Elbe in Saxony, Germany, with a Sony Alpha 6000 Camera:


    This is the ELA analysis result of the digital image using the free online resource “Forensically”:

The image appears consistently dark, with only a few regions standing out slightly because of the original lighting. The edges of the objects appear a little lighter than the rest of the image and in some areas show a slight violet touch, while the sun appears as a uniform black stain. I then opened the original image in Adobe Photoshop and added and changed a number of features in the picture. For example, I included a PNG of the moon and duplicated it on the upper left side of the image. Moreover, I inserted a swarm of birds, removed some distracting stains from the glass in the foreground with the Photoshop eraser tool and, last but not least, copied the flower from the milk can and duplicated it (see images below).

    List of manipulations:

    1_Added and duplicated moon on different saturation levels:

    2_Added birds:

3_Removed stains (you can clearly see the round marks of the eraser tool):

    4_Copied and pasted flowers on the milk carton

This is what the resulting image looks like:

Note: I intentionally did not alter any global settings, such as contrast or brightness across the entire image, since that could have affected the ELA results.

After carrying out these rather basic manipulations, I saved the image from Photoshop as a JPEG and uploaded it to “Forensically” for analysis with the ELA tool (the tool only accepts JPEG and TIFF images). This is what the result of the ELA analysis looks like:

    Here are some details:

Moon area: Clearly visible. It is noteworthy that the ELA tool highlights all five objects uniformly and does not reflect their different saturation levels.

    Added birds: The highlighting is clearly recognizable.

Flower area: Edges appear almost like the edges of unaltered objects in the photo. The highlighting of the copied and pasted objects is not clearly distinguishable from other structures in the image. In other words, if I had not known about the manipulation, I would not have recognized it.

Removed stains on the glass: The area appears almost identical to the unaltered version. Traces of the eraser tool are not recognizable in the ELA result.


    SUMMARY:

This short experiment showed some of the strengths and weaknesses of the ELA tool. The tool clearly identified elements that were introduced to the picture after a single resave. (Any further resave would decrease the quality of the JPEG and consequently influence the ELA result.) ELA did not reveal other manipulations, like the copying of the flower element or the removal of the stains on the glass, which definitely limits the usefulness of the tool. However, since ELA at least made it possible to identify some of the manipulations, it can be recommended as one possible tool to start with when analyzing images. Still, the user should be aware that regions which stand out do not necessarily imply manipulation. Jonas Wagner points out in the help section of “Forensically”: “The results of this tool can be misleading (…)”.

Another aspect that must be mentioned is that it takes a good bit of experience before you get useful results. The levels are not self-explanatory, and interpreting the ELA results definitely requires some visual training (as well as reading through tutorials). One interesting insight I gained from working with this tool is that whenever a JPEG is processed and resaved in Photoshop it acquires a characteristic “rainbow effect”, which can be observed more easily with the levels slightly altered, as in the example below (JPEG Quality 90, Error Scale 53, Magnifier Enhancement: None, Opacity 0.64).

The characteristic rainbow effect that reveals an image has been processed with Photoshop (or another Adobe product):

In sum, Error Level Analysis opens the user's mind to a more systematic evaluation of what is visible and what can be hidden in a picture. It helps reveal some of the hidden features, but it is definitely not a tool that can stand on its own or that produces all-inclusive forensic results for the uninitiated user.


    More tools will be evaluated soon – please visit HEADT.EU for upcoming posts.

    HEADT Centre 2017

  • 29 Nov 2016 14:27 | Michael Seadle (Administrator)

    Prof. Michael Seadle gave two lectures via Skype on 24 November 2016 about the research integrity work of the HEADT Centre to students in the Scientific Writing in English Course at the National Library of Technology/Czech Technical University in Prague.
