On January 31, 2025, BMC faculty members Shehryar Saharan and Nicholas Woolridge organized an AI information session for faculty and students. The session aimed to review the current state of AI, address ethical and practical concerns, and introduce helpful AI tools. The presentation, titled Artificial Intelligence in Scientific Visualization: A Catastropportunity, highlighted the dual potential of these technologies—catastrophic in some contexts and applications but also presenting opportunities.
Contributors, including second-year student Priya Modi, faculty members Dave Mazierski and Derek Ng, and sessional lecturers Man-San Ma and Alexander Young, showed a variety of their own experiments with and applications of generative AI in scientific visualization.
AI tools and techniques have gained public attention in recent years with the release of chatbots like ChatGPT and Claude, and image generation tools like DALL-E and Stable Diffusion. These tools leverage advancements in AI techniques, such as neural networks, and the increasing power of specialized computer hardware.
ChatGPT and Claude are large language models (LLMs) trained on vast corpora of written information. Image generation models, trained on billions of images, often produce uncannily artistic-looking images. The availability of these technologies raises concerns about their impact on artists, writers, and other workers potentially displaced by their outputs.
Medical illustrators are particularly concerned about whether these tools pose a threat to their profession. The workshop addressed some of these ethical concerns:
Intellectual property: Many large datasets are sourced from human creators without consent or compensation.
Privacy: Personal data can be inadvertently included in the training data due to the insatiable demand for data.
Accountability: Most AI systems lack transparency about the source of their models or training data, making it challenging to assess ethics or assign responsibility for issues.
Environmental impact: Training and running AI models are highly energy- and water-intensive.
Human flourishing, equity, and human exploitation are all concerns that arise when considering the impact of AI on society. For instance, AI models can inadvertently reflect biases present in the culture from which they are developed, leading to potential discrimination and unfair treatment. Additionally, much of the work of reviewing and validating AI outputs is performed by workers in developing countries, where these positions may involve low wages, unfair conditions, or exploitation.
In the context of medical illustration, there are also practical concerns associated with the use of AI tools, particularly image generation. One significant challenge is accuracy: AI systems can generate inaccurate or misleading images, which could lead to errors or poor outcomes in medical applications. Another concern is novelty. While our field often involves depicting new biological phenomena, procedures, and devices, AI, which essentially remixes and regurgitates what it was trained on, currently struggles to produce truly novel creations.
Furthermore, organizations that rely on inaccurate AI-derived materials may face legal liability for errors or adverse consequences. Additionally, copyright issues arise with AI-generated images, as they are currently not protected by copyright law. This means that clients may have to consider the possibility that their AI-generated images could be reused or altered by others.
There are tools available that, by chance or design, avoid the ethical and practical concerns mentioned above. These tools are typically trained on ethically sourced datasets and are assistive rather than generative in nature. For example, one recent tool from Adobe can separate glass reflections from objects behind the glass in photographs. Another tool, integrated into the animation program Cascadeur, can infer skeletal motion from video sources, which is particularly beneficial for 3D animation of human characters. Both of these tools were developed ethically and have the potential to accomplish tasks that were previously impossible or extremely challenging.
Workshop co-organizers Saharan and Woolridge hope to offer another session in Summer 2025.