Can we trust our visual perceptions?
For more than four decades, Fred Ritchin has contemplated the future of photography. In 1982, while serving as a picture editor at the New York Times Magazine, he began to notice changes in the medium; in 1984, he wrote an article for the magazine titled “Photography’s New Bag of Tricks,” examining what digital editing technology would mean for photojournalism. In the years since, he has watched the medium evolve from the earliest digital photo editing to AI-generated imagery, in which amateurs and professionals alike can quickly create realistic visuals with everyday digital tools.
As AI-generated images become more prevalent, Ritchin believes it is vital that people find new ways to verify the reliability of what they see. AI imagery, of course, did not appear out of nowhere. Ritchin connects current debates about AI best practices to earlier arguments over whether journalists should disclose when photos have been altered. In the early days of digital editing, National Geographic drew criticism for digitally nudging the Pyramids at Giza closer together on its February 1982 cover. Today, National Geographic photographers must shoot in RAW format, a setting that preserves unprocessed, uncompressed image data, and the magazine enforces strict rules against photo manipulation.
Ritchin argues that editors, publishers, and photojournalists must meet the challenges posed by AI by establishing clear standards. Media organizations and camera manufacturers, for their part, are beginning to offer options for automatically embedding metadata and cryptographic watermarks in images, indicating when a photo was taken and whether it has been altered by digital editing or AI. Ritchin does not advocate rejecting AI outright; rather, he hopes to restore the singular influence that photography once held in both our personal lives and our politics.
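As a rough illustration of the provenance schemes described above, the Python sketch below shows the core idea: the camera signs a digest of the image file at capture, and any later edit breaks verification. It is a simplified sketch, not any manufacturer's actual system; real efforts such as the C2PA Content Credentials standard use public-key signatures and signed metadata manifests, and the key and image bytes here are invented placeholders.

```python
import hashlib
import hmac

# Simplified illustration only: an HMAC with a shared key stands in for the
# public-key signatures that real provenance systems use.
CAMERA_KEY = b"demo-key-held-by-the-camera-maker"  # hypothetical key


def sign_at_capture(image_bytes: bytes) -> str:
    """Return a signature the camera would embed alongside the photo."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()


def verify_later(image_bytes: bytes, embedded_signature: str) -> bool:
    """Recompute the signature; any change to the file breaks the match."""
    expected = sign_at_capture(image_bytes)
    return hmac.compare_digest(expected, embedded_signature)


original = b"...raw image data straight from the sensor..."  # placeholder bytes
signature = sign_at_capture(original)

print(verify_later(original, signature))             # True: untouched since capture
print(verify_later(original + b"edit", signature))   # False: altered after capture
```

The point of such schemes is not to forbid editing but to make any alteration detectable, so a viewer can know whether an image still matches what the camera recorded.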
Take, for example, Nick Ut’s 1972 photograph of a Vietnamese girl fleeing a napalm attack, captured during a time when a single image could captivate global attention. Ritchin mentions that “Gen. William Westmoreland attempted to claim it was a hibachi accident; President Richard Nixon sought to deny it.” Yet, that photograph “contributed to accelerating the end of the war, potentially saving many lives. That’s significant… But today, one might see that and think, Some 14-year-old in a garage could have made that; it’s unlikely to influence my vote.”
Must we accept that machines can make mistakes?
In one particularly ironic example, a recent study found that a widely used AI chatbot has been dispensing incorrect coding and programming advice. The finding highlights a major challenge facing AI today: these still-maturing systems are prone to hallucination, the term for when a model generates statements that sound plausible but are entirely fabricated.
Hallucinations happen because generative AI applications such as large language models are, at bottom, prediction machines. When you pose a question, the model does not look up an answer; it draws on patterns in the vast body of text it was trained on to predict a string of words likely to form an appropriate response. That prediction then feeds the next one, another string of words the model has learned to expect should follow, and so on.
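To make that prediction loop concrete, here is a deliberately tiny sketch in Python. A hand-written table of word-pair counts stands in for the neural network a real large language model would use; the words, counts, and function names are invented for illustration. What carries over is the mechanism: each chosen word becomes the context for predicting the next, with no step that checks whether the result is true.

```python
import random

# Toy "bigram" model: for each word, how often certain words followed it.
# A real LLM learns these statistics (over tokens, not words) from enormous
# text corpora instead of a hand-written dictionary.
bigram_counts = {
    "the":    {"cheese": 3, "pizza": 5},
    "cheese": {"slides": 4, "melts": 2},
    "pizza":  {"needs": 2, "is": 6},
    "is":     {"ready": 5, "cold": 1},
    "slides": {"off": 7},
    "needs":  {"glue": 1, "sauce": 9},  # an unlikely continuation can still be sampled
}


def next_word(current: str) -> str:
    """Pick the next word in proportion to how often it followed `current`."""
    options = bigram_counts.get(current)
    if not options:
        return "."
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]


def generate(prompt: str, length: int = 5) -> str:
    """Chain predictions: each chosen word becomes the context for the next."""
    words = [prompt]
    for _ in range(length):
        nxt = next_word(words[-1])
        if nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)


print(generate("the"))  # e.g. "the pizza is ready" -- fluent, but never fact-checked
```

Because the loop optimizes only for what is likely to come next, fluent nonsense and accurate statements are produced by exactly the same process.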
Rayid Ghani, a professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, explains that this approach prizes probability over veracity: most generative AI models are trained on vast amounts of data scraped from the internet, yet no one verifies that the data is accurate, and the model has no sense of what counts as a reliable source. That is why, for instance, Google’s AI once suggested adding glue to pizza to keep the cheese from sliding off, a recommendation that traced back to a long-running Reddit joke.
When humans err, Ghani notes, we can easily empathize, as we acknowledge that people are not infallible. Conversely, we expect machines to be accurate. We wouldn’t question a calculator, for example. This expectation makes it challenging for us to overlook errors made by AI. However, empathy can serve as an effective debugging mechanism: these systems are created by humans, after all. By taking the time to analyze not only AI’s processes but also the flawed human processes that influence the datasets it has been trained on, we can enhance the AI and, hopefully, reflect on societal and cultural biases and work to correct them.
How should we address the environmental consequences?
Artificial intelligence has a significant water problem, which is really an energy problem. The energy required to run the AI tools that people increasingly rely on in their personal and professional lives throws off a great deal of heat. That heat comes from data centers, the facilities that supply the computing power and storage these AI systems need to operate. Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside, points out that cooling those data centers demands enormous quantities of water, on the order of what tens of thousands of city residents consume.
“When you take a shower, for instance, the water can be reused,” explains Ren, whose research focuses on improving AI’s social and environmental accountability. “However, when water evaporates to cool a data center, it is permanently lost.” As lawmakers rush to write regulations and hold corporations accountable for their energy and water consumption, Ren argues that understanding the true cost of querying an application like ChatGPT is vital for individuals and society alike.
Even prior to the recent surge in AI, the water and energy demands of data centers had consistently escalated. In 2022, Google reported using over five billion gallons of water across its data centers, representing a 20 percent increase from 2021, while Microsoft saw a 34 percent rise in water usage across the company in 2022 compared to 2021.
AI is set to exacerbate the existing pressure that data centers place on global energy infrastructures: The International Energy Agency projects that by 2026, electricity consumption in data centers will be twice what it was in 2022. While the United States has only recently begun to assess the environmental impacts of data centers, the European Union’s energy commission advanced a regulation in March designed to enhance transparency among data center operators and ultimately diminish reliance on fossil fuels and wasteful resource use.
“I explain it in a way that my child can grasp,” Ren shares. “When you ask ChatGPT one question, it consumes an equivalent amount of energy to that used by a four-watt LED bulb in our house for an hour. Engaging in a dialogue with an AI like ChatGPT for 10 to 50 questions and answers will use roughly 500 milliliters of water, which is about the size of a standard water bottle.”
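Ren’s figures translate into a quick back-of-the-envelope calculation, sketched below in Python. The energy number follows directly from his comparison; the per-exchange water estimate is simply his 500-milliliter range divided out, not a figure he quotes himself.

```python
# Back-of-the-envelope conversion of the figures quoted above.
WATTS_LED = 4            # a four-watt LED bulb
HOURS = 1                # running for one hour
energy_per_question_wh = WATTS_LED * HOURS            # 4 watt-hours per question

WATER_ML = 500           # roughly one standard water bottle
exchanges_low, exchanges_high = 10, 50                # questions and answers per conversation
water_per_exchange_ml = (WATER_ML / exchanges_high,   # about 10 ml at the high end
                         WATER_ML / exchanges_low)    # about 50 ml at the low end

print(f"Energy per question: {energy_per_question_wh} Wh")
print(f"Water per exchange: {water_per_exchange_ml[0]:.0f}-{water_per_exchange_ml[1]:.0f} ml")
```

Small as those per-query numbers look, they are multiplied across billions of queries a day, which is why the aggregate demands on water and power described above add up so quickly.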
What steps can we take to prevent AI from worsening past issues?
According to Nyalleng Moorosi, a senior researcher at the Distributed AI Research Institute, as AI processes vast amounts of human-generated data, it reflects the stereotypes, racism, and disparities that still influence society. She points out that these biases often stem from a lack of diversity among those tasked with developing AI systems and tools, who rely excessively on datasets that favor Western notions of valuable information.
The global majority has experienced the imposition of foreign systems, a legacy of colonialism. Moorosi believes that AI might replicate these systems by prioritizing the viewpoints and interests of those in power while sidelining Indigenous knowledge and cultural values.
The teams employed by tech companies frequently have blind spots that they inadvertently build into their AI tools. Moorosi contends that the key to changing this trajectory is to democratize AI: incorporating the perspectives of people who communicate in hundreds of languages and conceptualize the world in ways that diverge from Eurocentric thought. That means shifting AI development away from large tech firms and empowering local developers and engineers to tailor tools to the specific needs and experiences of their communities. Such systems, Moorosi believes, would show greater respect for the backgrounds of their creators. The South Africa-based Lelapa AI, founded in 2022, recently introduced a language model that has become the foundation for a chatbot and other tools aimed at speakers of Swahili, Yoruba, Xhosa, Hausa, and Zulu.
“We must critically examine the issue of power. It’s unrealistic to expect Googlers or OpenAI employees to understand everyone. We cannot rely on Silicon Valley to represent all eight billion of us. The best approach is for each of us to create systems locally,” Moorosi asserts. “My vision for an AI utopia is one in which people have the access and courage to employ AI to tackle their own challenges.”