During COVID-19, many things happened, and surprisingly, most of them have evaporated from our minds. One evening, I was having a conversation with a group of friends, and as we reflected on those years, we realized how two years of our lives had simply slipped away. We had to think hard to recall what we did or how we spent that time; it almost felt as if those years had vanished from our memory. Even more daunting was that this happened during the prime of our careers. But one thing I could never forget, something that etched itself into my mind, was the phrase: “We are dealing with an invisible enemy in COVID-19.”
This phrase has stayed with me, especially as I’ve become more invested in the world of qualitative research and Artificial Intelligence. That interest began with a paper I wrote about what generative AI means for research after the launch of ChatGPT in November 2022. Now, before you jump to conclusions, hear me out. I know COVID-19 and AI are vastly different. COVID-19 was a dark chapter—a time of loss, despair, and uncertainty—a catastrophic event in modern history. I am not suggesting that AI is comparable to COVID-19. It is not.
However, as I delve deeper into the ethical concerns surrounding AI, especially its use in decision-making processes and research, I am reminded of some aspects of COVID-19. To be clear: I am not saying AI is our enemy. I am, however, relying on one specific word—“invisible.” It’s the unknowns, the uncertainty, the black box effect, the limitations of AI—and the ethical implications—that feel eerily reminiscent of our experience with COVID-19. The early phase of COVID-19 was daunting—panic ensued, misinformation spread like wildfire, and everything hinged on the crisis management policies in place.
When it comes to AI and ethical concerns, I see a lot of similarities in terms of uncertainty and the need for proactive management. AI, even with its endless possibilities, comes with significant limitations. There is a lot of buzz around AI’s ethical issues and implications, along with plenty of assumptions and misinformation. For some, AI is a daunting proposition, a force that could potentially spiral out of control. As much as I believe in the potential of AI, I also believe that if we don’t ask the right questions, if we don’t address the ethical implications of AI on all fronts, and if we don’t establish appropriate data governance, regulations, and policies, it won’t take long for AI to become our invisible enemy—another disaster requiring crisis management. The consequences could be endless, as could the missed opportunities if we fail to act responsibly.
Anytime artificial intelligence is discussed, in whatever capacity or sector, there’s an elephant in the room: ethics and data privacy. Despite the significant strides made in AI, the conversation around its ethical use and privacy implications lingers like an awkward silence. It’s the uncomfortable topic that, if left unaddressed, threatens to overshadow every breakthrough, no matter how promising. And yet, tackling it feels like a task most want to sidestep—a hot potato no one wants to hold for too long.
Let’s face it: AI companies are not exactly eager to shine a spotlight on ethics and privacy. Why? Because the questions are tough, and the answers are not always flattering or easy to implement. Acknowledging issues around research integrity, privacy, and data governance means opening Pandora’s box—a box that, once opened, demands accountability, significant investment, operational changes, and a level of transparency that can make investors shiver. When profits are the focus, ethical dilemmas can feel like a looming liability.
On the other hand, the average consumer—even the concerned, tech-savvy one—is stuck at AI Ethics 101. The questions that usually arise are “Is my data safe?” and “Will AI replace my job?” These are valid concerns, but they only scratch the surface of what we should really be asking. If we want to make progress, we need to move beyond these simplistic questions and dive deeper into how AI is reshaping trust, privacy, and autonomy in our society.
So, how do we move beyond AI Ethics 101? It starts with asking better questions. Instead of “Is my data safe?”, we should be asking, “What specific measures are being taken to govern data usage responsibly?” and “How are biases being detected, addressed, and prevented in AI systems?” These questions demand depth, pushing companies and policymakers to offer transparent, thoughtful answers. It’s about shifting from a defensive posture to one of proactive responsibility.
Right now, AI presents challenges that go far beyond surface-level ethical debates. One of the critical issues is research integrity—are AI experiments being conducted responsibly and without harm? The collection and use of personal data also raise concerns that go beyond mere data breaches; the deeper issue is autonomy and informed consent in a world where data is the new currency. And then there’s bias. AI models, trained on data that reflects historical and societal prejudices, can perpetuate those biases, leading to unfair outcomes that reinforce inequality.
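To make the bias concern concrete, here is a minimal sketch of one of the simplest checks practitioners use: comparing a model’s positive-prediction rates across demographic groups, often called demographic parity. The loan-approval data and group labels below are hypothetical, and a real fairness audit would go far beyond this single metric.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups; 0.0 means all groups receive positives at equal rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (approve) or 0 (deny)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"Approval rate gap: {demographic_parity_gap(preds, groups):.2f}")
# Group A is approved 60% of the time, group B only 20%: a 0.40 gap
# that would prompt a closer look at the training data and the model.
```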
Thankfully, we’re not in a complete ethical void. Governments, regulatory bodies, and companies are working on frameworks for data governance and ethical AI use. Initiatives like the European Union’s GDPR have made significant strides in protecting data privacy, while organizations like OpenAI have pushed for greater transparency and responsible AI development. Yet, it’s still early days—many of these initiatives have yet to prove their effectiveness or gain the traction needed to create industry-wide change.
The truth is that AI is evolving so rapidly that we don’t even know what we don’t know. With a technology that changes this quickly, we have to stay on top of it: the policies of yesterday might not be relevant today, and ethical dilemmas we haven’t even thought of will arise as AI continues to advance. Navigating this uncertainty requires caution, adaptability, and humility. We need to acknowledge that we’re operating in an era full of unknowns—and that’s okay, as long as we remain committed to learning, adapting, and responding ethically.
Our goal shouldn’t be to eliminate AI Ethics 101 but to evolve beyond it—to truly dive into the complexities of what it means to build, use, and benefit from AI responsibly. It’s about creating systems that prioritize research integrity, data governance, and fairness, even if we don’t always have all the answers. What’s crucial is that we remain vigilant, proactive, and, above all, committed to the ethical use of AI, even when it’s uncomfortable, expensive, or uncertain. I am aware that not every question has an answer; in fact, many don’t yet. But that’s okay. I truly believe that asking the right questions is the first step toward finding solutions. Let’s ask the right questions, so we can embrace the endless possibilities of AI without finding ourselves crying over its endless consequences.