Centering Sustainability and Responsibility in Technology: A Conversation with Isha Dua

July 8, 2025

by Sydney Smith


Isha Dua is a senior solutions architect at Amazon Web Services, where she helps organizations build smarter, more responsible technology. With a background in cloud computing and generative AI, she focuses on developing systems that are not only innovative but also ethical and sustainable. In this conversation, she shares her career journey, reflects on the challenges of ethical innovation, and explains why sustainability and responsibility are vital for the future of technology.

Isha will share her expertise this fall as a guest speaker in the Responsible AI certificate course.

CAN YOU WALK US THROUGH YOUR CAREER JOURNEY?

ISHA: I grew up in Delhi, India. Inspired by my father’s work as a civil and environmental engineer, I developed an early passion for environmental stewardship. But back then, that wasn’t a widely available field in India. So, I pivoted to computer science and looked for ways to bring environmental sustainability and technology together.

I did my undergrad in India and then moved to the U.S. to complete a master’s in computer science at Oregon State University. Right out of school, I joined a company called CDK Global. As a DevOps engineer, I embraced the emerging wave of Docker and Kubernetes, immersing myself in cloud-native technologies just as they were reshaping the industry. This early exposure to transformative tools helped me build a robust foundation in cloud ecosystems, positioning me at the forefront of the containerization movement.

In 2018, I returned to India briefly, but I came back to the U.S. in 2019 and joined Amazon Web Services (AWS) as a solutions architect, which had been a dream job of mine back when I was at CDK.

Since then, I’ve worked with a wide range of AWS clients, designing and deploying cloud-native applications and generative AI models. I help customers grow by understanding their goals and challenges, and guide them in architecting resilient, scalable solutions. I’m especially passionate about ensuring this work is done in an environmentally sustainable and ethically responsible way.


WHEN DID YOU FIRST BECOME INTERESTED IN MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE?

ISHA: It was a gradual journey. There wasn’t a specific point where I said, “Now I want to do AI.” A big part of my job was working with different industries, some of which were very old. For example, when I worked with manufacturing customers, I dealt with a lot of legacy technologies that had been around for decades. These companies had problems that forced me to think outside the box.

Recognizing AI’s growing impact, I expanded my expertise from cloud to machine learning, driven by customers’ modernization needs. I leveraged Amazon’s internal resources, community knowledge, and hands-on experimentation to master this emerging technology, transforming industry challenges into opportunities for innovation.

Machine learning was fun and interesting. I didn’t come from that background, so it was all fresh learning on the job. It was exciting to see a model I built make an accurate prediction. And from there, my passion grew. Over the past year and a half, I’ve been knee-deep in generative AI.


WHICH ACCOMPLISHMENTS OR PROJECTS STAND OUT AS THE MOST IMPACTFUL IN YOUR CAREER?

ISHA: About two to three years ago, I joined two internal AWS technical field communities: one focused on machine learning and the other on sustainability. I’m a generalist solutions architect, covering everything from computing and storage to networking and databases. But these communities allowed me to dive deeper into other areas I was interested in.

An interesting project I worked on was with a U.S. telecommunications provider. We created a geospatial machine-learning algorithm using satellite data from NOAA and the UK Met Office to optimize cell tower placement. The model analyzed environmental factors like flood risk and deforestation to support long-term infrastructure planning.

The project was quite technically challenging because the data was messy and hard to access. But once we cleaned the data and got the model running, we were able to see real results and impact. The model contributed to the company’s strategic planning for long-term infrastructure, resilience, and sustainability. Not to mention, several other telecommunications providers ended up using our base model as well.
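To give a rough feel for the kind of geospatial site scoring such a model performs, here is a toy sketch in Python. It does not reflect the actual project’s data, features, or algorithm; every weight, threshold, and number below is a made-up placeholder.

```python
# Illustrative sketch only: a toy version of scoring candidate tower sites
# against environmental risk factors. The real project's data sources,
# features, and model are not public; everything here is hypothetical.
import numpy as np

def site_risk_score(flood_prob, deforestation_rate, elevation_m,
                    w_flood=0.5, w_deforest=0.3, w_elev=0.2):
    """Combine normalized environmental factors into a single risk score.
    Lower scores indicate better long-term placement candidates."""
    elev_risk = np.clip(1.0 - elevation_m / 500.0, 0.0, 1.0)  # low ground = riskier
    return w_flood * flood_prob + w_deforest * deforestation_rate + w_elev * elev_risk

# Candidate sites: (flood probability, deforestation rate, elevation in meters)
candidates = np.array([
    [0.10, 0.02, 250.0],
    [0.45, 0.10, 40.0],
    [0.05, 0.30, 120.0],
])
scores = [site_risk_score(*row) for row in candidates]
best = int(np.argmin(scores))
print(f"Best candidate site: #{best} (score={scores[best]:.3f})")
```

A production model would of course learn these weights from historical data rather than hard-coding them, but the core idea is the same: turn environmental layers into comparable risk scores per site.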

That project became my catalyst. It was the sweet spot between environmental engineering and computer science. Since then, I’ve been writing articles and blog posts on sustainable machine learning and the intersection of sustainability and technology. And I’m now working on content about how to make the generative AI lifecycle more sustainable.


WHAT TOPICS DO YOU WRITE ABOUT TO TEACH OTHERS ABOUT SUSTAINABILITY AND TECHNOLOGY?

ISHA: My content is very technical—I write for engineers, developers, and architects who are building models and systems. I want them to understand how to make more resource-efficient, cost-performant, and operationally balanced choices.

Consider cloud computing’s hidden inefficiency: organizations typically utilize only 50-60% of their provisioned resources, leading to substantial waste of both computing power and budget. That waste contributes directly to your carbon footprint. I want to teach people how to use their resources more effectively and efficiently.
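To make that waste concrete, here is a back-of-the-envelope calculation. All of the figures below are assumed placeholders, not AWS measurements; only the 50–60% utilization range comes from the conversation.

```python
# Back-of-the-envelope illustration of the utilization gap described above.
# All numbers are hypothetical placeholders, not measured AWS figures.
provisioned_vcpu_hours = 100_000        # what the org pays for each month
utilization = 0.55                      # midpoint of the 50-60% range above
watts_per_vcpu = 3.5                    # assumed average power draw per vCPU
grid_kg_co2_per_kwh = 0.4               # assumed grid carbon intensity

idle_vcpu_hours = provisioned_vcpu_hours * (1 - utilization)
wasted_kwh = idle_vcpu_hours * watts_per_vcpu / 1000
wasted_kg_co2 = wasted_kwh * grid_kg_co2_per_kwh
print(f"Idle vCPU-hours: {idle_vcpu_hours:,.0f}")
print(f"Wasted energy: {wasted_kwh:,.0f} kWh = {wasted_kg_co2:,.0f} kg CO2e")
```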

Additionally, I talk about topics like using ARM-based vs. x86 processors, auto-scaling efficiently, storing data in tiers, deleting unnecessary data, and choosing regions powered by renewable energy like wind or solar farms.
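As one concrete example of tiering and deletion, the sketch below uses boto3’s S3 lifecycle API to move aging objects to cheaper storage classes and expire them after a year. The bucket name, prefix, and day thresholds are hypothetical; adapt them to your own data and retention requirements.

```python
# A minimal sketch of two of the practices above (tiered storage and
# deleting unnecessary data), using boto3's S3 lifecycle API.
# The bucket name and prefix are placeholders; use your own.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                # Tiered storage: move cold objects to cheaper, lower-footprint tiers
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Delete unnecessary data: expire logs after a year
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

Combined with right-sized, ARM-based instances where workloads allow, a rule like this addresses several of those practices at once.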

And for generative AI, I encourage using smaller models that require fewer resources instead of defaulting to the biggest, most popular ones. Often, smaller models are just as effective—and more sustainable.
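A rough way to see why size matters: transformer inference costs on the order of 2 × parameters FLOPs per generated token, so compute (and therefore energy) scales roughly linearly with parameter count. The sketch below applies that rule of thumb; the model sizes and token volume are hypothetical.

```python
# Rule-of-thumb comparison: transformer inference costs roughly
# 2 * parameters FLOPs per generated token. Sizes are illustrative.
def inference_flops(params_billion: float, tokens: int) -> float:
    return 2 * params_billion * 1e9 * tokens

small, large = 8, 70   # e.g., an 8B model vs. a 70B model (hypothetical sizes)
tokens = 1_000_000     # tokens generated per day, say

ratio = inference_flops(large, tokens) / inference_flops(small, tokens)
print(f"The {large}B model costs ~{ratio:.1f}x the compute of the {small}B model")
```

If the smaller model meets the quality bar for the task, that difference compounds across every request served.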


HOW DO YOU APPROACH THE CHALLENGE OF BALANCING TECHNICAL INNOVATION WITH ETHICAL RESPONSIBILITY?

ISHA: It starts with embedding responsible practices into every stage of development. Whether you’re a developer, architect, or executive, you should be thinking about responsibility from the beginning.

When we work with generative AI customers—especially those building their own foundation models, similar to Anthropic’s Claude or OpenAI’s ChatGPT—we emphasize the importance of evaluating legal, ethical, and copyright risks. This is especially critical in industries like media and entertainment, where content ownership can get very complex. We make sure customers understand the terms of service, review the legal implications, and encourage them to be proactive in managing risks. Responsible AI practices help us guide that.

The pace of innovation is incredibly fast right now, but it shouldn’t come at the cost of societal values. For example, a language model should never differentiate based on gender, race, or other demographics. Bias—whether racial, gender-based, or otherwise—needs to be addressed from the very beginning.

That’s why we make it a point to include these considerations early in the technology development lifecycle. It’s not just about building applications—it’s about embedding ethical practices into the entire process. That’s what we’re aiming to do.


WHAT KNOWLEDGE AND SKILLS DO YOU THINK ARE ESSENTIAL TO HAVE WHEN NAVIGATING THIS AI AND TECH-DRIVEN WORLD?

ISHA: From an engineering perspective, you need to build a solid foundation. That means reading up on core concepts in AI and machine learning, understanding common architecture patterns, and learning what responsible tech really entails. But beyond just reading, you need to get hands-on—experiment with these models, play around with them, and explore their features. That’s how you really begin to understand how they work.

For leaders, I believe some level of technical fluency is also important. They have to strike a strategic balance between innovation and risk management. Strong public communication skills, thought leadership, and the ability to continuously learn and adapt are all crucial. Honestly, something new comes out every single week—it’s even hard for me to keep up sometimes!


WHAT CHALLENGES DO ORGANIZATIONS FACE WHEN TRYING TO IMPLEMENT AI EFFECTIVELY AND RESPONSIBLY?

ISHA: Generative and agentic AI are still relatively new. They’ve only been around for a few years, so one of the biggest challenges for leaders is justifying the return on investment. These models are expensive, and they require a significant amount of resources to train and maintain. We’re still in such an early stage that most organizations haven’t had enough time to properly articulate what ROI actually looks like. Leaders are struggling to show how these technologies are translating to business value.

There are also serious legal and ethical complexities. Organizations have to ensure transparency and authenticity in model outputs. At the same time, they need to constantly manage the underlying technology, which means they’re continuously upskilling their teams. 

Balancing performance, cost, and sustainability in this environment isn’t easy. And to complicate things further, there are still very few formal regulations around responsible AI. Companies are operating in a space without strong legal guardrails. So the question becomes: how do you build something today without risking legal or ethical issues in the next few years?


WHAT IS ONE TAKEAWAY YOU WANT STUDENTS TO REMEMBER FROM YOUR TALK IN THE RESPONSIBLE AI COURSE?

ISHA: I want to help them understand what responsible AI really means. It’s important for students to realize that the future of AI doesn’t just depend on how advanced the technology is. It depends on how principled its development is. You have to balance innovation with sustainability, societal impact, and ethics.

The systems we build should benefit all humans—not just certain demographics or regions. That’s what responsible AI is about. It’s not just about building powerful tools; it’s about building them thoughtfully and in a way that benefits everyone.