Generative AI seems to be seeping into every aspect of work, including the developer and customer experience. Is that a good thing?

According to a McKinsey survey, 71% of consumers expect companies to deliver personalized interactions, and 76% get frustrated when this doesn't happen.

With software and apps critical to many organizations' core business today, artificial intelligence (AI) can alleviate some of the pressure on developers to deliver, by automating routine tasks and providing next-level personalization for end users.

However, what are the limits of what AI can do? Does it have any negative impact on the developer and customer experience?

To find out more about the possibilities and limitations of AI for an effective developer experience and consumer journey, DigiconAsia sought insights from Sara Faatz, Director of Technology Community Relations, Progress Software.

How is AI serving developers and end-users in their user experience journey?

Sara Faatz: As end users grow increasingly sophisticated, the pressure is on developers to deliver advanced customization and engaging user interfaces. A 2022 survey by Insider Intelligence found that 88% of respondents ranked customer experience as equally important as a business's actual offerings, up from 80% in 2020.

With such great expectations, emerging technologies like AI can act as a force multiplier for developers. For instance, AI-powered code generation can help developers build applications, websites or platforms made for diverse users, faster. According to IDC, this has the potential to raise developer velocity enough to meet core business objectives for up to 70% of new digital solutions.
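To make this concrete, here is a minimal sketch of what AI-assisted code generation can look like in practice. It assumes the OpenAI Python client with an API key set in the environment; the prompt and model name are purely illustrative and not tied to any specific product.

```python
# A minimal sketch of AI-assisted code generation, assuming the OpenAI
# Python client and an OPENAI_API_KEY environment variable are set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate an accessible HTML sign-up form with labeled inputs "
    "for name and email, and ARIA attributes for screen readers."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

# The generated code still needs human review before it ships.
print(response.choices[0].message.content)
```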

AI can also significantly increase productivity by automating routine work, freeing developers to focus on strategic goals and saving time and resources.

For end users, AI stands to help developers understand their contexts, learn their preferences and meet their needs, empowering an accessibility-first mindset and people-centric user experiences.

What are some negative impacts and limitations of AI today?

Sara Faatz: For one, AI and machine learning are not replacements for people’s lived experiences or personal knowledge.

There is also the need to acknowledge that trust in AI and ML rests on confidence in the data that supports them. Even with the help of AI, developers must test thoroughly to identify their own biases. With generative AI, there is also a real risk of biases that perpetuate historical and social inequalities trickling into algorithms.
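As one illustration of the kind of testing this implies, the hypothetical sketch below compares a classifier's accuracy across demographic groups; all names and data are made up, and a real bias audit would use many more metrics than this one.

```python
# A minimal sketch of one bias test, assuming a trained binary
# classifier whose held-out predictions sit alongside a demographic
# "group" column. Data here is illustrative only.
import pandas as pd
from sklearn.metrics import accuracy_score

# In practice these rows would come from your model's held-out test set.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 0, 0],
})

# Compare accuracy group by group; a large gap is a signal to
# re-examine the training data and features for bias.
for group, subset in results.groupby("group"):
    print(group, accuracy_score(subset["label"], subset["prediction"]))
# Here group A scores 1.0 while group B scores 0.33, which is
# exactly the kind of disparity worth investigating before release.
```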

Biased AI can damage an organization's credibility, but these dangers can be mitigated with a policy-driven approach that is empathetic and values human connection. Leveraging AI to automate decision-making is certainly critical to competitiveness, but true resilience also hinges on companies being able to explain how those automated decisions interact with one another.

This transparency is key to understanding the underlying models and to expanding knowledge of the existing faults in AI and ML models, which can then be used to improve them further.


How is generative AI like ChatGPT changing the consumer, business, and IT landscape?

Sara Faatz: We are at an inflection point, where access to this technology is being democratized for end users. The likes of ChatGPT now give people interfaces to generated content that can be used in myriad ways, from text and images to programming code.

Not only that, this technology can also be trained to better understand the needs of individual users – which in itself heralds a major turning point for customization and personalization. For businesses, this can improve decision-making and boost customer retention. At the same time, generative AI like ChatGPT can help organizations develop new, more effective business models.

There is also the potential to optimize efficiency across the organization. Not only can resources be allocated better, but staff can also be guided toward the new skills required to ensure future success. At the same time, there will be new opportunities for professional roles involving machine learning, data processing and software engineering.

For the IT landscape, these new types of generative AI have the potential to profoundly reshape long-standing challenges and aid the development of solutions to some of the most complex problems we face today. They can also significantly accelerate AI adoption, even in organizations that lack the digital architecture for it. And as more organizations deploy the technology, the foundations are laid for further digital transformation at the macro level.

However, the key to truly getting the best out of generative AI is to understand how it can supplement what humans do. For instance, while machines can simulate empathy on some level, they cannot truly understand or feel emotions the way we do. So it is essential that decision-makers keep these limitations in mind and use these tools to support, instead of replace, human empathy and understanding.

What should organizations watch out for when adopting generative AI, to avoid issues of law and ethics?

Sara Faatz: As users experiment with generative AI systems, ethical issues must be top of mind. Large generative AI systems like ChatGPT have exhibited significant hidden capabilities, or overhangs: abilities that were not planned for during development and that even their developers were unaware of. The risk from unexpected usage must not be taken lightly and can have serious consequences without the right countermeasures.

As mentioned earlier, there is also a need to ensure biases are accounted for, as much of generative AI's training data is susceptible to harmful ideas that lurk beneath the surface. This also ties into how generative AI can use completely inaccurate rationales to make convincing but ultimately flawed and erroneous arguments. Equipping developers to spot these potential outcomes can limit their impact and frequency in AI models.

Organizations must also move swiftly to put policies in place that tackle data leakage when employees use generative AI. Entering sensitive information into such systems risks it being incorporated into their models and later becoming public.
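One practical complement to such a policy is a lightweight filter that screens prompts before they leave the organization. The sketch below is a hypothetical illustration, not a description of any vendor's tool: it redacts a few obvious sensitive patterns with regular expressions, whereas a production system would rely on a dedicated PII-detection service.

```python
# A minimal sketch of a pre-submission filter that redacts obvious
# sensitive patterns before text reaches an external generative AI
# service. The patterns are illustrative and deliberately simple.
import re

REDACTION_PATTERNS = {
    "EMAIL":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a placeholder tag."""
    for tag, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{tag} REDACTED]", text)
    return text

prompt = "Summarize this email from jane.doe@example.com about key sk-abcdef1234567890XYZ"
print(redact(prompt))
# -> "Summarize this email from [EMAIL REDACTED] about key [API_KEY REDACTED]"
```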