GPT’s Impact on the Software Development Lifecycle

In this article, we explore how Generative Pre-trained Transformers (GPT) could reshape transformation for organizations.

Contrary to what many believe, Generative Pre-trained Transformers (GPT) won’t take people’s roles in the workplace. The truth is that the humans using GPT will replace those who choose not to. A Cambrian explosion of Large Language Model (LLM) solutions has produced a capability overhang that organizations are only now starting to integrate into their operations.


The one thing all business strategists can agree on is that the ability to adapt to a changing environment is a critical predictor of long-term business viability. However, implementing change in an organization is hard, and harder still when there is a legacy of success to overcome. After achieving the internal and market alignment needed to agree that change is necessary, the organization must then face and overcome its own change inertia.


Usually there is only one limiting bottleneck in any system, so time spent addressing any area other than that bottleneck tends to be wasted. Digital transformations are designed to increase the rate of change that can flow through the system, but they often fail because of the difficulty of changing the way people behave.


In modern organizations, there are three ways to implement change:

  1. Configuration change – involves adapting a parameter that changes the outcome. For online retailers, for example, this might be implemented by adding a third-party product provider and listing their products on your website.
  2. Process change – happens when you change a process followed by humans, usually by adding or removing a process step. Imagine adding an approval step to your sales process whenever the discount a salesperson offers on an exceptionally large deal exceeds, say, 30%.
  3. Technology change – takes place when you write or add a new piece of software. If you are a bank, this could involve writing a custom piece of software to allow users to apply for a mortgage using their stocks and shares for the down payment from a single screen.
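To make the first category concrete, here is a minimal sketch (the provider names and configuration keys are illustrative, not from any real retailer) of why configuration change is the cheapest of the three: behaviour shifts by editing data, with no new logic written.

```python
# Illustrative retailer configuration: which product providers feed the
# website, and the discount level that triggers extra sales approval.
CONFIG = {
    "product_providers": ["in_house"],
    "discount_approval_threshold": 0.30,
}

def list_products(catalogues):
    """Merge the catalogues of every provider enabled in CONFIG."""
    products = []
    for provider in CONFIG["product_providers"]:
        products.extend(catalogues.get(provider, []))
    return products

catalogues = {"in_house": ["kettle"], "acme_partner": ["toaster"]}
print(list_products(catalogues))            # ['kettle']

# The "configuration change": enable a third-party provider. None of the
# code above changes, yet the partner's products now appear.
CONFIG["product_providers"].append("acme_partner")
print(list_products(catalogues))            # ['kettle', 'toaster']
```

The process and technology categories have no such shortcut, which is what the rest of this article turns to.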


Each method of enacting change comes with a different level of difficulty. Configuration change is usually the easiest to implement, followed by process change, and finally technology change. Technology change has always been hard-edged and sharp-cornered: your code is either running in a production environment or it’s not. Traditionally, code was handed off to the ops team to test and deploy, introducing context switching and lag between stages.


Qualitative impact on SDLC productivity from using ChatGPT


The fractional cost of replicating software is already quite low; by embedding LLMs into the SDLC, we also reduce the fractional cost of producing software. Tools like GitHub Copilot and ChatGPT make it significantly easier to change code, giving the engineers on your dev team what feels like a smart, eager junior developer who is on hand to help them. This capability comes with the added benefit of not having to create a Jira ticket or a Trello card, or listen to any groaning about changing specs.


It is hard to put a precise number on the productivity boost this kind of support affords without running a like-for-like test. At a minimum, many tedious tasks are greatly reduced, leaving behind the work that requires the creativity and ingenuity of a human developer.


To provide an example, our team needed to integrate with multiple APIs (Application Programming Interfaces) while using Copilot. The tool was able to write code that was 75% ready to be deployed, saving significant time the team would otherwise have spent reading inane API documentation.

Taking a closer look at the technology


Copilot is based on OpenAI’s Codex, a version of GPT-3 with a larger context window that is optimized for code. GPT-3 is the large language model that powers the much-discussed ChatGPT chatbot. Codex differs in that it was created as a code-completion tool, so it is steered with comment prompts rather than ChatGPT’s conversational turns. Our team was also able to get ChatGPT to write entire neural networks using PyTorch, TensorFlow, and scikit-learn.


At the end of this process, the code was almost 80% ready to execute, marking an impressive step forward. We had the chatbot fix minor issues at the prompt level, and then we asked it for code that could be used to train the network.
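The generated code itself is not reproduced here, but its shape is easy to convey. The sketch below is a reconstruction in plain NumPy rather than PyTorch, so it runs without any ML framework installed; the tiny architecture and toy XOR data are our own illustration of the kind of network-plus-training-loop output we received.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a standard sanity check for a small neural network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Forward pass; returns hidden activations and network output."""
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = float(((out - y) ** 2).mean())

# Plain full-batch gradient descent on mean squared error.
for _ in range(5000):
    h, out = forward(X)
    grad_out = (out - y) * out * (1 - out)        # dLoss/d(pre-output)
    grad_h = (grad_out @ W2.T) * h * (1 - h)      # backprop to hidden layer
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

_, out = forward(X)
final_loss = float(((out - y) ** 2).mean())
print(f"MSE: {initial_loss:.3f} -> {final_loss:.3f}")
```

The real output, of course, used framework layers and optimizers rather than hand-written backpropagation, which is precisely why it was 80% rather than 100% ready.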


The challenge with this approach is the reproducibility of the output, because the chatbot is a non-deterministic system. It is periodically retrained with new data, and if the underlying model changes, the same prompts can generate different outputs.
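There is no way to make a hosted chatbot fully deterministic, but drift can at least be made detectable. One mitigation (our own illustration, not an OpenAI feature) is to store a fingerprint of every generation request, covering the model identifier, sampling temperature, and prompt, next to the generated code: if the same fingerprint later yields different output, you know the underlying model moved rather than your prompt.

```python
import hashlib
import json

def generation_fingerprint(model: str, prompt: str, temperature: float = 0.0) -> str:
    """Stable fingerprint of a code-generation request.

    Stored alongside the generated artifact. If an identical fingerprint
    later produces different code, the prompt and parameters are
    unchanged, so the underlying model must have changed.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

fp = generation_fingerprint("gpt-3.5-turbo-0301", "Write a PyTorch training loop")
print(fp)  # identical inputs always yield the identical fingerprint
```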


We also used ChatGPT to avoid having to read tedious API documentation for a small project. It generated boilerplate code that was almost 100% ready to use. In the past, we would have had to trawl through Stack Overflow or the API documentation before reaching this point.


As a specific example, integrating the Google Analytics API would ordinarily require you to work through its reference documentation. This can be bypassed by simply asking ChatGPT to write the code for us.
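The screenshots of that exchange are not reproduced here, but the boilerplate follows a predictable shape. The sketch below is our reconstruction, assuming the GA4 Data API’s REST `runReport` endpoint; the request is only assembled, not sent, so no property ID or credentials are needed.

```python
import json

# GA4 Data API reporting endpoint; {pid} is the numeric property ID.
GA4_RUN_REPORT = "https://analyticsdata.googleapis.com/v1beta/properties/{pid}:runReport"

def build_report_request(metrics, dimensions, start_date, end_date):
    """Assemble the JSON body for a GA4 runReport call."""
    return {
        "dateRanges": [{"startDate": start_date, "endDate": end_date}],
        "metrics": [{"name": m} for m in metrics],
        "dimensions": [{"name": d} for d in dimensions],
    }

body = build_report_request(["sessions"], ["country"], "2023-01-01", "2023-01-31")
print(json.dumps(body, indent=2))
```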


And sure enough, it obliged with working boilerplate.


Taking this process a step further, let’s say we want to get page views by day.
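Again assuming the GA4 `runReport` request shape, the page-views-by-day variant only swaps the metric and dimension (a sketch, not the verbatim ChatGPT output):

```python
def pageviews_by_day(start_date, end_date):
    """runReport body for daily page views, ordered by date."""
    return {
        "dateRanges": [{"startDate": start_date, "endDate": end_date}],
        "metrics": [{"name": "screenPageViews"}],
        "dimensions": [{"name": "date"}],
        "orderBys": [{"dimension": {"dimensionName": "date"}}],
    }

print(pageviews_by_day("2023-01-01", "2023-01-07"))
```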


Installing the Co-pilot extension for IDEs is straightforward, and a guide can be found on GitHub.

Key design improvements ChatGPT offers


Design research


When designing a new system for customers or staff to accomplish their objectives, strategic designers conduct interviews to understand the journey each user currently undertakes. Before we start, we do desk research into the job role and the industry to gain an overall understanding of what the work and experience are like.


This usually takes the team a day or two, sometimes a week. But asking ChatGPT to simply tell us what steps a portfolio manager or ESG analyst undertakes can give us a response in a minute or two. In less than an hour of asking and reading, we can have a fairly structured skeleton of understanding with which to start shaping our research work.


Design prototyping


There are some things that designers are simply fed up with having to design again and again. Desk research is also involved at the outset of this process, usually including Google Images searches to see how a particular screen has been laid out before. A prime example would be a bank’s pay-a-bill screen. Typically, this would require up to an hour of harvesting sources, including your own bank’s screens, to ensure you are reusing familiar design patterns, and another hour or two spent composing a wireframe to share with the wider team.


When we asked ChatGPT to design a bank’s payment screen, a list of fields was available within a minute, and a minute later it had given us the HTML code for the page. Loaded into a browser, it was functional but unformatted, and despite its basic appearance it was interactive and effective. This effectiveness can be attributed to the AI having used the right HTML elements and attributes, including usable CSS classes.


The biggest benefit we expect from ChatGPT is that it will forever banish lorem ipsum from design artefacts. Previously, when there was no content team to provide first-draft text, producing it was another time-consuming task for the designer. Now, thanks to ChatGPT, we all have an assistant smart enough to be asked for “a home page for a retail bank that specializes in loans” and to return an excellent first draft of the content, and even an interface, in a minute or two. And if, once the tool is coded, we need to add a column with one more field to a table, ChatGPT could help a designer do that themselves in a test environment and use it in their usability evaluation without needing a developer’s time.


Tech-enhanced human expertise


The examples used throughout this article demonstrate the transformational advantages of incorporating GPT into the activities of dev teams, but this is not the sole takeaway. While the technology unlocks significant time and cost efficiencies, it does not supplant skilled developers: it enhances their work by enabling them to prioritize more creative, high-value challenges. The benefits of harnessing GPT are thus twofold, and they simultaneously shine a light on the vital role of human professionals.


It is abundantly clear that GPT is not in a position to make the work of Dev teams irrelevant, and that humans equipped with GPT will probably become the new norm. Areas where we see focus shifting include design validation, security testing, quality assurance, and intellectual property protection. These domains currently rely heavily on human input, and we don’t see this going away any time soon.


In summary


Software has been eating the world for the past 30 years; now LLMs have started to eat the software development process. This article takes a brief look at the impact of ChatGPT (based on the GPT-4 model) on designers and software engineers. Of course, this could be assessed across all organizational roles, and our findings point to massive implications for transformational change. With OpenAI working on the next iterations of GPT, allegedly with up to a trillion parameters, we can expect GPT-augmented humans to have a significant advantage over non-augmented ones.


About the Authors

Syed Husain

Principal IT Architect
London, UK

Syed Husain is a Principal Architect with BCG Platinion with more than 15 years of experience in IT consulting. He focuses on Solution Architecture, AI, and Data Strategy with Financial Services, Public Sector, and TMT clients in Europe and the Middle East. When not solving critical technology and business problems for clients, Syed can be found perfecting his Dota 2 and StarCraft 2 skills.

Navin Kumar

Former Lead Engineer
London

Navin is a software engineer with 20 years of experience working with large corporations as well as start-ups in the media, social networking, eCommerce, travel and hospitality, and banking sectors. He has an interest in machine learning and highly concurrent systems.