Unless you've been living under a rock for the past few years, you will have noticed the "early internet" levels of hype around AI.
Although human psychology tends to oscillate between extremes, such as greed and fear, there can be no denying that AI has changed and will continue to change our world in fundamental and extraordinary ways.
The very notion of what it means to be human is being challenged, and will naturally evolve as we incorporate AI into more aspects of our daily lives.
As with all such technological revolutions, it is natural for people to be knocked off-balance and to feel anxious about the impact it will have on their lives.
Whilst that anxiety is entirely understandable, if you're reading this, I would like to offer you some words of encouragement.
Firstly, the fact that you're reading this article puts you way ahead of most people, who still have their heads in the sand with respect to the enormous societal change that is rolling over us.
Secondly, as with all previous revolutions of this kind, human beings have shown an extraordinary resilience, resourcefulness and adaptability.
Thirdly, if you adopt the right mindset at this inflexion point in history, you will have a golden opportunity to ride the waves of innovation that are, at this very moment, beginning to form beneath the surface.
More so than ever before in history, if you can think it, you can do it. And let's not forget: as software engineers, we are uniquely positioned to take full advantage of the abundant opportunities that AI has to offer.
So, now that we've reminded ourselves of the enormous opportunity that lies before us, and of the fact that we're uniquely positioned to benefit from it, let's consider the best and most sustainable way of using AI in software engineering.
We will consider four guiding principles.
You have ultimate responsibility
No matter what AI tools you use or how you use them, if something breaks, you are responsible.
Imagine a scenario in which you ship a bunch of AI-generated code that you don't fully understand, and it causes an unexpected bug in production that negatively impacts your paying customers.
Those customers aren't going to care about AI's involvement in the problem. Nor are your engineering managers or the folks who are tasked with running your company.
They're going to want the problem solved and they're going to expect you and your fellow engineers to get it solved very quickly.
This is not a situation that you ever want to find yourself in. It can be hard enough tracking down and fixing bugs that have been introduced by humans, let alone having to trace through thousands of lines of AI-generated code that you don't understand.
So, whenever you're working with AI tools (which is something I would absolutely encourage you to do, by the way), make sure you understand what they're doing.
If in doubt, don't ship it!
Write great tests
Needless to say, having a great test suite makes it so much easier to catch bugs proactively and to refactor code with confidence.
With the rise of AI code suggestions and the addition of AI tools and agents to our codebases, it is more important than ever that we embrace test-driven development.
Code suggestions can be sense-checked fairly easily, as these can be accepted with caution, amended as necessary and scrutinised as part of the pull request review process.
Tools and agents are a different story though.
As someone who typically works with Ruby on Rails applications, I recently started working with the awesome RubyLLM gem.
One of the key benefits of this gem is that it allows you to create tools, which enable LLMs such as ChatGPT to retrieve relevant data from your application, so they can provide intelligent responses to domain-specific questions.
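To give you a feel for the pattern, here is a self-contained sketch of that kind of tool. Note that this is not RubyLLM's actual API (the gem provides its own base class); the Tool class, the OrderLookup tool and its order data are purely illustrative.

```ruby
# Simplified stand-in for the tool pattern used by gems like RubyLLM:
# a tool exposes a description (so the LLM knows when to call it) and
# an execute method that returns structured data from your application.
class Tool
  class << self
    # Stores a description at the class level when given text,
    # and returns it when called with no arguments.
    def description(text = nil)
      @description = text if text
      @description
    end
  end

  def execute(**_args)
    raise NotImplementedError, "tools must implement #execute"
  end
end

# Hypothetical domain-specific tool: look up an order's status, so the
# LLM can answer "where is my order?" style questions.
class OrderLookup < Tool
  description "Returns the status of an order given its ID"

  ORDERS = { "A1" => "shipped", "B2" => "processing" }.freeze

  def execute(order_id:)
    { order_id: order_id, status: ORDERS.fetch(order_id, "unknown") }
  end
end
```

The key design point carries over to the real gem: each tool does one narrow, read-only thing, which keeps its behaviour easy to reason about and easy to test.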
If, having created a bunch of tools, you then combine those tools and give them to an AI agent that is capable of taking actions on its own, such as emailing a group of users when a particular thing happens, you need to be extremely careful that this doesn't lead to unexpected or undesirable outcomes.
How do you mitigate the risks associated with AI tools and agents providing misleading responses and/or taking potentially damaging actions?
Realistically, there is always going to be an elevated level of risk associated with this type of application development. With that being said, by taking a conservative approach, writing great tests and shipping one small improvement at a time, you can reduce the likelihood of things going wrong and retain the ability to refactor with confidence.
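To make that conservative approach concrete, here is a minimal, self-contained sketch of the kind of guard rails and tests I have in mind for an agent action. The Notifier class, its allowlist and its dry-run mode are hypothetical, not part of any particular gem.

```ruby
# An agent-invokable action that emails users, guarded by an explicit
# allowlist and a dry-run mode, so a misbehaving agent cannot email
# arbitrary addresses or send anything before we've opted in.
class Notifier
  attr_reader :sent

  def initialize(allowlist:, dry_run: true)
    @allowlist = allowlist
    @dry_run = dry_run
    @sent = []   # record of addresses actually emailed
  end

  # Returns the addresses that were (or, in dry-run mode, would be)
  # emailed; silently skips anyone outside the allowlist.
  def email(addresses, subject:)
    permitted = addresses.select { |a| @allowlist.include?(a) }
    @sent.concat(permitted) unless @dry_run
    permitted
  end
end

# Plain assertions standing in for a proper RSpec/Minitest suite:
safe = Notifier.new(allowlist: ["a@example.com"], dry_run: true)
would_send = safe.email(["a@example.com", "b@example.com"], subject: "Hi")
raise unless would_send == ["a@example.com"]  # allowlist filters b@
raise unless safe.sent.empty?                 # dry-run never sends
```

Shipping the dry-run version first, watching what the agent would have done, and only then flipping `dry_run: false` behind a well-tested allowlist is one small, reversible improvement at a time.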
Use autopilot with caution
This principle is intended to reinforce the first two principles, whilst also taking things a step further.
I remember a time when you had to ask other people for directions if you got lost.
Now, I'm not saying that I'd like to go back to a time before GPS, because it has made my life, and everyone else's, far easier and less stressful.
With that being said, if I were tasked with driving from point A to point B and my phone died, effectively disengaging autopilot, I would be able to figure things out and get to my destination, even if I had to stop and ask a few people for directions along the way.
I feel the same principles can be applied to software engineering.
It's great that so many tools exist to make our lives easier. With that being said, if we're not careful we can become dangerously dependent upon them and that can have damaging consequences.
It's not just that we could flounder if those tools were taken away from us. It's the fact that we can become lazy, in a way that adversely affects the quality of our work.
Whilst it's great that we can use calculators to solve maths problems in milliseconds, there is a lot to be said for knowing how to solve those problems manually.
With all this having been said, I would encourage you to use autopilot with caution, as constantly sharpening your mental faculties will make you a better problem-solver and engineer.
Find the balance.
Don't be lazy. Be great.
Design for the future
The best book I've ever read about software design is Practical Object-Oriented Design In Ruby by Sandi Metz.
One thing that is guaranteed in software development is that the requirements you currently have will change in the future.
We don't know exactly when or how they will change, but we can rest assured that they will change and that we will need to adapt to them.
The true test of our application comes when we're asked to change things down the road.
If, when making changes to your codebase, you feel that you're constantly getting bogged down in the mud of complexity, you have failed to future-proof your application on account of poor design choices.
Whilst it's great that modern tools allow us to spin up new applications and ship new features at lightning speed, I would encourage you never to lose sight of the need to maintain your codebase over the long term.
A little bit of additional time and effort, invested in making great design choices upfront, will save you enormous amounts of time, energy and stress down the road.
Summary
In future articles, I will dig deeper into the specifics of AI tooling and how we can use it to turbocharge our productivity as engineers.
As a prerequisite to that, however, I felt it was important to set things in context and consider the need for doing things in a manner that is responsible, well-considered and sustainable.
I hope you've found it interesting.