Is AI Hurting Software Engineers?
The Half Stacks
March 21, 2025
00:44:55 · 41.12 MB


AI is making coding faster and more efficient, but at what cost? In this episode of The Half Stacks Podcast, we dive deep into the ethical dilemmas surrounding AI in software engineering. As AI reshapes the software industry, where should we, as engineers, draw the line between convenience and craftsmanship, ethics and efficiency?

[00:00:00] Welcome to The Half Stacks with Paul and Steph, your go-to podcast for tech jabbers. On today's episode, AI and software engineering. What has your experience been with AI? You know, there's kind of two different tracks: working on AI as an actual product, and kind of delivering some of the UI that drives it, which has been interesting.

[00:00:30] But then there's also the fact of kind of integrating it into your workflow. And there, I'm still kind of developing my processes. Yeah. Like, there's certain things that it's great for, and I find it really helpful. And there's other things where I'll just be like, no, go away. I haven't yet tried to integrate AI into my developer workflow.

[00:00:58] Because the impression I've gotten from other people is that it writes a lot of stuff for you. And I honestly don't want that. Right? I like programming. I like coding. I like the act of typing instead of tabbing. I'm not saying it's bad. I'm just saying that from what I hear, it is not a process of working that I would enjoy.

[00:01:25] I think you need to re-examine how you're using it, then. Well, it's fine. Because you're afraid of it taking away from your development when it can be adding to your development. Yeah. It doesn't have to be a replacement. It can be an enhancement. I think... Because for me, the place I use it the most is when it comes to unit testing.

[00:01:54] Where a lot of times you're kind of repeating a lot of things just with different sets of data. Yeah. And so it makes the writing of unit tests a lot more efficient. I'm not against AI, even though a lot of times it does seem like I am very anti-AI. But I'm not against it. Like, if I'm struggling on something, I have no hesitations on asking ChatGPT to help me figure it out. I'm not anti-using AI.
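
To make the unit-testing point concrete: the repetition described here is the table-driven kind of test, where only the inputs and expected outputs change between cases. Below is a minimal sketch in Python with pytest; the function, names, and data are hypothetical illustrations, not anything from the episode.

import pytest

def normalize_username(raw: str) -> str:
    """Toy function under test (hypothetical): trim whitespace and lowercase."""
    return raw.strip().lower()

# One test body, many data rows. This is the repetitive shape an AI assistant
# can draft quickly, and that is also cheap for a human to review.
@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Alice", "alice"),
        ("  BOB  ", "bob"),
        ("charlie", "charlie"),
        ("", ""),
    ],
)
def test_normalize_username(raw: str, expected: str) -> None:
    assert normalize_username(raw) == expected

Only the data table grows as cases are added, which is why generating or extending it with an assistant stays easy to verify.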

[00:02:24] I just think that, you know, when you make your money from programming, and that skill is something you have spent so many years trying to develop and become really good at, when you hand it to AI, it seems counterintuitive to also worry about AI taking your job. Which, I personally don't worry about AI taking my job.

[00:02:53] I think I am far better at programming than AI is. But, you know, it's like, it's a common sentiment right now that AI is going to take over software engineering. Right? And it's like, okay, that may or may not happen. But if you like programming and you like your job, you would think that you would want to do your job. Right? Yeah.

[00:03:21] Or do you think it replaces you or cheapens you? Probably cheapens. I don't think, I think we're a long ways off from AI being a self-driven software engineer. I think, at the end of the day, computers are dumb. Right?

[00:03:40] Like, you can write functions to output something or, like, you know, you can build a chat program with ChatGPT. But, at the end of the day, you're the one that needs to initiate the conversation in some way in order for that output to happen. Like, in everything programming, there is an input and there's an output. Right? Like, someone has to trigger it. Right? Right. But, I don't know. I really value programming as a skill.

[00:04:10] But, like, for actual active programming, I think it's just, I would rather use the skills I worked so hard to build up over the years than hand it over to AI. Because, most use cases for, like, using something like Cursor or Copilot is so that you're faster. So, you're giving up your programming for the sake of speed.

[00:04:38] And, I don't know. I just don't think... I don't think of it as a way to increase your output. I think of it more as, where do I want to be spending my energy? Like, is this a useful place for me to be kind of spinning my wheels and doing things? If the struggle I'm going to get out of this is going to make me a better developer, then I'm probably going to spend time there.

[00:05:05] And I'm going to try to learn more about what's going on and really look at that problem that's in front of me. But if it's a case of, I'm really going to get no value out of this, I just need to get this done, then, I'm sorry, but AI can help out with that. And if it's a case of, you know, something that's going to prod, I have less of a tendency to use it there.

[00:05:30] Mainly just because I know that AI hallucinates and it kind of does things that it shouldn't. And so I would rather my skill set be applied to something like that, and I pay attention to it. Sure. I just don't want software engineering to become super AI-driven. I can see how it can change how programming happens.

[00:05:54] But I never just want to be writing prompts in an editor and reviewing code. Like, let's be honest, sometimes it's hard to even just review a human PR, let alone an AI PR where you are just betting that it's going to be crappy code, right? Well, but I mean, realistically, I can see, like, a product person going in and being like, here are all my requirements.

[00:06:24] You know, generate the code for me. And then handing it off to a developer and being like, I did your job for you. Aren't you happy? It's like, no, you really didn't do my job for me. Because we work with frameworks that your code does nothing on. And so, you know, again, like, I think there's a skill set that you have to have to do your job. But an AI can't replace that. But it can enhance it.

[00:06:50] Playing around with, like, ChatGPT or any of the other AIs, like, they can build really simple apps. But the directions you give them have to be super specific, right? You know, I've built a couple of things with, like, ChatGPT. Not in my editor, but just, like, through conversation, copying and pasting code.

[00:07:12] At a certain point, you know, they don't have a large enough context window to maintain all the features that you want. So unless that problem is solved and they can remember all the things, I think a product manager or someone that is a non-programmer is going to run into being limited by the context window.

[00:07:37] Like, you can't build any more features without, you know, ChatGPT forgetting all the things that, you know, you have told it in the beginning, right? Yeah. I think that's one thing. But I also see a lot of developers being like, oh, I'm not going to touch that. Right. Like, they're almost kind of scared of it.

[00:08:01] And it's like, the way I've kind of been looking at it is more like learning to use Google for the first time. Like, you know, when you first started using search engines, you kind of did, you know, these random, you know, questions or terms to kind of figure out what you want to learn. And, but as you get more and more used to it, you know how to fine tune your search term to get the results you want.

[00:08:29] And I think that if you're not learning how to kind of prompt the AI to get what you need out of it and learning that skill set, eventually, you know, AI is going to become a part of our job. Yeah. And fighting it is not going to help you in the long run. It's going to hurt you. Yeah. I think you should, at the very least, know how to get what you want out of AI.

[00:08:52] Whether that is all entirely possible right now is questionable, because it's still under semi-rapid evolution, right? Yeah. So, to me, I think kind of developing that skill set of how to prompt the AI to get what you need out of it is probably the best idea. You know, because, like, there's certain things where, like, I'm not great at writing shell scripts.

[00:09:19] Like, it's not my top skill and I don't really want it to be my top skill. And so if I'm doing something really complex when it comes to, you know, writing a shell script, AI can help with that. I don't need to, you know, really spend a lot of time there. And so that has been immensely helpful to me. I guess, at the end of the day, everything has its time and place, right?
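
For the scripting case, the kind of small one-off task being described might look like the sketch below. It is written in Python rather than shell purely for illustration, and the directory, threshold, and file pattern are all made up; the point is that a throwaway script like this is easy to verify end to end even if an assistant drafted it.

from pathlib import Path
import time

LOG_DIR = Path("./logs")   # hypothetical directory
MAX_AGE_DAYS = 7           # hypothetical threshold

def stale_logs(directory: Path, max_age_days: int) -> list[Path]:
    """Return .log files under `directory` older than `max_age_days`."""
    if not directory.is_dir():
        return []
    cutoff = time.time() - max_age_days * 24 * 60 * 60
    return [p for p in directory.glob("*.log") if p.stat().st_mtime < cutoff]

if __name__ == "__main__":
    # Print each stale log file with its size, so the result is easy to eyeball.
    for path in stale_logs(LOG_DIR, MAX_AGE_DAYS):
        size_kb = path.stat().st_size / 1024
        print(f"{path} ({size_kb:.1f} KB)")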

[00:09:42] Like I'm not against AI if I'm just building something super small, something I'm probably just going to use for a little bit. If you're not, if it's not your product and it's not something you are going to care too much about and it's just for you, it's like that's fine, right? Like it really depends on how much you care.

[00:10:04] But I think for something where you care a lot or it's your job, I think it's worth the time and effort to do it yourself. When it comes to like the job portion, you kind of seem more hesitant about using AI. Because of the accountability factor. Right. But let's say that you reframe that and looked at another profession.

[00:10:29] So let's say that you're looking at a writer, and they're using it to kind of generate ideas or to validate copy. Would you think less of that writer because of that? Or is it because of it being in your own industry and profession? You want the truth? I've never been a writer. So let's just put that out there.

[00:10:57] If I heard of a book or if I read a book that I knew the writer was like using AI to like... Okay, there's different levels, right? If they used AI to produce the text of their book, like word for word, I would definitely look down upon that person. If they used AI for research, probably I wouldn't look down on them.

[00:11:25] Because is that any different than using Google for research or reading a book for research, right? I just think that if you're automating to a point where you're not putting any effort into what you're producing, it's not you producing it anymore, right? So if you're basically writing everything yourself, but using it to either double check your work or accent your work.

[00:11:55] Yeah, like I think fact checking is a good thing. But you should also fact check whatever AI you're using. Oh yeah, definitely. Like ask it to give you the URLs of the sources because for all you know, it came from some Geocities website from 10 years ago, right? I mean, you have examples of people in the real world using AI, not fact checking it, and then going into their actual profession being like, Oh, here's this thing I did. And someone's like, Did you do that?

[00:12:25] Because that is completely wrong. Like, you should almost never use ChatGPT for cooking recipes. I'm speaking from personal experience. It can lead you down a really bad path. I'm being absolutely serious. It's like being able to watch a video and thinking that you're going to be a professional.

[00:12:46] Like, if you want to learn how to cook or make bread or do any of those things, like, ChatGPT can be a resource. But you should also try to find resources from, like, well-known, or even not well-known, but just well-experienced people that have gone through the tribulations of learning whatever you're trying to do.

[00:13:11] I believe the consensus is, as long as you're using it as a way to enhance your work, it's fine. Right. Like, in terms of programming, I would treat AI as a really dumb coworker. So, yeah, I mean, I can understand what you're saying of kind of treating it as something that you kind of need to second-guess, like someone who's not that great at their job. Right. And so, yeah, I can kind of understand that.

[00:13:41] But at the same time, again, I don't think you can completely discount it as a resource for you. I don't mind using, like, ChatGPT or whatever AI is out there to, like, rubber duck. But it's a whole different thing when you're like, hey, can you write me a thing? And it does all the producing, right?

[00:14:09] So I think the act of producing something and calling it yours is valuable, and the way you produce it should not be with AI. Like, I would never... well, okay, this could be taken with a grain of salt, but I would never ask my coworker to do all my work for me. But I may ask them, hey, here's where I've gotten so far. Do you see any downsides of doing it this way?

[00:14:38] Or is there anything that you think I should change about this? Right. And kind of use it as a verification of your plan. That makes sense as well. I think another place where it really, like, shines is when it's something outside of your domain. Like, you know, a lot of times there's something that will happen in, like, a continuous integration pipeline, and I'm like, I have no idea what's going on here. What does this mean?

[00:15:07] And so, you know, something like AI is very helpful in that, of, oh, you know, this is where the smoke might be coming from. Yeah. There's a lot of use cases for AI that I think are acceptable. But we, or at least I, try to keep in mind that AI was trained off of other people's work, and they're not paid for that work, and they're not being paid to have their work used. Right.

[00:15:36] So, you know, that's a whole conversation in itself. It's like, I don't want my work to be built off someone else's hard work that they're not being fairly compensated for. So there's the, like, moral aspect. I would never hesitate to ask someone for free, like, hey, what do you think about my code? Do you think anything's wrong? Right. Outright stealing is still wrong. Right.

[00:16:04] Even if it's indirect stealing. Right. Yeah. No, I get where you're coming from with this. I think it's a denial of reality. Like, you want to live in the perfect world so much that you don't see that that world's kind of gone by the wayside.

[00:16:27] Like, unfortunately, I don't necessarily like that AI was trained on a whole bunch of data that they had no right to. But at the same time, what is done is done. And kind of being like, oh, you know, I don't like the ethics around it, and therefore I can't use it at all, I think there's a big danger in that for our profession.

[00:16:51] Because I do think that it is going to start being a part of our everyday workload. It's just another tool of our trade, and not using it can leave you left out. I agree. I think to some extent, I agree. But I also don't want to get in the mindset of, okay, well, this is our life now, right?

[00:17:14] Because, well, in this scenario, it's probably true that it will be a part of our profession, whether we like it or not. But I think it's also dangerous to be like, well, this is just the way the world is going. And therefore, we should just accept it, right? If, you know, stealing and looting places became a norm, I don't think we should all just cave and be like, all right, well, I need groceries. I'm going to go break a window and get what I need without paying for it, right?

[00:17:44] So I think maintaining that balance of, like, I don't know, acceptable evils, if it should be acceptable at all, you know, is important. Because, like, right now, I just don't have desires. I don't know if I have... It has a basis and essentially effect.

[00:18:11] So it's, you know, you kind of have to find, or is there a way that we can kind of fight the fact that, you know, things are being stolen? Or how do you approach that portion? I don't know if I know how to, like, solve the situation, but... You can't solve the world's problems? No, I try, but, you know, I'm busy.

[00:18:38] Right now, I see it as I still have the choice to choose, right? Like, AI, while it's being used in a lot of places, it's not being forced upon me to use it. And therefore, I still have that choice to not use it and instead, you know, grow skills or whatever. And if I want to use it, I can. It's there.

[00:19:02] If I don't, you know, I will pay the cost of not using it at some point. But in the future, I don't know how we're going to justify these companies training off of free internet data and justifying any copyright issues around it, right? Because, you know, like, before LLMs and AI ever showed up, people were against large corporations.

[00:19:32] Like, everyone almost hated Walmart at some point. But then you're suddenly okay with OpenAI or whatever company. I mean, realistically, you're talking about a pattern that's existed since the industrial age. Yeah.

[00:19:49] Like, realistically, you know, before AI, when things really started, like, you had natural resources being taken left and right, and it continues on. Yeah. Again, it's not a problem that is new. So, you know, how do you draw that line? If you're kind of going along the lines of, I'm going to stay away from AI because of this,

[00:20:17] it kind of starts cascading back to where you have to look at all these different things. Well, I'm also kind of a hippie. Like, I grow vegetables. I own chickens. Like, you can't say I'm exactly your normal everyday person. But, you know, I do, you know, I have a car, right?

[00:20:40] For the majority of it, the things that I use, like, someone was paid to build that car that I drive, right? Someone was paid for the desk I use. Someone was paid to even set up the garden area that I have. Paid by me, but still. Right?

[00:21:00] So, but I think the difference here is all these AI companies, as far as I know, don't pay for all the data they're training on, right? And then they're charging people for the use of data that they got for free. And then they get upset that, oh, how dare you? You used my model without my permission. It's like, well, you used someone else's work without permission. So, how do you draw that line?

[00:21:29] So, I mean, even the company is having issues trying to solve this problem. Yeah, and I just think that there is a balance. You know, like, I don't know if everyone that worked on my car to manufacture it was paid fairly. So, there's some of that, too, right? Like, they might not have been paid. They probably weren't paid fairly. At some point, though, it's like we still have to live our lives.

[00:21:56] I just try to minimize how much bad, and I use "bad" loosely, I participate in. Does that limitation give you any pause or fear for your career potential in the future? No, it doesn't. And I can explain why. At one point, I was a kid that was learning to read, learning to do math, right?

[00:22:26] And I was able to learn those skills pretty okay. Like, I think I'm a decent reader, writer, mather. And at one point, I didn't know how to program. And I never went to college for it. And I learned it all on my own. And I don't have any formal training aside from being on the job, right? And I learned it, right?

[00:22:56] Everything in my life up until now, I have been able to learn when I needed to learn it. And I am fairly confident, given enough demand and need for a skill, I can figure out how to do anything. So, right now, it's like when I look at LLMs and using them, it's like you're just writing sentences.

[00:23:25] You just have to know how to be manipulative enough to get what you want, right? Exactly. And fortunately, I learned how to write sentences when I was like seven or something. Yeah, yes, but I also think it's a skill set. I think getting your point across to the AI of what you're trying to get out of it, you are developing a certain skill set.

[00:23:49] I don't think AI will ever be good enough to take my job, or even for me to use, if I have to write to it and talk to it in such a specific way. No, but I think you're also looking at it at today's technology. Right. And when things evolve, I will evolve. It's just like how we grew up as kids, or at least I did.

[00:24:18] We were writing more with pencil and paper than probably any kids do now, well, any kid but mine, right? But I mean, realistically, like, I think we both grew up at a time where, you know, we didn't have a phone in our pocket. Right. We didn't have that connection constantly. Right.

[00:24:37] And so the way I kind of look at technology is, some of the things that I never would have thought possible when, you know, I was a kid have kind of become more possible. Yeah. And so I've gotten to a point where I'm not going to discount that technology can do something. Technology can absolutely do something.

[00:25:01] However, I think we should always retain our right to what technologies we use. Right. Like, I don't think I would enjoy a society where you are forced in your daily life, you are forced to use something. Like some things can't be helped. Like tap to pay. I'll be tapping my phone all day long to not have to worry about my credit card. Right.

[00:25:31] Like, some things, I will just openly accept the technology. Right. But for things that you just don't morally agree with, I think you should have the option to choose to not participate in that. And maybe that means one day I'm not a programmer using AI. Maybe I'm just one of those old programmers that still types on the keyboard. I mean, like, yes, there's always trade-offs. There's always trade-offs in those decisions.

[00:25:58] I'm not saying that there's no trade-offs. There's always trade-offs. We're software engineers. There's always trade-offs. But I still think it's important as a society to retain those decisions. Yeah. I completely agree. I don't think that AI is close to replacing anyone. Well, not when it comes to programming. Yeah. I mean...

[00:26:24] But I also can see pathways where it is going to get better and more efficient. Yeah. And I think we both have experienced enough of corporations to know, you know, they're going to do what's best for them. Yeah. And, you know, eliminating a programmer is what's best for them.

[00:26:47] I mean, I guess the best hope I can have for AI, with them using all this data that they didn't pay for, is that one day the product they build will turn around and help the people that they stole from.

[00:27:06] If some writer or artist is, like, not making it in the time they need to make it and they're struggling on their day-to-day, I hope one day AI helps them to make the money that they need, right? Like, in general, maybe I'm too frou-frou for this world, but you should put stuff out in the world that makes the world better, not worse. That's not the life we live today. Like, people put stuff out in the world to make money.

[00:27:36] I completely agree. My issue is more that, unfortunately, I think that AI is going to make the wrong people more money than it should be. And that's always been a problem, isn't it? As much as I would hope that eventually it helps a writer whose work was used to make AI or to program it.

[00:28:02] I realize that the company who made that AI is going to make a lot more money than that writer ever will. Yeah, that is absolutely true. If we could go back and we were starting an AI company, right? Let's say we were going to build our own model, our own LLM. I mean, AI is already very expensive to run. Like, I don't even want to think about the environmental cost of AI.

[00:28:27] But it's like, if you have such money to run AI, I feel like you should have such money to be like, let's go and talk to a bunch of the best experts in the world that have been in their industry for a long time. Let them know what we want to do. We want to build the greatest AI model there is, right?

[00:28:56] And you collaborate with them. It'd be very expensive, but I still think you would end up with a better model with that collaboration than without. Because right now, what do we have? We have AI trained on the internet. You know how often the internet is wrong? I can't agree with you more that training on the internet is probably not the greatest idea, considering what's there.

[00:29:26] But at the same time, you know, a company is a company. And they're going to use a free resource over a paid one. Yeah, the whole flaw in the system is that money matters. With that, you're trying to point out a problem with a large portion of society. Yeah. But even in a smaller, you know, space like your career, like if you want to advance in your career,

[00:29:54] like at some point you have to play the game that is necessary to advance. Whether that is, you know, I don't know, changing jobs a lot or doing stuff, right? So, unfortunately, we don't always have a choice, you know.

[00:30:13] But I still think that, you know, they could have done better with collaboration versus just taking it outright, right? Well, realistically, if you're looking at AI, you also kind of have to look at social media.

[00:30:33] Because social media, you know, they capitalized on not only people's work, but people as a person. Yeah. Like they sold privacy, essentially. I think social media in a lot of ways is a problem. But I mean, it's not to derail the conversation, but it's to say, you know, over and over again,

[00:31:00] we've seen technology have this, you know, altruistic mindset of, I'm pushing the envelope, I'm developing something new. And when the rubber hits the road, they're like, yeah, I developed something new, but there's externalities and costs associated with it that I'm just going to kind of ignore.

[00:31:26] And it's someone else's problem, even though I was the one creating the problem. It's really hard to navigate life and not take part in some or all of that, right? Like you can avoid social media or not produce things on the internet if you don't want your, you know, creation to be stolen.

[00:31:50] You know, and maybe that was some of the justification of why it was okay to train LLM models on internet data. It's like, well, you put it out there. Is it any different for me to just go read what you did or, you know, examine the code you produce and like copy it myself and feed my files into my LLM, right? Like there, there's a huge gray area.

[00:32:19] Like, is it okay because I put that minute effort, and I almost knocked down my water bottle there, that minute effort into a file, and therefore my LLM is legit? Or, it's really hard to know, like, what is stealing, right? Like, it used to be a little bit more clear when you went to a bookstore, you bought a book, and the author got some portion of what you paid for.

[00:32:45] But it's much harder on the internet where I think most websites don't get paid anything to be there. Right. I think when it comes to, again, when it comes to technology, it's, there's always the push to, you know, push the envelope a little bit further to do better. But we all have to get paid. And so it's a case of, you know, yeah, you may be putting this out there,

[00:33:15] but it's really going to be used in a way that you may not want. Yeah. But it's the cost of sharing. Yeah. Unfortunately, that's the case. Like, even aside from, like, AI stealing things, like people steal things on the internet all the time. Yeah. And things that you put out there are used for better or for worse. I mean, that's how memes exist, right?

[00:33:44] Like, like famous people get memed all the time and they probably get photoshopped in ways they don't really care for millions of people to see. But I don't know. I would like to think that they knew that this is a possibility when they became someone that decided to put themselves out there. Now, I personally don't try to meme people. So I have no idea.

[00:34:12] I try to treat people on the internet as, like, regular people. I mean, when it comes to something like that, you know, some people who had no intention of becoming a meme have now made careers off of being a meme. Yeah. As much as there's dark sides to things, there's light sides.

[00:34:30] And I think that's kind of the theme that we've kind of struck upon: as much as we, you know, don't want certain aspects of things, it's kind of the cost of using it.

[00:35:12] Yeah. It's just like, oh, wait.

[00:35:41] I didn't think of that portion. You know, we both have worked for large companies, so to some extent we feed into that whole thing. And, like, I'm sure you'll agree, I don't necessarily know how my work is going to be used for the bad at my company, right? Like, we're just

[00:36:08] everyday people. We get Jira tickets, we get emails, we get requests, and we do it. And for the bad that I know about, it's like, I try to stop it, right? Or you can still have that ethical bound. Because, you know, I know that there's certain times where I've been asked or been told, you know, oh, we

[00:36:33] need to do this, and I'm like, no, that's not okay with me. Whether it's the case of, you know, in the past, trying to do some sort of advertisement that went a little too far, or, you know, thinking, oh, we want to do this new experience but we can't make it accessible. Like, there's a number of situations that arise in everyday programming where it's like, oh, that could be an

[00:37:00] ethical question. I think you can apply your own moral standard. I want to think that people have boundaries on what they will and will not do for their company. Now, I'm not saying we all have the same boundaries, but somewhere in you, you have to have this thing of, that's gone too far. And maybe you need your job to the point where, you know, you can't quit on the spot and just be like,

[00:37:27] I'm not doing this. But I think, at the very least, you should raise a flag and be like, hey, is this something that is right to do? Right? I mean, I'm not saying that fixes it all, but, you know, I think it's good to have some moral backbone in yourself, to where you're going to start questioning things, because maybe no one thought about it. And that has happened. Like, I think that a good portion of that comes from

[00:37:54] having different people in the room when the decisions are made, right? I might approach something from a development side, asking is it possible from an engineering perspective, and, you know, I might bring my moral compass to those decisions as well. I think you need product and UX and all of our partners to

[00:38:18] bring their full background to something as well. That way, when it comes to something like AI, where you're training it on someone else's work, someone who writes might be in that room and say, wait, you realize we're using someone else's work here without paying? How is that okay? Yeah, unfortunately, that's a hard boundary to draw, you know, because you have to be ultra aware

[00:38:47] of the source of the data that your company is allowing you to use. Which, for me, I don't always know where the company data came from, right? Like, it's just some endpoint that they told me to hit, and it'll give me what I need, and I don't know where any of that came from. But the other factor is how money-driven the people in the room are, right? Like, some people will do anything for that

[00:39:16] next promotion or for that, you know, big payoff. Which then, you know, they might not... I mean, we both know people who would put away their morals, right? Yeah. It's just not the kind of person I choose to be, you know. But unfortunately, that just exists in the real world. Like, you know, as much as I like taking a

[00:39:39] moral stand, like, at some point you just need to function in society. Like, I can choose, you know, for a large part, where my food comes from. But, you know, if I need to do my job and my job forces me to use AI in some way, do I quit my job because I'm against it? That's a little iffy, because I kind of need it,

[00:40:08] right? So, I mean, that's the thing that I'm kind of weighing myself. At first I was really like, oh, I don't want to use this, I don't want to use something that's going to essentially cheapen my work, right? And then the more I looked at it, I was like, okay, but then there's a possibility that my performance actually gets reduced, right? Because others are using it and I'm not. Yeah. I think, in the end...

[00:40:37] And, like, I cannot say it's wrong or right to use AI or avoid AI, right? It's a choice that individuals have to make, you know. But what I would say, or what I would encourage, is to make sure that you're maintaining your own skill set, right? At the end of the day, AI and LLMs are just another

[00:41:04] product, and products come and go. It's a thing that can replace your skill set, and if you allow it to diminish your skill set, you're just... Right. So, it's like, there's no judgment for me on whether someone uses an LLM in their daily job or not. But it's like, just make sure that you are using your skills in

[00:41:27] some sort of capacity, so that if they all go away at some point and you actually need your skills, you still actually have them. Maintain your education, maintain all the effort that you put into learning your skills, so that if one day you happen to need it without the aid of an LLM, you can still do that, right?

[00:41:52] Yeah. I always struggle with, how can I say I enjoy programming if I don't want to do programming? Like, I love programming. It is not a chore for me to write code at all. For me, it's not a chore. I enjoy programming, but there's also a limitation: for me, it's not

[00:42:20] the thing I love the most in the world, no. And so that's always kind of been a little bit of a struggle for me, because it's like, if, you know, I have a personal project where I can get value out of it, then I'll do a personal project. But generally, you know,

[00:42:44] coding is my profession. Yeah, I think that's where we kind of differ, because it's my profession and my hobby. And I always tell myself, like, programming is my hobby first, and the day I stop enjoying it is the day I should quit my job so that I can enjoy it again.

[00:43:05] That hasn't happened yet, because I like it. I love it. Like, it gets a little bit dangerous when it comes to, like, AI kind of being like... I don't think that looking at it as a kind of blocker to, like, enjoying it is...

[00:43:28] I think the part where it becomes a blocker to enjoying it is the delegation to AI to write your code, right? It's like, if you enjoy being a programmer, why do you not enjoy producing your code? Why don't you enjoy programming? Why do you want an aid? Now, there's different levels, right? Because I know that there's some things where it's like, they are only using AI for that single-line completion, and you're still

[00:43:55] largely dictating how the code is being written. So I'm only really talking about the AI doing 90 to 100 percent of the work, where it's like there was no thinking involved. Yeah, right. As long as it's an accent, again, I think it's a completely different story. Yeah. I don't know. I think the takeaway is, you have to make your

[00:44:19] own choices, and to make sure that you're taking care of yourself and your skills first, before you get too carried away to where you find yourself one day not able to do anything without an LLM. Yeah, right. I can't agree more. Yeah. All right. Cool. Okay, see you next time. Bye.