alpha_squared 2 days ago

I mentor a few recent grads and folks who change careers into tech, so I'll share here what I've told them for years.

Writing code is the easiest part of software engineering. You're hired for your perceived ability to solve problems the employer needs solved. Many times you'll solve it with code. Sometimes you'll solve it with process. In every instance, you're expected to solve it using your learned experience and ability to critically think through the constraints (time, money, etc.).

If you can't be bothered to deal with the easiest part of your job, writing some code, how do you expect to be trusted with the hard stuff? I'm not saying don't use AI tooling, but I am saying don't cheat yourself by becoming dependent on it.

  • NoPicklez 2 days ago

    True, to a degree, but I think this reflects different levels of responsibility and experience as you grow in your career.

    Coding is the primary part of the job, particularly when you're a graduate: you're there to solve problems using code, and you need the skills to do that. If you don't know how to code, you will struggle.

    As you build that experience and grow in your career, you begin to learn and have the capacity to think bigger, and you start incorporating aspects of time, money, process and other business factors that you don't really have the ability to weigh if you haven't spent time in the arena.

    I work with a lot of graduates in cyber security and many of them have very "technical thinking" but don't think about business tradeoffs of time and money when looking at security controls. That is primarily because they're still trying to harness their technical skills, which is what they will do early in their career.

  • CharlesW 2 days ago

    > If you can't be bothered to deal with the easiest part of your job, writing some code, how do you expect to be trusted with the hard stuff?

    If writing code is the easiest and least impactful thing you do as a software engineer, why does it matter if you use AI to assist with that part? Or to come at it from another angle, why is using Stack Overflow/Google in the hunt for answers and examples good, but using AI models is not good?

    • kolinko a day ago

      LLMs, when used by an inexperienced developer, will produce code that has a ton of fluff and is difficult to read/change. The worry is also that if devs don’t learn to write simple code, they will plateau very fast, because they will not be able to continue beyond what LLMs provide - this was also an issue with StackOverflow, but to a lesser degree.

      Having said that, I’m absolutely pro using LLMs - it’s just a matter of using them properly for either education, or code writing. For experienced devs they save a ton of work writing boilerplate code and learning new libraries, for inexperienced they help overcome initial hurdles with grammar - and can teach how things work, if junior asks.

      As a dev with >20yrs experience - vanilla ChatGPT produces code that has 80% too much verbosity, but with good prompts you can cut down on that, and if you refactor what it wrote it is very useful - especially when dealing with unusual frameworks/languages. It’s also great at explaining concepts and “proper ways” to write code in new languages. It is very poor at designing novel architectures though.

    • hattmall a day ago

      I wouldn't say using AI to write code is bad at all. The problem is that it usually doesn't work in complex situations. If you already know what you want to do and feed it very specific instructions covering most of the bases, then it can save you some time for sure. The main thing is you really do need to know what it's doing to be able to reasonably utilize the output. If you don't actually know the capabilities of the code and you are stringing together generated output, then that's bad. Just like it would be bad to copy large sections of code from StackOverflow. It's also probably not going to work.

    • jpollock 2 days ago

      There is a lot of knowledge that devs gain by seeing how their code evolves over time.

      Things like "This encourages people to add features here, here and here".

      Without that experience, the code becomes inscrutable very quickly.

    • alpha_squared 2 days ago

      > If writing code is the easiest and least impactful thing you do as a software engineer...

      Not sure if I've hit a nerve, but I never said it was the least impactful. Code is the distillation of all the work that precedes it to accomplish a task. It's the easiest part, but it's essential to the final solution in most circumstances.

  • svachalek 2 days ago

    Couldn't have said it better. Unfortunately people who should know better like Zuckerberg are perpetuating the idea that coding is the whole job.

    • light_triad 2 days ago

      Partly because coding has become so hyper specialised in large companies, partly because folks who sell AI have an incentive to exaggerate to sell more.

      Supposedly programming is over, jobs will disappear, and we'll all be on UBI while AI turns into the Terminator. Meanwhile, designing software systems still requires lots of intrinsic motivation and persistence.

  • jc_811 a day ago

    Absolutely agree. I went through this as well with friends who did bootcamps during the covid craze. After 3 months they understood how to code and the syntax, and thought that was all they needed to be a “full stack developer”.

    When they weren’t getting any jobs I tried to explain that the syntax of coding is the easy part. The hard part is everything else.

  • globular-toast 2 days ago

    There's also the "big picture" part of software engineering, that is the architecture and design of a piece of software. It's all done in the head (and maybe on paper). Once my fingers hit the keyboard the code pretty much writes itself because the hard part has already been done.

    • asdff 2 days ago

      I wish we got more affordances here with this side of the experience. The nicest thing recently, I think, is the idea of a directed acyclic graph (DAG) that describes the system. But even these get cumbersome when many parts are involved, and they are only presented in 2D and fully appreciated only when they fit within your monitor. While it could be nice if these sorts of things were visualized in 3D, in virtual reality perhaps, I think that in 3D digital environments that often look structurally similar it is easy to lose the sense of subconscious wayfinding that we inherently have when we observe real things in real life.

      In other words, a messy DAG is hard to interpret, but with your messy desk full of papers you can subconsciously place where everything might be, because it exists in 3D space in real life, where all your senses are active on it as they are designed to be. Not in a simulated manner, but the real thing that can never be fully modeled.

      Companies used to model things in 3D via clay or balsa wood to help visualize ideas in this manner. Eventually people like architects started using AutoCAD instead of balsa wood. But one wonders if a certain dimension of understanding of a project was lost when we went from something we could walk around and look at unencumbered to a digital abstraction of some real thing.

      • williamcotton a day ago

        I agree, I find it way easier to orient myself with a desk full of printouts and highlighters than 45 Word documents open somewhere on a computer screen.

        • globular-toast 7 hours ago

          Yeah, or a bunch of textbooks and papers etc. I'll often have a book or two on my desk and many on the shelf near me. I find I can thumb to pages of interest from memory throughout my whole bookshelf. I've said it many times: the book is still the number one technology for reference. I've often thought it's about the practical benefit of having essentially limitless space for these materials, but there is definitely also something about engaging our real spatial-awareness capabilities for information retrieval.

kccqzy 2 days ago

I think this really isn't about AI, but rather about good self-discipline and the willingness and perseverance to explore deeply.

I have seen people who learned to code before AI, and yet because they don't bother to ask themselves another layer of "why" they continue to have a superficial understanding of everything. And I have no doubt that the truly excellent programmer will understand the whole system even if they learned coding after AI.

> I had the habit of not copying code from YouTube tutorials

This is what makes all the difference. If you know you should not blindly copy code from YouTube tutorials, you also know you should not copy code from AI output, and if you were to be born in my generation you would also know you should not copy code blindly from a textbook.

Kudos to the author though—I think this person has the right personality to become a good programmer regardless of whether they wrote their first line of code before or after the age of LLMs.

  • brailsafe a day ago

    Ironically, as a more experienced programmer, I regret not manually writing every line as I read a book or watched a tutorial, and deliberately do that now. I compare it to thinking I can learn something by listening to a podcast; I can learn *about* something from a podcast, but to learn it at even a shallow level, I have to have at least more than the most shallow interaction with it. Write the code, do it the hard way, and then once you've done that, use whatever you want to help you do so more productively. This has coloured my perspective on many things, and has led to me distancing myself from "information" media, because it's often fake learning that costs you the time you could spend actually learning. I don't know where AI sits yet in relation to this, but my sense is that in moments where I'm trying to understand something at a deeper level, it's more fruitful to think about it as hard as possible first, for at least a while, before looking for the answer. It's not productive in an output-per-hour sense, but it requires a different intensity of energy that feels like better personal growth.

  • layman51 2 days ago

    Maybe I’m missing the nuance here but I don’t understand the arguments against “blindly copying” in the parent post. I agree it is probably not good to give up like the sibling post is saying, but at some stages you kind of do need to copy code blindly, no?

    You shouldn’t copy if you’re a student. But if someone is a working Salesforce Administrator trying to learn Apex as a first programming language, I imagine a lot of the introductory code you will see around Apex Triggers will make no sense but you’ll have to blindly copy it if you want to follow along.

    Some learners will get frustrated with that teaching approach because they’ll be very curious about the syntax. I imagine at that point they’ll just consult other resources to get their questions answered.

    • jasonjmcghee a day ago

      Another distinction folks are making in other threads in this post: in situations where you are copying code, don't use Ctrl+C and Ctrl+V, but instead look at the code and type it out.

      The thinking goes, you'll learn much more quickly if you go through the motions of typing it out.

  • globular-toast 2 days ago

    Yes, and the persistence to figure things out when they're not working the way you think they should. So many people just go and ask for help the moment things aren't working and if they still can't get an answer they just give up. It doesn't matter if you're coming to me or ChatGPT, if I just tell you what's wrong you never learn.

mikeocool 2 days ago

One key thing that I found when learning to code mostly from actual books and online tutorials (well before AI) was to always re-type the code examples by hand instead of just reading them or copy-pasting them. If I just read the code, I'd always miss a bunch of details that I'd see when retyping.

I tend to do the same thing when using AI to explore something at the edge of my knowledge, where I don’t know exactly what I just asked the AI to code. I ask it how to solve a general problem of the class I’m trying to solve, and then retype that code as I’m fitting it into my specific use case. I find that helps me much better understand what the AI-generated code is doing, which comes in handy when it doesn’t work as described or goes wrong.

xrd 2 days ago

Hallucinating has always been of my favorite things to do, and I don't need an LLM to help with it.

rock_artist 2 days ago

It’s exactly like learning math before or after calculators were tiny and portable.

My son does basic calculations all the time. I know of some school-aged kids doing amazing things. As long as you use technology to enhance and enrich you, it’s good. But conscious use is the key.

You can try learning guitar only from YouTube. It might work, but you might also position your hand wrong and injure it. So balance and additional guidance are important.

commandlinefan 2 days ago

I'm glad google didn't exist when I learned to code.

  • abraae 2 days ago

    When I started at Oracle way back in the day, there was no Google.

    I was charged with building a DSL so that users could specify their own business logic and tax rules.

    I spent 5 x 16 hour days in a flow state coding in C and emerged with a parser that worked remarkably well, but was hideously slow.

    After sharing the results with colleagues, one of them told me about lex and yacc. I rewrote it, and it went from thousands of lines of code to a couple of configuration files and ran probably 1000 times as fast.

    Of course if search engines had been around then I could have skipped all those hard steps and that project would have been completed more quickly.

    But the skills and confidence I got from first doing it the hard way really helped in my career.
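For readers who haven't run into lex and yacc: the payoff comes from declaring the grammar and letting the generator emit the parser, instead of hand-writing one in C. A minimal illustrative sketch in yacc syntax (a toy expression grammar of my own, not the actual Oracle DSL, which isn't shown in this thread):

```yacc
/* Toy yacc grammar: parse and evaluate arithmetic expressions.
   Illustrative only; the NUMBER token would come from a companion lex file. */
%token NUMBER
%left '+' '-'
%left '*' '/'
%%
input : expr '\n'        { printf("= %d\n", $1); }
      ;
expr  : expr '+' expr    { $$ = $1 + $3; }
      | expr '-' expr    { $$ = $1 - $3; }
      | expr '*' expr    { $$ = $1 * $3; }
      | expr '/' expr    { $$ = $2 == 0 ? 0 : $1 / $3; }
      | '(' expr ')'     { $$ = $2; }
      | NUMBER           { $$ = $1; }
      ;
%%
```

Each rule pairs a production with a semantic action; the generated parse tables replace what would otherwise be thousands of lines of hand-rolled parsing code.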

    • tonyedgecombe a day ago

      I learnt about lex and yacc from the O'Reilly shelf in my local bookshop. The information was available, but you had to seek it out.

    • jessekv a day ago

      I wonder if whoever gave you that assignment also knew.

      Sometimes I see a junior dev on what could be considered a "pointless" deep dive, and I keep my mouth shut.

      It's the software equivalent of making your first kill.

gamedever 2 days ago

Is this different than "I'm glad calculators didn't exist when I learned math"?

I'm mixed. I did try to get gemini to figure out how to do some fancy TypeScript stuff. It provided solutions but failed to meet the constraints and couldn't get it to really get what I meant. It would say "Oh, sorry, here's one dealing with that" and then spit out code that either still had the same problem or ignored a previous constraint.

Anyway, more relevant, I did wonder if I wasn't trying enough on my own to figure out what I was trying to do. I failed either way haha.

  • genewitch a day ago

    Gemini is far and away my least favorite LLM. So far I've had OK luck with Windows Copilot for a simple React site. The site was done in like 2 hours, but I polished it up while the person I did it for was copyediting their whitepaper for the "site". I had 5 tabs in Notepad++ and was saving to a CIFS share so I could test immediately. React is pretty snazzy, but JS with its decorations doesn't look quite right to me yet.

    I used deepseek-R1-qwen-distill-15B to do a nodejs discord bot. That was done incredibly fast. I had been asking for a few days about a cellphone app, but llama, openai (didn't test 4o or newer yet), and deepseek didn't let me know that what I was asking for was technically impossible - service workers without a backend somewhere, so I could keep track of timers and fire notifications. Like a PWA.

    After asking Copilot about it the other way around - "can a service worker's URL be file:///?" - apparently not.

    On all of the service worker nonsense, the LLMs gave me the two-step - make change, revert change, repeat.
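For context on why the file:/// question matters: browsers only allow service worker registration from secure contexts - https:, or http: on a loopback host during development - so a page opened from a file: URL can't register one at all. A minimal sketch of that rule (the helper function below is my own illustration, not a browser API):

```javascript
// Sketch of the secure-context rule for service workers. The real check
// lives inside the browser; this helper just models the outcome.
function canRegisterServiceWorker(protocol, hostname = "") {
  if (protocol === "https:") return true;
  // Loopback hosts are treated as secure for local development.
  if (protocol === "http:" &&
      (hostname === "localhost" || hostname === "127.0.0.1")) {
    return true;
  }
  return false; // file:, ftp:, etc. are rejected
}

console.log(canRegisterServiceWorker("file:"));                 // false
console.log(canRegisterServiceWorker("https:", "example.com")); // true
console.log(canRegisterServiceWorker("http:", "localhost"));    // true
```

In a real page you would still feature-detect first, e.g. `if ("serviceWorker" in navigator) { navigator.serviceWorker.register("/sw.js"); }`.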

pier25 2 days ago

The problem is that people are using AI to avoid learning to code.

  • tonyedgecombe a day ago

    These are the people who will be most vulnerable to being replaced by AI.

j_bum 2 days ago

Great article. I share this sentiment and am glad I learned to code in grad school with only stack overflow as a spotty resource.

We can definitely date OP by his use of “rawdog”, the context of which is actually hilarious and made me laugh out loud.

assimpleaspossi 2 days ago

I'm glad I became a programmer before anyone became a coder.

Sincere6066 2 days ago

I wish AI didn't exist so I could be glad.

timetraveller26 2 days ago

What's going to be next, no-code and just endless a/b testing?

2OEH8eoCRo0 2 days ago

I'm glad I got my degree before AI came and cheapened them all.

johnea 2 days ago

Me Too! and I was a high schooler in the 1970s...

Since I'm close enough to the end of my career to be able to ignore the whole trend as a developer, I'm mostly dreading having to "talk" to some frikin bot every time I need support on some product.

If you agree that "customer support" system hell is bad already, then you have to know that this is all on the verge of being much worse...

me: My payment keeps being rejected on a known good card...

bot: I'm sorry Dave, I'm afraid I can't do that...

  • triyambakam 2 days ago

    I don't see it as very different than humans who must follow a narrow script. And scaling support is a good thing. And then there's still prompt injection in lieu of social engineering

    • johnea 19 hours ago

      I think the difference is that I would advocate for making it better, not worse...

casey2 2 days ago

If you learn to program before learning to code, then with AI you will never have to learn to code.

ein0p 2 days ago

30 years into this career I'm still learning to code, just different things than the ones I started with. And without a shadow of a doubt, AI is a godsend for me. You can have it code up a well-known algorithm you already know in a different language, or for a different platform (e.g. GPU), or vectorized, or without dynamic memory allocation (for embedded) - limitless variation. If something written in unfamiliar syntax is not clear, just cut and paste it and have the model explain the idioms like you're five. Unfamiliar library which you aren't even going to need again? Just have it write a canonical usage example and go from there, and then ask for advice on how to do what you want to do. Take the output as a starting point.

Endless array of tools, filling the exact gaps most of us have. My son is a CS student, and I'm telling him the exact same thing as I just wrote - if you use this stuff _and_ your brain at the same time, it's a tremendous force multiplier and an absolutely unrivaled educational aid.

Waterluvian 2 days ago

I mentor a high school robotics team and they use copilot for Java coding and on occasion I’ve seen them write code that just kinda breaks the robot and then they’re totally lost for a while.

But I can assure you, the alternative is there being no coding team and maybe no robotics team at all.

I think that maybe I’d say: it’s good that AI isn’t good enough to never break the robot. That’s when they’re learning a lot about coding and debugging.

  • UncleEntity 2 days ago

    > ...write code that just kinda breaks the robot and then they’re totally lost for a while.

    Um, yeah, that's how I roll.

    Try something, it doesn't work, debug for a while and maybe I can get it to work?

    My favorite example was when I was trying to get the Python buffer protocol working for images in Blender. I read the docs, wrote some code which didn't work, and started debugging for a long while until, finally, after digging into Python's repo, I found out it wasn't even fully implemented. Needless to say, I was less than happy they would let a half-baked feature into a release.

    I can say the current batch of 'thinking' AIs are pretty good at debugging once you get them pointed in the right direction.