For those not aware, Shift Left[1] is (at this point) an old term that was coined for a specific use case, but now refers to a general concept. The concept is that, if you do needed things earlier in a product cycle, it will end up reducing your expense and time in the long run, even if it seems like it's taking longer for you to "get somewhere" earlier on. I think this[2] article is a good no-nonsense explainer for "Why Shift Left?".
[1] https://en.wikipedia.org/wiki/Shift-left_testing [2] https://www.dynatrace.com/news/blog/what-is-shift-left-and-w...
Sounds like the exact thing we never did at my previous employer. Just do the bare minimum to get it out the door and then fix it later.
That's an orthogonal concept. The idea of "shift left" is that you have work that needs to be done regardless (it's not an analysis of "bare minimum"), but tends strongly to be serialized for organizational reasons. And you find ways to parallelize the tasks.
The classic example is driver development. No one, even today, sits down and writes e.g. a Linux driver until after first silicon has reached the software developers. Early software is all test stuff, and tends to be written to hardware specs and doesn't understand the way it'll be integrated. So inevitably your driver team can't deliver on time because they need to work around nonsense.
And it feeds back. The nonsense currently being discovered by the driver folks doesn't get back to the hardware team in time to fix it, because they're already taping out the next version. So nonsense persists in version after version.
Shift left is about trying to fix that by bringing the later stages of development on board earlier.
Shift Left is about doing more work earlier in a process for efficiency -- it sounds like you at your previous employer were shifting right instead?
To me that makes sense, you'd want stuff out the door asap and see if it can work at all, not waste time on unproven ideas.
No evidence that most of these activities actually save money with modern ways of delivering software (or even with ancient ways of delivering software; I looked it up, and the IBM study showing increasing costs for finding bugs later in the pipeline was actually based on made-up data!)
To be more specific, let's say I can write an e2e test on an actual pre-prod environment, or I can invest much development and ongoing maintenance to develop stub responses so that the test can run before submit in a partial system. How much is "shifting left" worth versus investing in speeding up the deployment pipeline and fast flag rollout and monitoring?
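To make the trade-off concrete, here is a minimal sketch in Python (the service, client, and response shape are all hypothetical): the pre-submit, "shifted-left" test runs against a hand-written stub, whereas the e2e version of the same check would hit a real pre-prod environment.

```python
# A minimal sketch of the trade-off (all names here are hypothetical):
# the "shifted-left" test stubs the downstream service so it can run
# pre-submit, at the cost of writing and maintaining the stub.

def greet_user(client, user_id):
    """Code under test: fetches a user record and formats a greeting."""
    user = client.get_user(user_id)  # a network call in production
    return f"Hello, {user['name']}!"

class FakeUserService:
    """Hand-written stub standing in for the real service.
    This is the "shift left" investment described above: it must be
    built, and kept in sync with the real API, by hand."""
    def get_user(self, user_id):
        return {"id": user_id, "name": "Ada"}

# Pre-submit test: cheap and fast, but only as accurate as the stub.
assert greet_user(FakeUserService(), 42) == "Hello, Ada!"
```

The stub's maintenance cost is exactly the ongoing investment the comment questions: every time the real service's response shape changes, the fake has to change with it, while an e2e test against pre-prod would catch the drift for free.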
Nobody I've worked with can ever quantify the ROI for elaborate fake test environments, but somebody made an OKR, so there you go. Far be it from us to follow actual research done on modern software... http://dora.dev
Are you referring to the IBM Systems Science claims (likely apocryphal) in the Pressman paper, or Barry Boehm's figure in "Software Engineering" 1976 paper which did include some IBM sourcing (literally drawn on) but was primarily based on survey data from within TRW?
It baffles me that anyone would continue to promulgate the Pressman numbers (which claim ~exponential growth in cost) based on... it's not entirely clear what data, as opposed to Boehm's paper which only claims a linear relative cost increase, but is far more credible.
Last rant: everybody is testing in production, but only some people are looking at the results. If you aren't then there's better ROI to be found than "shifting left" your e2e tests.
Sounds like a similar/parallel thought to the waterfall project-management paradigm, whereby the earlier you get things correct (left), the less costly it is in the long run; conversely, if you have to go back and re-do things later on (right), you're in for a shock (in cost, time, or quality).
Funny how "correct" is not associated with "right" here. The usual association is reflected in the Haskell Either datatype for describing computations that either run into some error (Left error) or run successfully, producing a value (Right value).
I’ve come to the conclusion that Kent Beck got just about everything right with Extreme Programming. There’s just not a formal training industry out there pushing it so many more people are selling and invested in other approaches with a lot of warts.
Except it doesn’t work. What does work is domain experience. You get this by iterating quickly.
Thanks for the explanation.
I have always called this "front loading" and it's a concept that's been around for decades. Front loading almost always reduces development time and increases quality, but to many devs, it feels like time-wasting.
> The concept is that, if you do needed things earlier in a product cycle, it will end up reducing your expense and time in the long run, even if it seems like it's taking longer for you to "get somewhere" earlier on.
Isn't ignoring the early steps that could save time later also known as false economy?
Ignoring earlier steps is the basis of Agile.
It's only a false economy if they are the correct steps. If it turns out that they are wrong, it's a very real one.
I had management who were so enthused about "shift left" that we shifted far left and broke the shifter, or perhaps mis-shifted. Now we spend so much time developing test plans, testing, and arguing about the PRD that we actually deliver far more slowly than our competitors.
So “shift left” is roughly equivalent to “tests first” or “TDD”?
Sounds like the bottleneck concept from Goldratt’s The Goal
"a stitch in time saves nine"
"prevention is better than cure"
[ante-] "closing the stable door after the horse has bolted"
Do the hard things first?
What helped me finally remember "left of what, exactly" is to imagine a timeline, left to right. Shifting left is moving things earlier in that timeline.
In my org [1], "shift left" means developers do more work sooner.
So before product requirements are clearly defined, we start the build early on assumptions that can change. Sometimes it works, sometimes it doesn't, which just means we end up net neutral.
But an executive somewhere up the management chain can claim more productivity. Lame.
[1] I work at a bank.
I think that's a myopic view. Getting something, anything, into the hands of your potential users that's even vaguely in the ballpark of a solution-shaped thing gives you extremely valuable information, both on what is actually needed and, more importantly to me, on what you don't have to build at all.
I consider it a success when an entire line of work is scrapped because after initial testing the users say they wouldn't use it. Depending on the project scope that could be 6-7 figures of dev time not wasted right there.
Aka A stitch in time saves nine.
We seem to be hell bent on ignoring age old common sense repeatedly in our work while simultaneously inventing new names for them. Wonder why? Money? Survival? Personal ambition?
These concepts are not worth the paper you wipe your backside with unless you have a manager and a team that cares.
I'm in chip design, and our VPs and C-level execs have been saying "shift left" for the last year. Every time, someone in the Q&A asks, "What does shift left mean?" No one talks this way except executives. We just joke "The left shift key isn't working on my keyboard" or some other nonsense.
In software shifting left often also means putting more tasks into the hands of the person with the most context, aka, moving deployment from ops to developers.
The main benefits you get from this are reduced context switching, increased information retention, and increased ownership.
But it has to come with tooling and process that enable it. If you have a really long deployment process where devs can get distracted, they will lose context switching between tasks. If you make every dev a one-man army who has to do everything on their own, you won't be able to ship anything and your infra will be a disjointed mess.
The key thing is reducing toil and increasing decision-making power within a single team/person.
From the sound of the article they might be just squishing two teams together? What's the advancement that made the two steps be able to happen at the same time?
As soon as I saw “shift left” I knew I wanted to double down.
This kind of joke might multiply…
Bingo. I started reading the article and found it to be packed with jargon, buzz words, and the language of breathless prophecy and faddish hype. I couldn't hear the point of the article over the blasting sirens of my BS detectors, so I just stopped reading. Seriously, is there anything in the article worth the effort of slogging through all of that business-school/project-management shorthand/nonsense?
Way to manage up.
> Optimization strategies have shifted from simple power, performance, and area (PPA) metrics to system-level metrics, such as performance per watt. “If you go back into the 1990s, 2000s, the road map was very clear,”
Tell me you work for Intel without telling me you work for Intel.
> says Chris Auth, director of advanced technology programs at Intel Foundry.
Yeah that’s what I thought. The breathlessness of Intel figuring out things that everyone else figured out twenty years ago doesn’t bode well for their future recovery. They will continue to be the laughing stock of the industry if they can’t find more self reflection than this.
Whether this is their public facing or internal philosophy hardly matters. Over this sort of time frame most companies come to believe their own PR.
Intel has had a few bad years, but frankly I feel like they could fall a lot lower. They aren't as down bad as AMD was during the Bulldozer years, or Apple during the PowerPC years, or even Samsung's early Exynos chipsets. The absolute worst thing they've done in the past 5 years was fab on TSMC silicon, which half the industry is guilty of at this point.
You can absolutely shoot your feet off trying to modernize too quickly. Intel will be the laughingstock if 18A never makes it to market and their CPU designs start losing in earnest to their competitors. But right now, in a relative sense, Intel isn't even down for the count.
Imagine the world of software if your editor, compiler, virtual machine, etc. each cost a million dollars a year per programmer.
This is reality in VLSI CAD.
Now you understand why everything in hardware engineering is stupidly dysfunctional.
We don't need "shift left". We need tools that don't cost a megabuck.
Then stop paying Cadence and Synopsys to screw you over. Fund and support open source tools, demand open PDKs.
Is OpenROAD on the right path? https://today.ucsd.edu/story/open-source-semiconductor-chip-...
The shift right parts of the article are more interesting...
Can we please stop naming things after directions? I don't want to shift left/right/up/down and my data center has no north/south/east/west. Just say what you actually want to say without obfuscating.
I guarantee your data center has a north, south, east and west. I'm willing to bet that the north, south, east and west of your data center are the same as the north, south, east and west of the office I'm sitting in!
Chip design is full of ridiculous terms that don't mean what they say. You kinda just have to go with it, or it will drive you mad.
Yeah, should be "shift inline-start", for better i18n.