I’ve been an independent consultant for a long time now. Over the last seventeen years I’ve worked for dozens of different clients. In that time it’s been interesting to watch how good practices have slowly permeated the industry. These days, when I start working with a new client there’s about a 50% chance that they will have some kind of Continuous Integration environment in place. Over the next couple of years that percentage will, no doubt, increase and CI will just become part of the standard software development toolkit. Those of you thinking “but it’s already part of the standard software development toolkit” should realise that not everyone is as leading edge as you are.
For example, before CI was on the scene, unit testing was the big new idea. Over the last ten years, the percentage of clients where I’ve seen unit tests being used has gone from about 10% to pretty much 100%. For most software developers, the idea that you would develop any large project without unit tests seems ridiculous. But it wasn’t always like that.
Before that, it was source code control. When I first started out in my career I had a number of clients where I spent a lot of time persuading people of the benefits of source code control. At one large bank in the City of London I was charged with getting all of the development teams in one department to use source code control. It was probably SCCS or RCS – either of which is just barely better than nothing. One of the development team leaders was particularly hard to persuade. At one point he told me:
I understand exactly what source code control is for. But it solves a problem that my team just doesn’t have.
I didn’t really understand that. He had a team of three people. They all worked on the same codebase. How was it possible that source code control wouldn’t make their lives easier? Later I worked more closely with that team and came to understand their working techniques as I found a directory of tarballs with datestamps in their names.
Of course, this is all ancient history now. In these enlightened times we can laugh at stories like this because we all know how important source code control is.
But look at this job description which was posted on jobs.perl.org a couple of weeks ago. There are a number of things in this advert which worry me – “raw perl (no modules)” – but I think the thing that scared me most was where it says:
You must hate version control systems, we won’t be using any.
I’m not sure what’s the most surprising thing here – the fact that there are people who still think like this, or the fact that they admit it in a job advert because they think it will encourage people to apply for their job.
All in all I don’t think that Holophrastic Enterprises sounds like the kind of place that I’d like to work. You might disagree. You might think that cutting through all this “best practices” nonsense and just getting on with coding sounds like your perfect job.
If you apply, please let us know how you get on.
Me, I’ll be sticking with version control.
It can’t be coincidence that this was also the team leader who complained the most when the infrastructure and deployment team I worked in took the decision to remove developer access to the production servers.
Also published on Medium.
I am surprised you have such good experience. I keep bumping into companies and teams where they don’t do automated testing and of course CI is irrelevant in that case. Even version control is new to some companies I visit.
Regarding the specific case of HE, I thought this was a joke.
There may, of course, be some selectivity involved in the clients that I work with 🙂
I’d love to hear a talk about what it’s like trying to convince people to pick up version control and testing. If not a talk, then I’ll buy you a beer if you tell the story.
Surely that ad was a spoof.
Great read, as usual. I honestly agree but don’t (all at the same time), and even though most people know I love to be contrary, I can think of a few very real situations where code could be created confidently without testing and VCSs.
I myself swear by versioning and writing tests. FWIW, the source code at the Holophrastic Enterprises website is an obvious indication of the level of development you’d be working/dealing with.
The steep learning curve is presumably reading undocumented code from some ex-employee’s $HOME. And as an advert for the “internal and proprietary web development platform”, that front page speaks volumes….
Here’s a great example of the steep learning curve. Just now, I opened a file for a project that I haven’t seen in ten months. Client called, wanted a small change. I went to the page, it told me the files, I opened the correct file, I found the text to be changed. That text was in single-quotes (as opposed to double quotes, or unquoted). Double quotes means that I can change the text willy-nilly. Single quotes means that this text is tied to some other text elsewhere, and needs to be changed in multiple places (it’s an r-value somewhere, for example).
I searched for it, found it in three places, changed it as the client wanted, and was out. It took four minutes, and I made $35. Much more importantly, absolutely nothing went wrong. Had I not remembered that single quotes meant something, I’d have changed it in one of the three locations, and broken random code somewhere else in the project. That would have been arduous. Either it would have taken me more time to fix bugs that I may never have found, and their cascading corruption, or I’d have had to test my 4-minute effort, making it 20 minutes, because in this case the other two locations were way on the other side of the project — I’d have had to retest the entire project. Even with a barrage of unit tests, that would have taken 10 minutes, and I hope your unit tests in this case tried every single possible value, because I only changed one.
And if you’re thinking that I could have written the code to have this and a thousand other strings in a single location, I certainly could have. And then every time I needed to add something new, I’d be in twelve places, instead of in one. And the code wouldn’t be legible because values wouldn’t be with functionality. It’d work, but I hate that.
What I want is what I got. I go to the code that renders what the client is seeing, I see in code exactly what we both see on-screen, it says font-weight:bold, and it says iterate through:.makeArray(‘happy’ ‘sad’ ‘angry’) and there’s no doubt that I’m in the correct place, and changing happy to miffed is that easy, and requires no testing because the code indicates how it is to be changed.
It’s a small thing, but it’s one of a few hundred convention-style rules that need to be learned. That’s the steep learning curve.
I had a brief exchange of mails with the poster of that advertisement, after I suggested that he might like to talk to Toronto PerlMongers about his framework.
The first phase was “a) Who are PerlMongers? b) Why should I tell anybody my secrets?”.
I explained (with a link to the web site), and suggested that he might like to come to a user group anyway; there’s always something to learn.
That produced (basically) “Your site sucks, I’m not a student, I’ve nothing to learn”, at which point I gave up.
Checking Google for “Holophrastic Enterprises” produced a first page with a link to one of his invoices, in his Web directory hierarchy, (!). Great security there, dude.
“Holophrastic” refers to a stage in infant development, about the age of 2, where a single word is used to express a complete concept. http://www.thefreedictionary.com/holophrastic
It seems appropriate to his social skills, and the word I have in mind is “Jerk!”.
I think that fact just overtook “no version control” as my favourite fact about that company!
It’s interesting that one of three buzzwords on the front of the site is “accountability”, which one certainly can’t have if the development model is “Dump everything in someone’s home directory.”
That’s actually my job posting — as in I own and operate Holophrastic Enterprises Inc., and I both wrote the job posting and am hiring for it. Obviously a comment here isn’t the appropriate place for any sort of lengthy conversation, but if you’d like to have a more in-depth conversation regarding why source code control has failed me too many times for me to ever return to it, or all of the things that it simply cannot do but which I believe benefit workflow, I’d be happy to speak with you in greater detail. It’s not about “just getting on with the coding”, actually; the objective is much different.
If it means anything, I actually very much enjoyed your article for what it is. I’ve actually had exactly the same experience as you with other companies — those that do but don’t realize they do, as well as those who really ought to.
The real difference here is that pipelines was designed, from scratch, to curtail many other problems, and along the way its solutions made source control into something much worse than it already was. And secondly, admittedly, a given project in my business doesn’t have a team of six concurrent programmers — it has one or two working independently or together. So many of the real-world advantages of source control are entirely inapplicable to my business.
Anyway, I’d enjoy discussing this further, if you’re actually interested. I know I am. Just last year I spent a few hours with Joel of Joel on Software, at his seminar on distributed source control, in an effort to find a way to make it work for me. It doesn’t.
Thanks for getting involved in the conversation. You won’t be surprised to hear that I find your arguments unconvincing 🙂 As I said in my post, I’ve worked for dozens of clients over the last seventeen years and I can honestly say that I’ve never worked in an environment where a version control system hasn’t improved things immensely.
That’s not to say that such environments don’t exist, of course. Perhaps I’m (either consciously or subconsciously) avoiding clients whose methods don’t match my own.
You’ve made a lot of claims about the power and ease of use of your Pipelines product. I’m sure I’m not the only person who would be interested to see some sample code. Do you have a small demo project that you could show us?
Perhaps you’d consider giving a presentation about your product at a local Perl Mongers meeting. Or maybe at a conference like YAPC. If your product is as good as you say it is, then the Perl community would love to hear more about it.
I’d love to give a talk on the reasons I’ve made pipelines totally different than other platforms, and the reasons that I’ve built my business to achieve a service unavailable from most others — by compromising many things that others never would. But I really don’t have the time.
You’re talking to someone that you’ve discovered because he’s trying to increase his staff. And you’re asking him to spend at least a day preparing and at least a day presenting. I’ve got meetings and work coming out of my ears.
More than that, the reason the infamous line appears in my posting is because I recognize that so few programmers are interested in this sort of coding style. And you’re asking me to present mostly to those who are uninterested from the start. That’s not a receptive crowd, and more than that, the goal of walking away with someone interested in joining my fight is unlikely to be achieved.
It’s not a waste of my time, but it’s a time cost that I can’t afford today.
So here’s what I’m going to do. I’m going to try to convince you not that pipelines is good, and not that my style is worth anything at all. I’m going to point out the many sacrifices that you make for source control. And hopefully you’ll watch yourself make them going forward. As a technical person, you believe two things: problems can be solved, and problems are worth solving.
So I’m going to point out the problems with source control, and you can observe that they are worth the expense, or you can believe that you can solve the problems, or you can believe that I’ve solved the problems. Either way, whether I’ve done it or you’ll do it, there are benefits to not using source control.
First, keep in mind that I have no solution for a large team of programmers working on a single project. I also have no solution for a supervisor to see which programmer has programmed what over the course of the last month. So for large efforts, or for management purposes, I haven’t even tried. They simply aren’t my concern.
My problems with source control have been the following, and it’s been true from CVS all the way to Mercurial. They simply don’t track code changes. They track file changes. But I decided long ago that I wanted to build sites without being stuck to one page, one file. One of my files covers multiple web pages; typically I have one file per site “system” or sub-system. That’s also a lie. Because one file tends to be the client-specific, project-specific code for one sub-system. All of the generic code, or my way-of-doing-things, winds up being up a level, either in a different template file or in an engine file.
I designed pipelines to constantly evolve, so code moves from a simple hack for one project, to a general hack for one client, to well-structured code for one client, to totally generic code for any future project, to project-specific code in the engine, all the way to generic code in the pipelines engine running fully automatically and never needing to be typed again. As a dumb example, showing press releases can be, at the lowest level, a select statement where published < NOW(), if the table being read has a published column.
So the point is that the same code, virtually identical, didn’t get changed in the file. It moved. From the news.project.tmpl file, to the utilities.project.tmpl file, to the utilities.client.tmpl file, to the news.tmpl file, to the db.client.pl file, to the db.pl file, then to my sandbox from which all new projects are installed.
Source control very quickly shows me adding code and removing code. Which is meaningless. The code moved. And it doesn’t show that. Similarly, my templates are chunked into parts. The order of the parts matters to humans, but not to the functionality. And, of course, source control says that the entire file changed when I shift code around.
Especially when I shift a block of code inside a chunk into a simple function-call equivalent elsewhere in the same file. The entire thing didn’t change. I didn’t add a new component — I did, but I just broke it out.
So source control doesn’t show movement. Which means it’s a strobe light to my code. That’s terrible.
Next, there are always conflicts. There’s no way to get around them. Source control presents conflicts at the end, and that’s just too late. In my case, a merge conflict in one file doesn’t help, because code changes files ten times a day. The conflict resolution can’t just be on the conflicted file; it needs to consider all of the files that changed — which over the course of a week could be all of them.
So if my code were arranged in files according to Visual Studio, it would work better. And then I’d be stuck needing to arrange my code to satisfy my source control. That’s a no-no. I arrange my code according to how I work; that’s the rule.

So in the end, the primary goal of pipelines is to be able to have a client with a working site, and to have them come back to me two years later and want a change. A major upgrade to something that I haven’t seen in two years. I code differently now, pipelines has been upgraded about thirty times, and I’m staring at old code. I need to have about six options: easily upgrade pipelines or choose not to; easily read the old code and know how isolated it really is; and be able to quote the work instantly, within 5 minutes.

And that’s the key. Cheap work needs to be easy; expensive work can be difficult. I can easily tell a client that the big change they budgeted $8,000 for will take me two weeks to do. But the small change with no budget needs to be instantly achievable without my wasting time — so I can do it for free, and win them over.

That means every programming task needs to be achievable in three ways. The first is a quick hack that results in code fully isolated, that will work forever, but can’t be upgraded — never hack the hack, that’s the rule. Such hacks take mere minutes to do, and can achieve virtually any functionality that the client desires. Not only does this meet the “we promised our customer it would be there right away, we forgot to tell you” scenario, but it also means that the client can try something without paying much for it. I get more money that way.

The second way is to write proper code that will work forever but may not be integrated with future features. This usually takes a few hours or a day to do, and it’s the most effective option in terms of customer cost. You’re familiar with this one.
The third is to build the code such that it’s fully tied in with every feature that pipelines has today or ever will have in the future. You’re likely not familiar with this one. But to write code to support anything that ever happens at the engine level means full integration for the future. Now it takes weeks or even months to do this, and it’s way more expensive, and rarely worth doing for a single client. But the payback is incredible and the functionality is really cool. Things like: now every pipelines site can be multi-language throughout, and the client need never provide translations for anything, nor even speak the language; pipelines itself just sends content to be translated to my translators, accepts it back, and clears caches and everything all by itself, and I just charge the client by the word, even though pipelines consolidated identical phrases all by itself. It’s free money — after lots of hard work.

Source control ruins most of that, because it’s not designed to support the human networking that I do with my clients. It does what Photoshop does — it makes hard things easier and it makes easy things harder. That’s not good for business with my clients.

It’s worth noting a few things about pipelines. Pipelines executes very slowly. It does so for three reasons. First, it makes development wicked fast by letting the developer program the programming, if that makes any sense. And while most of its work there could be done at template compile time, forcing the developer to specify what can and can’t be done ahead of time would be confusing, especially because new functionality for the client would change that dynamic. Things aren’t allowed to break when new things are added, so that’s a no-no.

The second reason that pipelines is slow is because it’s fast enough. Between 0.8 seconds for a page load and 0.1 seconds for a page load, my clients care but won’t pay more for faster. They’d rather I do other things for them.

The third reason is that pipelines assumes that every page of every project can be as complicated as the most complicated page on the web. Which means that it starts off as slow as it will ever be. The nice part is that I never need to worry about a page getting slower when I add new features. The bad news is that it takes about 0.4 seconds for pipelines to load a blank page today. I don’t care, because I can always throw it onto faster hardware. I really don’t care, because on the list of future upgrades is a quick FastCGI shift where I can keep pipelines loaded in memory and not only save all of that time but benefit from it. It’ll take about three days to upgrade pipelines for FastCGI, and it’ll just suddenly work on every version of pipelines from the last five years, for every client project that I choose to propagate. If you ask why I haven’t already done it, then you’ve missed my previous line — no client cares enough to pay for it.

Pipelines changes, a lot, from project to project. Anything that a project does for the first time eventually gets elevated up into the platform. It’s awesome that way. In the end, pipelines winds up being a collection of everything I’ve done in the last 15 years for any client. I’ve never met a platform that evolves during each and every project before. I’d never want to have the same work required the third time I do something.

This got way too long way too fast. In short, it’s about making my work effort easier for the tasks that come up from my clients. Source control ruins all of that, because it prioritizes the wrong things, and it puts up roadblocks at inopportune times.
A bit of a warning here: You say you budget three days for upgrading to FCGI. However earlier you said “Not only is everything global”. I’m not sure if you’re aware of this, but globals persist between FCGI requests, so you’re very likely to run into issues there and will probably have to rewrite pretty much everything that creates or accesses globals; OR create a massive resetter function that will need to capture every single global.
So, you’ll probably want to budget a lot more than three days.
When I said global, I meant the concept, not the scope. All of the pipelines data is stored within the global variable $oPipelines. Inside are templates, session stuff, human stuff, and everything. So that massive resetter function you mentioned is simply undef-ing a few hash keys. Actually, some of those hash keys will be written to a file for quick retrieval — like for the current human, who’s likely to show up again soon.

Actually, persisting global variables is exactly the reason that I want FastCGI; it’s actually the only reason.

So the entire upgrade is spec’d out, and has been for a year. As soon as I have the excuse to spend the three days, it’ll get done.
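The resetter pattern described above — one warm global object that survives between requests, with a handful of request-scoped keys dropped between hits — can be sketched in a few lines. This is an illustrative sketch in Python rather than Perl, purely for brevity; the key names simply mirror the ones mentioned in the comment (templates, session, human) and are otherwise hypothetical:

```python
# A single global state object, standing in for the $oPipelines hash:
# loaded once when the FastCGI-style process starts, then kept warm.
PIPELINES = {"engine": {"version": 42}}

# Keys that belong to one request only and must not leak into the next.
REQUEST_KEYS = ("templates", "session", "human")

def reset_request_state():
    """Drop request-scoped keys; keep the warm engine-level state."""
    for key in REQUEST_KEYS:
        PIPELINES.pop(key, None)

def handle_request(name):
    """Simulate one request: populate per-request state, respond, reset."""
    PIPELINES["human"] = {"name": name}          # request-scoped data
    greeting = "hello " + PIPELINES["human"]["name"]
    reset_request_state()                        # must run before the next hit
    return greeting
```

The point of the sketch is the warning from the earlier comment: without `reset_request_state()`, the second caller would see the first caller's `human` entry, because the process (and therefore the global) persists across requests.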
Alan Rocker, thanks for the heads-up about the invoice making it onto Google. While I can’t do much about my client giving an internal link to Google, nor Google choosing to crawl links within my client’s e-mail — if they used Gmail — I have changed my client’s passcode, so you won’t find that invoice through Google anymore. Unfortunately, sending URL-based passwords is still the easiest way to get invoices to clients; if they choose to publish that invoice inadvertently, they’re able to do so. But I appreciate your pointing it out nonetheless.
Matthew, if you like version control, and home directories, and open source platforms, by all means enjoy them. I’m not asking you to give them up. I’m not even telling you that you should be giving them up. I don’t even think that you should. I’m simply saying that I didn’t like them, that I hated that part of the job when I worked for others. And so when I started my own everything, I decided to get rid of that aspect of the job.
So I chose to design a platform where the code was very legible, where it would be difficult to write and really easy to read. there are no home directories, there is no other-employees-code. It’s a very different way of programming, and it’s totally a terrible idea for most software and most web-sites. And it’s excellent for the business that I’ve built. But that’s what custom-designed-from-scratch is all about. You can sacrifice absolutely everything that you don’t want, in order to benefit from the things that you do.
So if I told you that pipelines results in code that doesn’t need to be documented, because not only does it read like english, but it guides you through the coding such that there really is only one way to code something — all of the other ways become very difficult very quickly — then you likely won’t believe me without seeing it. But if it is true, isn’t that a nice thing?
And if I tell you that I abhor unit testing, in part because it’s just annoying, but more so because when clients drastically change their objectives, and I re-write the code to match, I’d need to re-write the unit tests to match as well, then you won’t believe how drastic those changes are. And if I tell you that those unit tests will fail when the code changes, and not fail when it doesn’t, and hence won’t actually help me going forward, then you’ll tell me that I’m not organizing my code to re-use functionality properly. And here, I’ll tell you that you’re correct — because I don’t believe in re-using project-specific business logic in multiple parts of the code, because invariably my client changes one side without changing the other, so I’d rather they be split from the start.
But none of this matches what you’ve seen with your clients. So since I’ve spent a few hundred thousand dollars on my solutions to my mistakes over the last 18 years in this business, I’ll gladly suggest that you’ve done the same. And with different problems and different mistakes, and totally different clients, I’d imagine that your solutions are understandably different from mine.
Alan Rocker, you’re being unfair when you paraphrase our prior discussions. I blew you off because you contacted me as someone applying for the job. You e-mailed jobs@. I was upset because while I had allocated time to deal with the hundreds of e-mails and applications that I received, you wasted my time. Had you contacted me by phone, for example, I’d have found the social time to talk with you as friends. My telephone contact is clearly available on my site. My e-mail is not, because people pay me for the right to e-mail me. I call them clients. They call themselves clients too.
Alan Rocker, back to the Google invoice thing: there actually is an easy way for me to improve the problem long term. I could ensure that pipelines refuses to display any database information from the invoice table to any visitor with a referrer from any search engine. That’s about, oh, 30 bytes of pipelines code. It’s not worth my caring about in this case, since the only thing that happened was that a client published their own public contact information, and a random dollar amount, but it’s a valid solution nonetheless.
It’s also a really cool upgrade to pipelines to be able to limit tables and records based on referrer, or any environment calculation — some records may only be retrievable for mobile browsers, for example. So I’ll add it to the list of solutions to non-existent or unimportant problems, for a future scenario.
Thanks for the inspiration, that’s pretty cool.
Or you could just emit the headers that stop search engines from indexing pages on the sensitive parts of your site. Or, perhaps more simply, you could use robots.txt.
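Concretely, either suggestion amounts to a line or two; the `/invoices/` path below is a hypothetical example, not anything taken from the actual site:

```
# robots.txt at the site root -- asks compliant crawlers to skip a path
User-agent: *
Disallow: /invoices/

# or, emitted per response, an HTTP header that tells engines not to index
X-Robots-Tag: noindex
```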
Actually, on principle, I refuse to use such search-engine-instructing devices. I’m not going to start following their rules for how I should build my sites. There’s no end to that. Each engine can have many different rules. Crawling is their business, not mine. If they don’t know how to crawl a site (built properly with normal links), then I’m not here to help them. In this case, Google indexed a link that doesn’t exist anywhere but in an e-mail. That’s their problem, and it’s not mine. It embarrasses them, and not me.
Quite frankly, it’s borderline illegal for Google to republish my site content anyway without my permission, especially for profit. Snippets may be small enough to count as fair use, and deep-linking isn’t something I’ve ever considered a problem. But my using a robots file is my becoming complacent. If I start following their rules, I can never legally fight them. By not having a robots file, and not instructing them in any way, I have no relationship with Google, which means that I get my full rights under the law — should I ever need to exercise them.
This invoice is a good example. Had it been something truly damaging to have published, I could have sued my clients for giving it to Google, which really was a violation of our mutual NDA. And had they then said it wasn’t them, it was Google who took it from them without permission, then they could have sued Google in turn.

But had I a robots file, my client could have turned to me and said that I instructed Google what to take, and therefore what not to take, or vice versa. The result would be that I’d be at fault for having forgotten to add a line to a robots file when I built my invoicing system now 16 years ago, long before Google.

Now obviously I’m not going to sue my client, and I’m not going to win a suit over Google about something like this. That’s not the point. The point is that whoever it is who’s suing me for divulging the information won’t win either. More importantly, I will not have been the one to breach the NDA. It’s an accountability thing.
So, you make/host sites for people, and don’t do anything related to SEO for them? Not even basic sitemap updating to ensure Google and Bing know about their sites?
Google and Bing don’t need sitemap updates if your sites are built properly. My clients’ sites tend to get crawled a few hundred times each and every day, all without my worrying about a sitemap. Pretty neat, huh?
OK, new plan: as I do things today, here’s where source control would have hurt me. I just spent twenty minutes with a client on the phone, tweaking the functionality of a page, back and forth, about thirty times. Done is done; they get to access the live working version. In part because they aren’t interested in accessing a development version, and in part because once they say it’s done, they expect it to be live instantly on their next call up their chain. If I had mirroring to do, or committing to do, and any problem happened along the way, I’d have told them it was done and then it would have been instantly not done. That’s a lie in the business world, and it’s never rewarded. That’s an IT problem that should never affect the client’s day. Instead, I got a lot of gratitude, as this client for the umpteenth time said that I’m the only programmer he’s ever met who can make changes on the phone in real-business-time. And I get more work for it.

Again, worth noting that this is not the better technical option. It’s not safer, it’s not more secure, it’s not more stable, and it forces me to be quite vulnerable. But it makes more money and makes clients much happier.

Yes, in a few days, one of us will find something wrong with the new page. Probably the client, because I won’t be looking. And he’ll call me, and he’ll say that it isn’t working right, and I’ll fix it instantly, and he’ll be happy. My clients don’t mind things going wrong; they only mind things not being remedied. So if he saw it working, and he was happy, and then it stopped working and he called and it got fixed in five minutes, that’s the accountability buzzword commented on by someone earlier.

My clients are real business clients, and they understand nothing is perfect. But they’d rather spend the money on new broken stuff that will get fixed than on perfect stuff the first time. It lets them try more stuff, and they trust that it’ll be right eventually. To my clients, not having the feature for longer is worse than having a potentially broken feature sooner — provided that it will get fixed as necessary.

It’s a different world. And source control would have, or could have, just ruined my client interaction today.
I just tried to make a change to something that I was building, and it didn’t work as I expected. Instead of sitting back and thinking about it, I threw that particular mailing (an application notification) into a debug mode to send all e-mails to me, instead of to the intended recipient. I then threw tracing code into 15 different places in pipelines, and quite easily saw everything that I needed to see. Now I could have done this in a way that only affected me, or I could have done this in a way that was very temporary. But instead, I did it on the live site.
Yes I risked getting some debug information from public visitor hits, but this page isn’t popular yet so I didn’t care. And no, the public will never see debug messages, what kind of stupid platform shows debug messages to the public — right, most of them, but not pipelines.
so in twenty seconds, pipelines laid out exactly what I’d done, and therefore what I’d done wrong. I was dumb, big surprise. So I ripped out my trace code, and the whole thing took 90 seconds.
certainly, I could have programmed differently — slower — and it wouldn’t have happened. I could have had all sorts of code to be able to track things down routinely — and then had to maintain that code.
but instead, I was able to throw code throughout the engine to find my very simple template mistake without having to care about anything. and that’s the point, I didn’t need to care. it took zero brain work, so I could keep enjoying the beautiful weather through the open window.
now just how many commits and mini sub-tasks would I have had to coordinate going through source control? I basically just changed half of the platform, and then changed it back. and why shouldn’t I have that kind of control? and why should it be inhibited by a control system at every step?
so that’s it. nothing stopped me from throwing print statements everywhere, and resolving the fact that I’m an idiot in an easy near-instant way.
What do you do when you make a live change that breaks a whole bunch of things that you aren’t aware of until a few customers down the line run into it? Do you have to go back in immediately and find out what’s broken, and why? How do you deal with that when revision control systems were specifically made to handle that gracefully and speedily?
Well, there are three answers to that. First, because pipelines template code is written with a certain amount of isolation — it’s all about the walls — changes made have very limited scope. Pipelines requires the developer to test (actually execute) the code that was changed before it’ll use the file publicly, so those types of mistakes are shielded. The core is even more restricted in scope, in that one line of code rarely has any effect more than a hundred lines later. On top of that, the errors that pipelines produces for syntax, semantic, and security problems present enough information to fix things in seconds.
But you’re talking about real breaking. You’re talking about I changed a few hundred things, it’s all wrong, and I need to roll things back.
Obviously, there are nightly backups. Grabbing the file from yesterday is never more than six clicks away. But that’s usually only necessary when your editor blows up the file — which happens from time to time.
The side you don’t know is that pipelines is built on what I’ve coined “atmospheric programming”, which is going to take some describing. If you imagine the difference between an object-oriented, fully scoped world and a flat file where everything is a global variable, swing the pendulum even further in the flat direction. Not only is everything global, but everything’s structured to be accessed from everywhere.
This is important, because it changes the types of things that go wrong, and the way in which things break.
So, for example, some part of the pipelines engine, all by itself, loads the current human (usually the logged-in user). It does it whenever it needs to, handles the caching (poorly at this time, by the way), grabs permissions and relations and histories and whatnot. Once pipelines has done that, it’s just always there.
So the rest of pipelines, including the template code, can just pluck anything about the human right out of the air. That’s why I call it atmospheric programming. It’s just floating there to be used. When a line of code wanting human information is executed, pipelines loads whatever it needs to, and boom, there’s the human information.
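One way to picture the “atmospheric” idea is a global context that loads values lazily on first access and then keeps them in the air for everyone. This is a sketch under assumed names — `Atmosphere` and `load_human` are illustrative, not pipelines’ real code:

```python
# Minimal sketch of lazily loaded, globally accessible context.
class Atmosphere:
    """Load each value on first access, then keep it 'in the air'
    so any later code can pluck it out without arranging access."""

    def __init__(self, loaders):
        self._loaders = loaders  # name -> function that fetches the value
        self._cache = {}

    def __getattr__(self, name):
        # called only when normal attribute lookup fails,
        # i.e. for the atmospheric values themselves
        if name not in self._cache:
            self._cache[name] = self._loaders[name]()  # lazy load, once
        return self._cache[name]

def load_human():
    # stand-in for "load the logged-in user, permissions, history..."
    return {"name": "alice", "permissions": ["edit"]}

air = Atmosphere({"human": load_human})

# Any later code, anywhere, just reaches into the air:
print(air.human["name"])  # -> alice
```

The first access triggers the load; every access after that hits the cache, so callers never have to know or care whether the value was already fetched.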
The same is true for database information, cookie information, cgi information, caching tools, debug info, trace info, mailings, e-commerce purchases, that pdf being generated (I’m just listing my engine files now), etc. It’s all just there to be grabbed.
Similarly, because all of the front end of pipelines is written in pipelines, this atmospheric structure makes it all the way into the business logic front end that you code every day. So when I sql selected something somewhere in the template, it’s available everywhere after, by anything, including the engine, the mailer, the debugger, the tracer, and the rest of my template code.
That’s the big part of the isolation. But it’s upside-down from everything else I’ve seen. The scope of the variable isn’t isolated at all. But the code to get it is very isolated.
So when you talk about the things that go wrong, in a big way, during development, when it’s not a dumb and obvious syntax error, it’s that you didn’t select it properly from the database, or you didn’t output it properly to the screen. It’s not both.
Actually, 60 minutes ago, a long-term client said that a known user wasn’t showing up in a search result and should have been. I went to the spot that determined the results, and saw that the intended human was found. I went to the spot that output the results and saw that it was limited to 500 matches, so I removed the limit and I was done. The limit was there for speed before (it took them three years to reach it), so I re-programmed the block to run faster. The whole effort took five minutes, with no pressure.
The world of revision control would really be nice, and this is why I keep looking at it every year, when I back-propagate. As a business principle, I keep every client (not project) on a different code base. Each client has their own pipelines installation (56 pl and tmpl files and a mysql schema). So as I develop a project, and pipelines evolves, it evolves for that one client only. Every time I make a change to a core pipelines file, I note the change in a dumb text file. Every few months, I spend about three hours and go through each change to push it back to my sandbox pipelines (from which new installations are taken). The best part is that it’s the perfect opportunity to effectively code-review what’s happened, and it’s late enough that follow-up changes have already been done, so those evolutions are known-stable. When projects go through a major upgrade (with major dollars), I push the new pipelines over the old pipelines — follow my list of instructions on any changes which may be required to client templates or data — and move on swimmingly.
The whole back-propagation process is an afternoon every quarter. Forward propagation happens every few years for each client and usually takes under an hour. I’ve got it down to a science, and it offers me great business administration benefits, but it’s the most daunting thing in the world, albeit for no good reason.
The reason revision control has failed to fix this aspect is that pipelines doesn’t have versions this way. It has 75 different installations, each evolved differently, and then merged together months after the changes were made (that’s all required by the business model, and it keeps me very safe in terms of how many clients I can lose from a single mistake). And yes, sometimes two different clients were upgraded in two different ways in the same function, and I get to figure out how I’m going to merge both features. But that’s too rare to worry about.
Personally if you ask me, Pipeline sounds like a huge flaming piece of shit. You’re clearly an incompetent developer with no formal education in writing software. That’s why your platform takes 0.4 seconds to load a blank page. Version control is no more restrictive than typing one extra command every so often. It provides a window into the past, allowing you to see the motivations for writing code a certain way. That’s a pretty powerful tool.
Even the premise of your argument is crap. You say that version control only shows lines added and removed, not lines moved. Well, a diff of a single file only shows lines added and removed. Version control systems show groups of files with the lines added and removed and a commit message saying “I moved this from here to here because XXXX”. Claiming it doesn’t do what it does and then stating you’d rather use nothing is ludicrous. The ranting monologue about how you’re really efficient because you can just fuck around on the live server (since when did a VCS stop you doing that?) shows that you are almost certainly delusional. See the Dunning-Kruger effect.
actually, 0.4 seconds is really fast, considering that IBM’s websphere and similar web servers take minutes to start up. The fact that I still have pipelines starting up for each page is simply good-enough-for-now for my business.
as I said, pipelines isn’t about moving a line occasionally, it’s about drastically moving and reshuffling files constantly. there’s information in the organization of information, and that’s the point. If I type that comment of yours every time I move some lines, all you’re going to see is that your diffs on groups of files say that files were created and removed thirty times, almost every line changed each time, and nothing’s the same. and then you’ll have my thirty comments that say I moved stuff. the motivation isn’t required, because where I moved them to is the understood reason. I moved it because I wanted it over there. if you know pipelines, you know why over there is better.
more importantly, you just had me type more in comments than I did in code. so you basically tripled my work. and because it becomes a deluge of revision information, it dilutes any legitimate information.
it’s also important to note that going backwards is rarely an option, because the data and databases have moved forwards. this whole rollback concept doesn’t actually exist when client data is involved. so you’re affording me a feature that I can’t use. you can’t roll back a field in a table that’s been populated, or a conversion of every value in that field.
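The irreversibility point can be made concrete with a toy example. The schema and the lowercasing migration here are hypothetical; sqlite is used only to keep the sketch self-contained:

```python
# Once a destructive data migration has run, rolling back the *code*
# restores nothing: the overwritten values are gone from the table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (email TEXT)")
db.execute("INSERT INTO users VALUES ('Bob.Smith@Example.COM')")

# the "forward" migration: normalise every address to lowercase
db.execute("UPDATE users SET email = lower(email)")

# reverting the migration script cannot resurrect the original
# mixed-case value -- that information no longer exists anywhere
row = db.execute("SELECT email FROM users").fetchone()[0]
print(row)  # -> bob.smith@example.com
```

Version control can restore the old code in seconds, but only a separate data backup (or a reversible migration plan) can restore the old data, which is the distinction being drawn above.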
but what you say is entirely true: pipelines actually is a huge flaming piece of shit for you. it’s not the way that you code, and it probably doesn’t support the business model that you use. I’m not asking you to use it. remember, I’m not only happy here with pipelines, I’m successful with it. I don’t need your approval. if you’re happy with your tools, more power to you.
but you’re defending version control as though it finally solved every problem that existed before it, has never needed to be improved, and will never be replaced by something better. I’m telling you that I replaced it with something better for me. Let’s see if you’re still using the tools that you want me to use today, in ten years. or even five. And we’ll see if I’m still using pipelines in the same ten years or maybe five.
it’s a good bet that I’ve already been using pipelines for longer and for more projects than you’ve been using your existing tools. I’m going on 6 years now with the current form of pipelines (it had a god-awful prototype form before that, terrible syntax).
Am I the only one who found that the salary range was the most hilarious part?
I was making more than $45k CDN 15 years ago straight out of college at my first job.
$20K CDN is less than QSR or Retail wages.
Canada is ‘foreign’ for most of us, so we probably didn’t spot that. Anyway, with the Pipelines you won’t really need to be working that hard anyway … it does so much of the work for you it’s almost like working in retail or fast-food 😉
Yeah, Ontario’s minimum wage is $10.25, so a full 8 hours on 252 working days out of the year would be $20,664 at minimum wage.
I suppose if you’re aiming for the developers whose primary career options are Best Buy or Tim Hortons, it’s a good enough salary for that.
I think you’ll notice, if you read it a second time, that the salary is 20K – 40K, not 20K. you’ll also notice that the hours are flexible.
given that there are two universities within a ten-minute cycle ride, for students working part time at their convenience, 20K is well above most other opportunities available to them.
And Chisel is correct. I’m not looking for a senior programmer to design a platform. I’m looking for a junior programmer to use a platform and slowly learn how to improve that platform. For that programmer, it’s an opportunity to work on many more and many larger projects than they’ll ever get to do as a normal programmer at a normal programming job.
And if they really are available, or they choose to actually do good work from home, then the 40K puts them well into the green. For 25K in oshawa, their university and their house and their food is paid for the year, along with books and electricity and a car. Life may be a rip-off where you live, but in oshawa it’s not.
And besides, I think it’s clear that I’m not trying to attract you.
By your own admission the pipelines system that you use is basically unique to you — it even includes some new programming paradigms like “atmospheric programming”.
Do you ever worry that hiring a university student to work with you will be bad for them in the long run? They may get much more experience than in other gigs that they’re capable of attaining, but all that experience is only applicable to your business.
While it doesn’t impact your bottom line much at all, once that student goes on to work at a different company, they might find themselves completely incapable of performing their job because the whole process is so different from what they learned at your shop.
I guess when you make a piece of crap like pipeline with no understanding of what you’re doing, you don’t really need source control.
I work as a web developer /programmer at a British university. We are regarded as cutting-edge in the HE development community. The other devs and I begged and pleaded for proper version control after the big boss “took agin” SVN after a 5-minute experience with an unfinished/misconfigured SVN server, when he was able to download code he thought he shouldn’t be able to see.
The big boss then created a shared folder “code repository” and said “this’ll do ’til we can get Team Foundation Server!” at which point I vomited into my wastebin for several minutes.
Eventually, we got a verbal commitment to use Git as the departmental VCS. However, big boss wasn’t too happy about Git bare repositories (which were the only way to use Git properly on our cloud shared folder product) as he wanted to be able to read and make changes to the code. In other words he didn’t mind me and the other devs using Git but didn’t want to engage himself. He just wanted to make any changes he wanted, potentially release code and then let us crowbar that back into our version management strategy / method afterwards.
He didn’t mind there being a single-user repository on the cloud shared folder, but that doesn’t scale to more than one person.
His bedroom hobbyist approach to coding is a bit of a nightmare. OK it’s his train set but I’d really like to see some proper software development best practices round here. We’ve finally hired a CI specialist to a senior position who has cool Jenkins experience though so maybe we’ve done it.
Still might have to use evil TFS though. Microsoft are a bunch of corporate buccaneers who’ve spent decades raping the industry. Can’t they just f. off? Still, at least they added Git compatibility to Visual Studio 2012. I refuse to say that that’s “cool” – Microsoft ache to be cool, and I think their striving is pathetic – but it makes them a modicum less evil.
I’d still happily nationalise / open source the lot and cut all the heads off the Microsoft hydra, but Git’ll do for now.
Even working alone, a good VCS really helps.
You can be more fearless in deleting code, in particular.