I have always told myself, and anyone who asks, that my choice to be a computer science generalist is one made with full awareness. There are basically two paths you can take in the IT industry if you want to work close to the technology. One is that of the specialist. The other is that of the generalist.

Specialists are dedicated professionals with extreme skills and a burning desire to continuously sharpen themselves in their field of work. I have met many of them, and they are all fantastic people with true passion. Whether they work as DBAs, .NET devs, integration developers or business intelligence people, they all share the same drive for perfection.

Generalists like myself, on the other hand, tend to work in the cross-over field, making use of multiple technologies at the same time without really mastering any of them. It can be a somewhat frustrating situation, since you really need to rely on the experts, the specialists, to solve the trickier matters, sometimes leaving a sour feeling of never being able to accomplish things on your own.

Although specialists are much sought-after resources in a competitive world, I would never trade places with any of them. That is mainly for two reasons.

First of all, generalists are survivors. We don’t let any major shift in technology affect us. Our mission is to bring different tech together, maximizing the overall use. From our perspective, every piece of tech is interchangeable. That makes us a lot less vulnerable on the market. Technological shifts are extremely common today and occur ever more frequently.

Secondly, the toolbox of a generalist is grand. More so than that of a specialist. Specialists carry shiny, golden hammers, and they sure know how to use them, but when the mission arrives to saw the log in two, they are lost. Believe me, I have met a lot of developers and architects with an agenda to solve each and every business problem with their specific tool. “Hey, let’s build the ERP in BizTalk!”, “Wow, we could really create this webshop using nothing but SharePoint!”, “Modeling an invoice, you say? I’d prefer to do that in T-SQL”. In fact, I know some of my friends and colleagues would say that I act the same way from time to time, but that is not the case. What they tend to miss is the driver behind my advocacy for one technology or another. When I speak warmly about BizTalk, it is when we face heavy integration. When I speak warmly about Azure, it is when we face scalability issues. When I speak warmly about services, it is when we need to break free of the monolith.

Ironically, these misinterpretations often come from people on the more specialist side of the scale. That said, the ecosystem of IT needs both specialists and generalists. We would not survive without one another. But in that world, we also need to learn how to live in harmony.

Application Lifecycle Management. I suspect this is the first post in an upcoming series about my everyday life as a product manager and the center of my universe, namely Microsoft Team Foundation Server. My TFS pal is to me what an ERP is to corporate controllers, the CFO or the CEO. I would even go so far as to say that for us as a product development company, the ERP falls well behind the ALM system in terms of importance.

The first thing to understand is the difference between delivering “solutions” and delivering products. To me a solution is exactly that: a solution to a particular business problem or need. A solution solves one problem, and does so well, but its main characteristic is that it solves one, or at least very few, problems, and perhaps in such a tailored way that it would be of little use trying to apply it elsewhere.

A product is something of a meta-solution. A product solves more generic problems, more business problems, or fulfills more needs. But most important, it has a life. And a death. New requirements arise and have to be incorporated within the frames that make up the product. Old requirements change, and therefore the product needs to change. At some point the technology stack the product is built on has been surpassed by more innovative ways of doing things, or the businesses making use of the product have changed so much that the product must be laid to rest and replaced by something new.

Responding to those needs and ever-changing requirements without having to build a new solution for each change is what product development is all about. Doing so while also keeping time to market short and product quality high is virtually impossible without decent tooling. The tool for doing this in my agile environment is TFS.

In the next few posts I will show some neat things in TFS 2013 and how they affect my team and my work as PM, and what the platform has done for us regarding product quality.

The retrospective

Posted: 14 September, 2013 in Scrum

The first sprint has come to an end. All user stories except one were completed, the result of an amazing team effort. All ceremonies worked fine, and everyone teamed up well with the new TFS customizations we deployed before the sprint. It was also awesome that top management took an interest in the sprint demo, showing up and participating in a nice way.

Great work from everyone. Onwards now to the next sprint!

vNext – First formal sprint

Posted: 30 August, 2013 in Scrum

Things are really tightening up. I am so much looking forward to the upcoming week. The backlog grooming session set wise priorities. The planning meeting was great. All estimates are in, user stories broken down into appropriate tasks. All toolsets are rigged. The story board shines, iterations and artifacts placed where they belong. We’re all set and good to go for the first formal sprint using scrum and TFS.

The team feels committed, more so than ever, and we have all of management on board for this. I’m in the best possible place right now. So proud to be part of this team and organization.

 

This will be the first in a series of blog posts about an everyday problem I have found myself in the middle of for a while now, and about its solution. As I expect the solution to grow over time, I will try to follow up on the matter a little now and then.

The basic problem is not a unique one for people in the software development loop: version control of source code. As a company developing software products, it’s crucial for us to be able to quickly find the correct version of our source code, whether it’s about deploying the latest stable release of the system, a given version, or the vNext for demo purposes. Intermingling different versions is bad at best and extremely critical at worst. Also, the ability to quickly roll back crappy check-ins, or to find the person who made the last check-in of a component and ask them those specific questions, is highly valued in the ever faster-spinning hamster wheels.

So, you might say, what is the problem really? There are tons and tons of great versioning tools out there. Microsoft TFS, Subversion, Git, ClearCase, Visual SourceSafe, what have you. Surely we have one or more of those systems at our disposal? Yes, indeed we do. For quite a while now MS TFS has been the weapon of choice, since it integrates smoothly with both our development process (check out my earlier posts about Scrum) and our .NET-based code and Dev Studio. And that would have been the end of the story. But…

As it often turns out in real life, nothing is really as simple as that. We have some deviations which complicate the simple scenario. To start with, our main product is built on the foundation of Microsoft SQL Server. The DBMS is truly great, and I must say I have turned hatred and contempt into something better described as tolerance, and in some cases true love. Lucky for me, since all business logic is placed in stored procedures, user-defined functions, triggers and views.

Better yet, almost no tables have the technical keys you expect to find in traditional databases and relations. These keys exist, of course, but only implicitly – and (oh my) – their names are coded using a special kind of syntax and hard-to-find semantics. The system is probably one of a kind and would send the heads of the best DBAs spinning. To add further complexity, the logic is intertwined with the data itself, and very often with specific row versions of data, generations of data and so on. As a result, there is not a single DB instance holding the true view of system versions. There are several.

To be more specific, we have exactly 156 databases on 4 different servers running as of today. These 156 databases are made up of about 187,000 objects in total. Many of these databases are old and should be taken offline, others are just temporary ones which have long since passed their expiration dates. Still others are exact copies of each other, plainly dumb and redundant.

The problem really is that no one can be sure what is being used – and when. As you probably have guessed by now, these 156 databases are not under source control at all, and it is only through the discipline of our devs and consultants, word of mouth, and tedious hours with diff tools such as SQL Delta and AdeptSQL Diff that we have the slightest chance of deploying the correct versions when needed.

Also, since the situation has gotten a bit out of hand, it has now reached a point where it is almost impossible to introduce a source control system such as TFS. Even if we managed to do that, it would be very hard to tell what version is really deployed to which server at any given time. Just because a sproc is checked in to TFS does not mean it has also been deployed to one or more servers.

Step 1 ought to be: start gathering information about our 156 databases to see what is used, when it is used, and who the people committing the changes really are.
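To make that first step a bit more concrete, here is a minimal T-SQL sketch of the kind of information gathering I have in mind, run against each of the four servers. It assumes you have VIEW SERVER STATE permission and access to the databases, and the audit table DbaAudit.dbo.DdlLog and the trigger trg_audit_ddl are made-up names for the example, not anything we have in place today.

-- 1. Inventory: every accessible online database on this server and how many objects it contains.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'SELECT N''' + REPLACE(name, '''', '''''') + N''' AS database_name, '
    + N'COUNT(*) AS object_count FROM ' + QUOTENAME(name) + N'.sys.objects UNION ALL '
FROM sys.databases
WHERE state_desc = N'ONLINE'
  AND HAS_DBACCESS(name) = 1;

SET @sql = LEFT(@sql, LEN(@sql) - LEN(' UNION ALL'));  -- trim the trailing UNION ALL
EXEC sys.sp_executesql @sql;

-- 2. Usage: last read/write per database since the last service restart.
--    (The DMV is cleared on restart, so this is a lower bound, not a full history.)
SELECT DB_NAME(database_id)  AS database_name,
       MAX(last_user_seek)   AS last_seek,
       MAX(last_user_scan)   AS last_scan,
       MAX(last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats
GROUP BY database_id;

-- 3. Changes: a DDL trigger in each database of interest, logging who alters what
--    into a central audit table (the names below are hypothetical).
CREATE TABLE DbaAudit.dbo.DdlLog (
    EventTime    datetime2 NOT NULL,
    LoginName    sysname   NOT NULL,
    DatabaseName sysname   NOT NULL,
    ObjectName   sysname   NULL,
    EventXml     xml       NULL
);
GO
CREATE TRIGGER trg_audit_ddl ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml = EVENTDATA();
    INSERT INTO DbaAudit.dbo.DdlLog (EventTime, LoginName, DatabaseName, ObjectName, EventXml)
    VALUES (SYSDATETIME(),
            ORIGINAL_LOGIN(),
            DB_NAME(),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'),
            @e);
END;
GO

None of this puts anything under source control, of course, but with a log like that it at least becomes possible to cross-check who changed what against the check-ins in TFS – and to see which of the 156 databases nobody has touched in months.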

If there is one enterprise in the world today that most people love to hate, it seems to be Microsoft. When Vista was released, people cried in agony. When Windows 7 saw the light of day, it was a disaster. The ribbon GUI in Office 2007 made it impossible to get things done since someone had moved the cheese around. The DRM debate has been around since the trusted platform days of good old XP, and before that of course. Now we see it all again with all the criticism surrounding the Xbox One.

Sites built in SharePoint are slow, the performance of BizTalk is plain bad. Windows phones are nothing compared to iPhones. Azure is expensive. Lync quality is unreliable, Windows servers are ages behind their Unix and Linux counterparts. What a lousy company. With all these failures, how can they even exist?

Apparently, Microsoft is still very much alive and kickin’, and people use their software around the globe. Are people crazy, then, since everything coming from Microsoft is considered crap? Of course not. What makes Microsoft great(est) is their ability to continuously deliver a solid user experience and a good-enough level of usability across ALL platforms.

I agree that Apple (until recently) makes pretty devices such as iPads and iPhones. They are clean, simple to use and feel luxurious. But that’s all they do. I agree the PS4 is probably a more edgy console than the Xbox One, but again, Sony is not concerned with much more than products for home entertainment.

If you only look at one slice at a time, Microsoft loses every battle. If you instead look across the entire battlefield, they win the war. It is no secret that I’m a fanboy, but my dedication has only grown stronger with Windows 8. Before, I loved how smooth the transitions were when swapping between different server tools. If you have seen the MMC and the basic set of tools in the server environment, there really is no magic in products such as BizTalk, SQL Server and SharePoint. And when you get some insight into them, you start seeing how beautifully things come together.

I got the same kind of experience with the client-side tools and Windows 8 recently. Starting out with the OS on an ordinary laptop, the experience was a bit awkward. The point of killing the start menu and introducing apps on a machine with no touch capability was anything but nice. I then started using Windows Phone 8. With very few apps and a not-so-intuitive GUI, the switch from iOS was not as easy as I had first hoped. Then I got my Surface, and my eyes opened. Like the scene in The Matrix where Neo takes the red pill, I could suddenly see what unfolded behind the Matrix, or in this case Microsoft’s vision. The Surface is a fantastic device when it comes to bridging the gap between private and social, private and professional, consuming and producing, work and entertainment. It is for this device Windows 8 was intended, but the experience is stretched across more platforms, such as Windows Phone 8. Suddenly the hybrid between Metro style and desktop style is obvious. Again, Microsoft has secured the longevity of old applications while paving the road for the future.

The next thing we are going to hate is the transition to Kinect interaction, when we just wave in the air to scroll through lists of choices and accidentally delete our files when we clench our fists, cursing at the “completely unusable system”. Or, you could follow the white rabbit.

Alright, so here I am, again in an environment buzzing with development in SQL, .NET, BizTalk and all the other cool stuff. I have been formally branded a ScrumMaster, I have been informally branded a ProductOwner. I’m still a developer. I have been granted the privilege of maintaining and upholding the scrum process in our company.

Does it need to be upheld then?

First of all, to be upheld it first has to be established.

OK, but aren’t we doing scrum already then?

No, not yet. But we sure aim to. An interesting example of this is the daily scrum meetings, the 15-minute stand-up meetings every morning at 09.00. We do them. We talk about things done yesterday and things to be done today. It is not very often that impediments are mentioned. The tendency, though, is that it takes quite a while for a user story to wander across the story board to the glorious column of “done”. Can it be that unmentioned hindrances block people from working faster? Perhaps. Why don’t people mention the impediments, then? My belief is that no one really thinks the ScrumMaster has the power to make them disappear, and then highlighting them just feels pointless. The struggle goes on in silence. To change that, the ScrumMaster needs to be the sweeper, the shit-fan-blocker, the one with the ability to make problems disappear. For those of you who read my earlier posts, Arne and Börje were both eminent ScrumMasters in this sense. People relied on them, and they in turn trusted their colleagues to get done what they did best. ScrumMastership is a matter of trust, and also of the capability to remove hindrances. This we don’t do. Yet.

Another obvious sign that we are not doing scrum is the daily scrums themselves. If the ScrumMaster happens not to be present in the office one day, the team probably wouldn’t meet for the daily scrum at all. The DS is not there to make the ScrumMaster happy; its purpose is to make people aware of priorities and impediments. If you don’t meet up for 15 minutes just because the ScrumMaster isn’t present, question why you should do it at all.

One of the greatest things about the organization I currently work in is the team spirit. The members of the team have worked together for a long time now, and everyone knows each other’s strengths and weaknesses. There is a lot of joy in the work, and no day goes by without laughter. People like what they do. However, there is a somewhat individualistic view on most matters. People are used to having someone tell them what to do and when to do it. If tasks are handed out, they get done, but if a story just sits on the story board with no name on it, it might stay there for some time. The problem with that approach shows when someone puts their name on too many tags and doesn’t get them done. Since the tags carry a name, no one else cares about them. The team effort suffers.

If named tags are an extremely unscrummy way of doing things, saying your work is agile while you keep running all effort as a set of relays is even more so. The problem with this relay way of thinking is that it has its roots in mass production industry, where work flows as a continuous process and where each and every step can be analyzed and refined successfully. “If we can make the assembly machine in stage 3 run twice as fast, we can raise production by 20%. If we invest in another packing line we can get twice as much done” is not at all analogous to “If we write the system specs twice as fast we raise production by 20%. If we invest in another developer we get twice as much done”. Industrial production and software production do not have much in common, since in software we seldom do exactly the same thing repeatedly. If we do, we are actually doing something terribly wrong. Industrial production, however, has as its sole purpose doing things exactly the same way every time. Where sequential thinking is the life essence and a quality factor on one production line, it is totally devastating on the other. You do the math.

Therefore, working in small waterfalls or “scrummerfalls”, where all devs sit around waiting for the system specs to be written and all testers hang around waiting for the code to be produced, is not scrum. It certainly is not agile. Scrum is about extensive communication and getting things done. As a team, not as a set of individuals. My friends Per and Patrik knew this. The fellowship under Börje Mellander knew this. Mature teams work agile, whether they call it scrum or not.

Scrum is a way of thinking. It may sound corny, but it is. If you don’t know why you do things, there is a severe risk that you either do them wrong or not at all. Both these symptoms still occur in our organization. Let me give another example, from just the other day.

A client called in a request for a new feature. The call was taken by one of our business analysts, who immediately started to plan the work, allocating the resources (stamping the name of a developer and himself on the task to be done). In a flurry of creativity, the user story was also estimated and practically sold to the client based on that estimate. Furthermore, the story almost made its way into a sprint (or iteration may be the better word for it) that did not yet exist. Of course, everything was done with the best of intentions, and a lot of work was put into it by the business analyst. It does, however, break almost all the principles, ceremonies and artifacts stated by scrum.

A scrum way of doing this would be to collect the request and politely tell the client that we would get back to him or her shortly. As the user story is added, as thoroughly as possible, to the product backlog, the product owner is notified of its existence. If the business analyst then makes his or her case well enough, it should be really easy for the product owner to prioritize the story for the next planning meeting. The effort, however, is touched by neither the analyst nor the PO – it is for the team to decide as a collective group effort. If the effort is small enough to fit in the next iteration, it is placed there. The call back to the client would then carry a better estimate, a commitment by the product owner and a commitment by the team, together with a possible release date and demo date (the end of the sprint). If the client then says go, it gets done. If it is considered too big an effort or too expensive, then since the team has broken the story down (during the planning meeting) it is much easier to discuss partial deliveries, workarounds or other ways of delivering the feature. The same thing happens if the story is too large to fit into the sprint box. It is then broken down and delivered as small increments over several sprints.

Why, then, is this approach better than my beloved colleague’s? Well, first of all, early commitment is bad, since the commitment becomes a matter solely between the analyst and the client. Everyone else involved can fall back on saying “Your estimate is faulty”, “I have other things to do right now”, “There is a better way of doing that”, and so on, and never really commit to the deal. By making people aware of and part of the process you avoid that. Furthermore, planning it into a non-existent iteration is risky, since priorities change over time. Just because we have slack in the schedule right now does not mean we have it next month. Things important today are irrelevant tomorrow. Planning and commitment should happen as close to construction as possible. Also, making the team take responsibility for developing the feature is an assurance that it really gets done. If you pin the task to one or two developers early in the process (tagging the user story), you are not only actively neglecting knowledge sharing, you are also adding to another person’s workload without their knowledge or commitment. Further, you are actively keeping other people unaware. “It is not my name on it, why should I care?” Of course you also have to deal with the problems that occur when the tagged person cannot complete the quest. You end up in a mess where things have to be handed over, documented in excess, rewritten and so on. If you at that point have already made a promise to the client, it can be quite inconvenient to suddenly have to tell them that the feature will take more time to complete, become more expensive, or the like. So, just avoid that path.

I often hear people around me saying things like: “You know, we’re not like Microsoft or IBM, scrum does not fit us to 100%”. “We are unique”. “In our environment, things are not that simple since…”. All of them are true, of course. We are not Microsoft, we are unique and things are never simple.

The most important thing is to remember why we are trying to do scrum in the first place. We are trying to get things done, shortening the time-to-market, enhancing the quality and making the customers more satisfied. So far we are just like Microsoft, not very unique, and sticking to scrum is in that way easier than sticking to the old waterfall models since there are fewer things to grasp in scrum.

Even if you don’t embrace scrum or the agile methods, feel free to drop a post explaining to me why other ways are better.

Taking a minor detour, career-wise, a couple of years in an internal IT organization was nothing but healthy for me. Working as an IT professional with a broad scope helped broaden my view. A colleague of mine likes to joke that IT professionals tend to be little more than fancy-branded printer installers. He is awfully right in that conclusion. At least, my predecessor in the post sure made no effort to do more advanced things.

What he left me and my colleague with was an IT environment in its 15th year past its expiration date. Thankfully, we also got our own budget, and we both had very creative minds. In about two years, the machine park was upgraded from old XP machines to brand new Windows 7 and Windows 8 machines. Basically all employees at the plant got shiny new smartphones, extended wi-fi capabilities, failover redundancy for network components and a lot of upgrades to critical systems for production and BI.

If printer installations are the downside of the profession, there are several upsides to it as well. To a very large extent we got to work really close to the people and places where things happened. As a dev you tend to be kept at a safe distance from the customer, and any involvement you have with them is mostly channeled through the feared and appreciated “super-users”, the voices of all the common users of the system. The model is quite OK, most of the time, but the problem with super-users is that their minds are too similar to the devs’. Often technologically bright people, their feedback and testing usually cover the optimistic scenarios, i.e. how things are supposed to work, not how applications behave when you do the unexpected. In that sense, their testing capabilities are closer to those of the business analysts than to the QA people. Being among the “common” people with no special interest in technology is something all devs should do for some time. These are the folks just wanting to get their things done. They don’t care if it says 2000 or 2013 on the label, or if it is built on “SOA”, using “.NET”, “Java” or whatever. They just don’t want it to fuck up. System fuck-ups mean more work for them, and in a quarter-driven economy people don’t want more work. They already have more than enough of it.

As an IT organization we were a very tight team on our site, me and my colleague. You get a special bond when it is OK to call the other person at 3 a.m. when all production has halted due to a system error. We were also part of a larger unit, with IT pros spread across all four production sites, one of them not even in Sweden. The challenge with this is collaboration, knowledge sharing and meeting. When I first came to the department, communication consisted of scarce phone calls and grand IT meetings in real life once a month. When I left the organization three years later, new patterns had emerged. As a frequent user of apps like Skype and Lync, I certainly did not introduce the apps to my companions, but I pushed to make it OK to use them, and strived to make communication less formal. They were considered playthings and private apps, but at some point the mindset shifted and these new ways of communicating started growing. Not only in the IT department but in the organization as a whole. People started sharing screens, collaborating and communicating in much more informal ways. All of a sudden, people got a better view of what had to be done, on a much more frequent basis. In a sense, people started scrumming together.

It felt strange leaving the project, the company and the team, but I could do nothing else at that time. I felt far away from management in matters of vision and strategy, as well as in how far it is OK to push people.

Again, life took me on a journey that would give me a completely different perspective, both privately and professionally. My next assignment brought me straight into the fashion industry. I was to act as a consultant, helping bridge knowledge between a team on its way out and a whole new team picking up what the departing team was leaving behind. I was also to evaluate the leaving team’s choices of technology, architecture and development model. Before my employer sent me to the client, I remember being called into a meeting with a colleague and chief architect. He told me the offboarding team at our client’s was made up of inexperienced devs with little sense of architecture, or of coding at all for that matter. I prepared myself for a very assertive role and sharpened my ability to criticize.

Then I landed with the client and met Per & Patrik.

Both a few years older than myself, these two guys had single-handedly come up with a system of several thousand lines of code, written on the latest .NET Framework. The system was flooded with design patterns, state-of-the-art LINQ queries, lambdas, WCF and WPF components. The whole object graph was built with dependency injection techniques. As I entered, P&P were evaluating new cool stuff like PRISM and MEF.

Needless to say, Per and Patrik outmatched my own skills – by many degrees. I had some catching up to do. Luckily, six months of boring nights at the hotel aided greatly in this. However, it was not so much technical brilliance I picked up from them as a new way of thinking: the way of software craftsmanship.

Per & Patrik were warm-hearted fellows with unmatched love for their families. They were also very professional and proud of their profession as developers. Neither of them talked about fancy titles, calling themselves things like “senior”, “architect” or “lead”. They were programmers and developers, and oh my, they were good at it. They took pride in quality, happy users and getting things done.

They worked using a sort of scrum method, with time boxes and a release of a new version after each. This was the first really well-working scrum I got involved with. We had little informal meetings every morning, but apart from that we spent most of the time discussing ways to improve: the craftsmanship, the stability of the code, code coverage. And getting things done.

This experience has meant the most to me as a craftsman. It was here I picked up a belief I am betting a lot of money on today: no code should be written without a unit test.

My next job brought me straight into a new set of technologies, new colleagues and new project models. It was the best of times. It was the worst of times.

The company was developing tailored systems for the Swedish forest industry, and had recently signed a deal with one of the largest forest-owner associations in the country. It was a huge deal, both for the client and for us as the system provider.

It was the best of times. The colleagues were amazing and the technology we used fascinated me as I delved deeper into the world of SQL Server, C#, .NET, ASP.NET, SOA and BizTalk.

It was also the worst of times. The aforementioned project was a disaster. Thousands of hours’ worth of design time had produced nothing more than equally many thousands of pages’ worth of documentation and specifications. As we began building the system, the client continuously demanded changes, making much of the design null and void. Pressure from top-level management to finish the project grew, and the client became increasingly demanding and frustrated as the dev team struggled with a system bogged down by spaghetti code, severe performance problems and the like. We were supposed to follow a “successful model for implementing ERP systems” called Implex. In reality it was nothing more than a glorified waterfall model with specified milestones for delivery of specifications, system, testing and so on.

Sometimes, though, such circumstances can bring out the very best in people. The team working on the project came together as a very solid unit. The mentality was that everyone got involved in all kinds of activities. Developers were writing specifications, system designers were coding, everyone helped with testing. The system evolved as a fantastic team effort.

Also, our internal project manager in charge was a man called Börje. Börje has since passed away, a fact that saddens my heart every time I think about it. He is probably the one person who has affected my professional life the most, and I doubt anyone will ever make such an impact on me the way Börje did. At the time, Börje was just a few years from retirement, a fantastic career coming to an end. He had worked as a consultant manager for nearly 100 people in the glorious days before the implosion of the IT industry in the early 00s. He had successfully managed projects both smaller and larger than the current one. Börje had simply seen it all. As a manager and project lead he worked tirelessly – always the last person out of the office in the evening, often one of the first to arrive the next morning. Again, the pressure from both the client and our internal top management was excruciating, but Börje screened it away from the team in the best possible way, making space for us in our chaotic environment.

Börje had this really interesting way of staying on top of things. Every day he visited each team member’s office with his list of tasks, asking about just enough status and detail on every one of them to keep us all focused and going. He called it “management by walking around”.

I especially remember our many little trips back and forth to the client’s office. The hours spent on the road on many occasions created a special atmosphere for discussion and exchange of ideas. We had lengthy talks about the essence of team building, project models, how the IT industry had evolved over the years, and a lot more. I learned SO much from him.

I miss you dearly Börje Mellander.