Friday, July 22, 2016

Review: Effective Communication Skills

Effective Communication Skills by Dalton Kehoe
My rating: 5 of 5 stars

Impressive course, by all means.
1. The first part is more academic. It summarizes and presents a lot of scientific studies. If you are familiar with psychology and, in general, with the way our mind works, this part may be somewhat boring. For me it was a nice reminder of all the psycho-stuff.
2. Then there is a great deal about personal communication. About how we communicate, about how we see ourselves and the other person while talking. It contains a lot of good tips to apply in personal and family communication.
3. Finally, the last part of the course concentrates on workplace communication. There are quite a lot of valuable recommendations for managers and leaders, as well as for communicating with other team members working at the same organizational level as you.

Thank you, Dalton Kehoe.


Thursday, May 12, 2016

The Future of Continuous Integration

I am wondering whether I will look back at this post in five years with a smile or a frown. Foreseeing the future in IT is very difficult. Other industries change at a rate of about one significant change every 50 years. When was the last revolution in excavator technology? When was the last revolution in steel processing? When was the last revolution in road building? We are more or less using the same materials and techniques as 50 years ago. Yes, we can do all the things mentioned above faster, at higher quality, and at lower cost. But we mostly improved some really solid and tested processes.

Computers didn't even exist 50 years ago. Well... there were some around, but let's say they were a toy for scientists rather than machines of mass production. Still, they existed. The first concepts of software development were put in place. The first paradigms of software development were defined.

In the late 1950s Lisp was developed at MIT as the first functional programming language. It was the only programming paradigm that could be used. All computers, the few that existed, were programmed using functional programming.

Twenty years later structured programming started to gain traction, with support from IBM. Languages like B, C, and Pascal started to emerge. Let's consider this the first revolution in software development. We started with functional programming, and then we got structured programming, something totally different. It was groundbreaking, and it took about 20 years to emerge. While this seems a long time now, it was what? Less than half the 50-year cycle of the industrial revolutions mentioned above.

The fast pace of evolution in software continued exponentially. About ten years later, or even less, Smalltalk was made public for a wide audience in August 1981. Developed at Xerox PARC, it was the next big thing in computer science.

While some other paradigms came along in the following years, these three (functional, structured, and object-oriented) remained the only ones with wide adoption.

But what about hardware? How far did we come on hardware?

How many of you can remember the very moment when you interacted with a computer for the first time? Let your memory bring back that moment. Remember what you did, who you were with... A friend? Maybe your parents? Maybe a salesman trying to convince your parents to buy a computer? It doesn't matter. Remember that very moment. Remember that computer. Remember the screen. How many colors did it have? Was it a green-on-black text console, or a high-resolution CRT, or a FullHD widescreen? What about the keyboard? The mouse... if it was invented at that time. What about the smell of the place? What about the sound of the machine?

Was it magical? Was it stressful? Was it joyful?

I remember... It was about 30 years ago. My father took me to the local computer center, his workplace. Yes, he is a software developer, one of the first generations in my country (Romania). We played. It was a kind of Pong game. On a black background, two green lines lit up, one at each side.

It looked similar to this image, though this seems to have highly detailed graphics compared to the image in my memories. And it was running on something like this.

Well, it wasn't this particular computer. Not even an IBM. It was a copy of capitalist technology developed as a proud product of a communist regime. It was a Romanian computer, a Felix.
The Felix was a very small computer compared to its predecessors. It could easily fit into a single large room, maybe 30-40 square meters. And it even had a terminal where you could see your code. Why was this such a big revolution? It's just a screen and a keyboard, after all. Yes, but your code went directly onto magnetic tape, and then, in just a couple of hours, you could run your program. That is, if you made no typos.

Before the magnetic tape and console revolution, there were punch cards and printers. Programmers wrote their code on millimetric paper, usually in Fortran or other early languages.

Then someone else, at a punch card station, typed in all the code. Please note, the person transcribing your handwriting into computer language had little computer or software knowledge. It was a totally different job. Software developers used paper and pencil, not keyboard and mouse. They were not even allowed to approach the computer.
The result was a big stack of punch cards like this.

Then these cards were loaded into the mainframe, by a computer technician.

Overnight, the mainframe, the size of a whole floor and requiring several dedicated power connections directly from the high-voltage grid, processed all the information and printed the result on paper.
The next day, the programmer read the output and interpreted the result. If there was an error, a bug, a typo, the whole stack had to be retyped, because punch cards were sequential. If you were lucky, you could find a fix that affected only a small number of cards, a fix that required the exact same number of characters and worked with the exact same region of memory.

In other words, it took a day or more to integrate the written software with the rest of the pieces and compile something useful. Magnetic tape reduced that to a few hours. Hard disks and more powerful processors in the '90s reduced that further to tens of minutes.

I remember when I installed my first Linux operating system. I had an Intel Celeron 2 processor. It was Slackware Linux, and I had to compile its kernel at install time. It took the computer a few hours to finish. A whole operating system kernel. That was amazing. I could let it work in the evening and have it compiled by the morning. Of course I broke the whole process a few times, and it took me about 2 weeks to set it up. It seemed so fast back then.

I work at Syneto. Our software product is an operating system for enterprise storage devices. That means a kernel, a set of user space tools, several programming languages, and our management software running on top of all these. We not only have to integrate the pieces of the kernel to work together, but we have to integrate the C compiler, PHP, Python, a package manager, an installer, about two dozen CLI tools, about 100 system services, and all the management software into a single entity that works as a whole and is more than the sum of its parts.

We can go from zero to hero in about an hour. That means compiling everything from source code. From the kernel to Midnight Commander, from Python to PHP. We even compile the C compiler we use.

But most of the time we don't have to do this. This is an absolute overkill and waste of computing resources. We usually have most of the system already compiled, and we recompile only the bits and pieces we recently changed.

When a software developer changes the code, it is saved on a server. Another server periodically checks the source code. When it detects that something has changed, it recompiles that little piece of the application or module. Then it saves the result to another computer, which publishes the update. Then yet another computer does an update so that the developer can see the result.
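To make that loop concrete, here is a minimal sketch of such a polling build server, written in PHP for the sake of an example. This is not our actual pipeline: the paths, the commands, and the one-minute interval are made-up assumptions.

$repo = '/srv/ci/product';   // working copy the CI server watches (hypothetical path)
$publish = '/srv/updates';   // where finished builds are published (hypothetical path)
$lastBuilt = '';

while (true) {
    shell_exec("git -C $repo pull --quiet");
    $head = trim(shell_exec("git -C $repo rev-parse HEAD"));

    if ($head !== $lastBuilt) {
        // Rebuild only the pieces that changed, then publish the result.
        passthru("make -C $repo incremental", $status);
        if ($status === 0) {
            passthru("rsync -a $repo/build/ $publish/");
            $lastBuilt = $head;
        }
    }

    sleep(60); // check for new changes every minute
}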
What is amazing in this scheme is how little software development itself changed, and how much everything else around software developers has changed. We eliminated the technicians typing in the handwritten code... we are now allowed to use a keyboard. We eliminated the technician loading the punch cards into the server... we just send the code over the network. We eliminated the delivery guy going to the customer with a disc... we use the Internet. We eliminated the support guy installing the software... we do automatic updates.

All these tools, networks, servers, and computers eliminated a lot of jobs except one: the software developer. Will we become obsolete in the future? Maybe, but I wouldn't start looking for another career just yet. In fact we will need to write even more software. Nowadays everything uses software. Your car may very well have over 100 million lines of code in it. Software controls the world, and the number of programmers doubles every 5 years. We are so many, producing so much code, that reliance on automated and ever more complex systems will only grow.
Five years ago Continuous Delivery (or Continuous Deployment) was a myth, a dream. Fifteen years ago Continuous Integration was a joke! We were doing Waterfall. Management was controlling the process. Why would you integrate continuously? You do that only once, at the end of the development cycle!

Agile Software Development changed our industry considerably. It communicated in a way that business could understand. And most businesses embraced it, at least partially. What remained lagging behind were the tools and technical practices. And in many ways, they are still light years behind in maturity compared to organizational practices like Scrum, Lean, Sprints, etc.

TDD, refactoring, etc., are barely getting noticed, far from mainstream. And TDD is even older than Agile! Continuous Integration and Continuous Delivery systems are, however, getting noticed. Their big advantage over other software technologies is that business can relate to them. We, the programmers, can say: "Hey, you wanted us doing Scrum. You want us to deliver. You will need an automated system to do that. We need the tools to deliver the business value you require from us at the end of each iteration."

Technical practices are hard to quantify economically, at least immediately or tangibly. Yeah, yeah... We can argue about the quality of code, and legacy code, and technical debt. But these are just too abstract for most businesses to relate to in any sensible manner.

But CI and CD? Oh man! They are gold! How many companies deliver software over the web as web pages? How many deliver software to mobile phones? The smartphone boom virtually opened the road... the highway... for continuous delivery!

Trends for "Smartphone"
Trends for "Continuous delivery"
Trends for "Continuous deployment"

It is fascinating to observe how the smartphone and CD trends tipped in 2011. The smartphone business embraced these technologies almost instantaneously. However, CI was unaffected by the rise of smartphones.
Trends for "Continuous Integration"

So what tipped CI? There is no Google Trends data earlier than 2004. In my opinion the gradual adoption of Agile practices tipped CI.
Trends for "Agile software development"

The trends show the same growth. They go hand in hand.

Continuous deployment and delivery will soon overtake CI. They are getting mature and they will continue to grow. Will CI have to catch up with them? Probably.

Continuous integration is about taking the pieces of a larger piece of software, putting them together, and making sure nothing breaks. In a sense, CI wraps your technical practices in a business value. You need tests for the CI server to run. You might as well write them first. You can do TDD and the business will understand it. The same goes for other techniques.

Continuous deployment means that after your software is compiled, an update is made available on your servers. Then the client's operating system (e.g. Windows) will show a small pop-up saying there are updates.

Continuous delivery means that after the previous two processes are done, the solution is delivered directly to the client. One example would be the Gmail web page. Do you remember it sometimes saying that Gmail was updated and you should do a refresh? Or the applications on your mobile phone: they update automatically by default. One day you may have one version, the next day a new one, and so on, without any user intervention.

Agile is rising. It is starting to become mainstream. It is getting out of the early adopters category.

Follow the blue line in the Law of Diffusion graph above. Agile is in the early adopters stage. But it will soon rise into the majority sections. When that happens we will write even more software, faster and better. We will need more performant CI servers, tools, and architectures. There are hard times ahead of us.

So where to go with CI from now on?

Integration times went down dramatically in the past 30 years. From 3 days, to 3 hours, to 30 minutes, to 3 minutes. Five years ago I worked on a project that produced a 100MB ISO image. From source to update took about 30 minutes. Today we have a 700MB ISO, and it takes 3 minutes: seven times the output in a tenth of the time, roughly a 70x throughput improvement in only the past 5 years. I expect this trend to continue exponentially.

In the next five years build times will shrink. Smaller projects will achieve true continuity in integration. You will be able to see the changes you make to a project almost instantaneously. The whole cycle described above will take on the order of 3-15 seconds.

At the same time the complexity of projects will rise. We will write more and more complex software. We will compile more and more source code. We will need to find ways to integrate these complex systems. I expect a hard time for the CI tools. They will need to find a balance between high configurability and ease of use. They must be simple enough to be used by everyone, seamless, and require interaction only when something goes wrong.

What about hardware? Processing power is starting to hit its limits. Parallel processing is rising and seems to be the only way to go. We can't make processors much faster, but we can throw a bunch of them into a single server.

Another issue with hardware is how fast you can write all that data to the disks. Fortunately for us, SSDs are starting to take over from HDDs for everyday data storage. Archiving seems to be staying on rotating disks for the next 5 years, but we are hitting the limits of the physical material there as well. And yes... humanity's digital data grows at an alarming rate. In 2013, the digital universe was 4.4 zettabytes. That is 4.4 billion terabytes! By 2020 it is estimated to be 10 times more, 44 zettabytes. And each person on the planet will generate on average 1.5 MB of data every second. Let's say we are 7 billion; that is 10.5 billion MB of new data every second, 630 billion MB every minute, and 37,800 billion MB, or roughly 37.8 billion GB, every hour. That is about 0.9 zettabytes each day.

It is estimated that in 2020 alone we will produce another 40 zettabytes of data, effectively doubling the enormous quantities we have already produced. The trick with the growth of the digital universe is that it grows exponentially, not linearly. It is like an epidemic: it doubles at ever faster rates.

And all that data will have to be managed by software you and I write. Software that will have to be so good, so performant, so reliable, that all that data will be in perfect safety. And to produce software like that, we will need tools like CI and CD architectures capable of managing enormous quantities of source code.

What about AI? There have been some great strides in artificial intelligence lately. We went from basically nothing to a great Go player. But that is still far from real intelligence. However, the first signs of AI application in CI were prototyped recently. MIT released a prototype software analysis and repair AI in mid 2015. It actually found and fixed bugs in some pretty complex open source projects. So there is a chance that by 2020 we will get at least some smart code analysis AIs that will be able to find bugs in our software.

If you are curious about more on this topic, or simply want to share your view, I invite you to my keynote speech at DevTalks Bucharest, Romania, on June 9th 2016. As always, I will be open to discussing this and other IT, software, and hardware topics throughout the event. Just ping me on Twitter if you are around.
 DevTalks 2016 Bucharest Romania

Friday, April 29, 2016

Review: Steal the Show: From Speeches to Job Interviews to Deal-Closing Pitches, How to Guarantee a Standing Ovation for All the Performances in Your Life

Steal the Show: From Speeches to Job Interviews to Deal-Closing Pitches, How to Guarantee a Standing Ovation for All the Performances in Your Life by Michael Port
My rating: 5 of 5 stars

I have some speaking experience and I wanted to improve. I needed new ideas and some help with issues I had found in my talks. The first part of the book was somewhat boring for me, but the rest was amazing. It is a really good book, with ideas that apply in a lot of circumstances. From speaking on a stage to thousands of people to speaking to your wife in private, there will be something for you in this book.
I listened to the audio version of the book, but the written one would probably be a better choice, as there are a lot of things you will want to revisit from time to time, and searching audio books is just too difficult for me.


Wednesday, April 20, 2016

Your Career - Five Years in The Making

About two years ago I read a statement from Brian Tracy that seemed extremely bold at the time. He says that you can go from novice to worldwide recognition in five years.

Of course this won't happen magically. You have to work for it. You have to learn and invest your time and effort into it.

I started my professional career as a software developer in mid 2009. Before that, I was a systems and network administrator and did only occasional software development. By any standard I was a novice software developer. I knew the very basics. I had written a few irrelevant applications. I had always programmed alone. I had never worked in a team. I had never even bought and read a programming book. All I knew was what I had learned during my university studies and whatever tutorials I had read on the Internet.

It just happened that I got a software developer job at Syneto. They needed someone with strong networking skills. I was open to diving deeper into software development. I was the perfect match for their requirements at the time. I had no idea how much my life would change in the upcoming years.

Without going into too many details, I have to mention that Syneto went through a huge agile transformation in the two years after I arrived. We learned a lot, both as a company and as a team. Throughout this period I learned a great deal, read about ten programming books, and applied most of that knowledge to our storage project.

But what good is all that if you don't share your experience with others? We gradually got involved in the local agile community in my town, Timisoara. I delivered my first speech to the local community about two and a half years after I started my software development career.

Brian Tracy says you need two years to get local recognition, three-four years to get national recognition, five years to get global recognition.

By the time I had four years of experience in software development, I had held my first speech at a national software conference. In fact, the conference was international, but held in my country, Romania. I remember how proud I was to be speaking at a conference alongside legendary software developers like Michael Feathers.

The very next year, however, I made the huge leap to speak at the world's largest agile conference, Agile2015, in Washington DC, US. At the time I spoke in Washington DC, I had been at Syneto for 5 years, one month, and 3 days. It was only a 30-minute speech, but it was nonetheless at the highest level, at the greatest conference.

Today, I am preparing my second speech for an Agile Alliance conference. I will speak at Agile2016, in Atlanta, US. This time, however, it is a full 75-minute talk for a larger audience.

Check out my session and reserve a seat for Wednesday, July 27th, at 2 PM, in Atlanta, US.

Tuesday, February 3, 2015

2nd of 3 Books That Changed My Life: @ericevans0's Domain Driven Design

I was thinking lately that of all the books I've read related to my professional life and career, there are three that stand out. I cannot decide which one had the biggest impact, because each affected a different part of my life. So no one of them is better than the others. I will write three blog posts, one about each book. They will be presented in the chronological order I read them.


One of the most difficult books to read, and still one of the most enlightening, Domain Driven Design by Eric Evans is second on my list of three books that had a major impact on my professional life.

This is a book that takes software development to a totally different level. Seemingly it leaves most technicalities behind and views the whole software from a much higher level.

Imagine your source code as a balloon filled with air. It sits between two major actors of our industry: the software developers on one side, and the business people on the other. Or, if you take the people out of the picture, software production versus business domain.

In such a setting, Domain Driven Design pulls a part of the balloon toward the business people, toward the domain, while at the same time anchoring its other side in the software production department. It tries to fuse business with software by both pulling simple software concepts like modules, classes, dependencies, functionalities into business, as well as pulling business concepts to the source code.

As a software developer, I was more concerned with and intrigued by the introduction of business concepts into the source code. At Syneto we work on Storage OS, an operating system for storage devices. We are both the software developers and the domain experts. So we could not pull software concepts into our domain; we already knew all the programming-related concepts. But we could start working toward representing domain concepts in our code.

This had a major impact on the architecture and structure of our modules. We started by implementing the Repository design pattern learned from Domain Driven Design. This opened up some interesting possibilities. It forced us to have each of our modules represent a domain concept. As we mostly work in PHP, our modules are simple directories. Each module represents a domain concept and has a repository. The repository can provide and save objects. It's not a generic ORM though; it is more like a domain-specific query language. And what kind of objects should such a repository provide? Domain objects. These objects represent a more specific part of the domain.

For example, we can have a Network module. In this module we can have several repositories, like NetworkAddresses or HardwareLinks. A NetworkAddresses repository can provide NetworkAddress objects. A NetworkAddress object represents a unique combination of IPv4 address, IPv6 address, subnet mask, and name. The HardwareLinks repository may provide Link objects. These represent the state of a network link: type (ethernet or fibre channel), cable plugged or unplugged, link speed, frame sizes, etc. These are value objects, representing state. But we also have entities representing functionality, like applying a NetworkAddress to a specific HardwareLink. This results in a setting on the operating system. This setting assigns the IP address and subnet mask to a network link on a physical network card.
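To make this more tangible, here is a minimal sketch of what such a module could look like. The class and method names follow the description above, but the code itself is illustrative, not our actual implementation.

// Illustrative sketch only, not Syneto's actual code.
// A value object: a unique combination of IPv4, IPv6, netmask, and name.
class NetworkAddress {

    private $name;
    private $ipv4;
    private $ipv6;
    private $netmask;

    function __construct($name, $ipv4, $ipv6, $netmask) {
        $this->name = $name;
        $this->ipv4 = $ipv4;
        $this->ipv6 = $ipv6;
        $this->netmask = $netmask;
    }

    function name() {
        return $this->name;
    }
}

// A repository speaking the domain language, not a generic ORM.
class NetworkAddresses {

    private $addresses = array();

    function save(NetworkAddress $address) {
        $this->addresses[$address->name()] = $address;
    }

    function findByName($name) {
        return isset($this->addresses[$name]) ? $this->addresses[$name] : null;
    }
}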

I will stop now and let you read and discover the mysteries of Domain Driven Design.


Read also: 1st of 3 Books That Changed My Life: @unclebobmartin's Agile Principles, Patterns, and Practices in C#

Tuesday, January 20, 2015

Galaxy Note 4 - After the First Eight Weeks

I got my Galaxy Note 4 delivered about eight weeks ago, on December 2nd, 2014, and I waited some time for the placebo effect to subside before writing about my experience with it. My previous phone was a Galaxy Note 2, so most of my comparisons and impressions will be relative to that device.

The Exterior

I love how the Note 4 looks. I ordered the bronze gold version. The color you perceive is actually very much influenced by the light conditions. You will see it brown under a strong white light bulb. You will see it gold, and quite yellowish, under a 60-100W light bulb. You will see it pinkish-magenta under natural light with a clear sky but in the shadows. And it actually looks bronze-gold under direct sunlight.

The Note 2's rounded form never really attracted me. I bought it for the big screen, not for the rounded corners. The Note 2 was inspired by natural objects, like a stone, an egg, or a leaf. The Note 4 is a totally different story. The much less rounded corners and the 45-degree angled flat edges give the Note 4 a futuristic, technological look. It looks like a modern electronic device, not something resembling nature.

But the sharp angles give the Note 4 a big handicap compared to the Note 2. It is much harder to fit it into your pocket. The Note 2, with its rounded form, slid into any pocket with ease. I used to keep it in the front pocket of my blue jeans, and while I was sitting the Note 2 felt comfortable while pressing against my legs. The new Note 4 is much harder to push into the pocket, and the right angles produce discomfort after some time. While having it in my pocket for 10-15 minutes is OK, I wouldn't think of keeping it there for much longer.

When you hold the two phones in your hand, they provide two totally different experiences. The Note 2's sticky, glossy cover and rounded edges encourage you to keep it laid in your palm. As it won't slide out of your hand, you can do this comfortably even at angles larger than 45 degrees. The Note 4 has a very different feel. Its sharp edges encourage you to grab it by the sides, and it will stay in your grip with ease and little effort. The faux leather back slips more easily on the skin compared to the Note 2, so you won't let this phone just sit in your palm. Which one is better? I don't know. They are two different experiences, and I like how each one feels in its own way.

The Hardware

Regardless of the version you choose, the Note 4 is a beast. The Qualcomm CPU is slower, but it has faster 4G. The Exynos CPU is faster, but it has slower 4G. As I mostly use the Internet over Wi-Fi, I chose the Exynos variant and I am completely amazed. Any game runs very smoothly and loads blazing fast. I didn't play Asphalt 8 on the Note 2, but I play with a friend of mine who has an LG G3. On the Note 4 the game loads about 2 times faster and runs somewhat smoother. Both phones can run the game with amazing graphics at the most demanding settings. I am very pleased with the speed of the CPU & GPU.

Now let's talk about the screen. Some may think that it's too big, but I've never met a person who bought a large screen phone (phablet) and then reverted to a smaller screen on his/her next phone. The colors of the Super AMOLED 2K resolution display are very good and much more natural than the Note 2's screen. But beauty comes at a price. The screen is the largest battery consumer on the Note 4.

The new pen... well, it's shorter than the Note 2's and it feels a little bit awkward to write with. I am sure, however, that I will get used to it quickly. As a small design element, I liked better how the pen hid inside the Note 2. On the Note 4's case, the pen's tail is a very visible element.

The 32GB built-in storage and the 128GB SD card slot should be enough for everyone; I can't complain about the space. On the Note 2 I felt the need for some extra space. I had the 16GB variant, and you know 5-6 GB are always reserved by Android. I never had more than 8-9 GB of usable space for programs and multimedia.

Battery Consumption

I didn't do any particular test, but as with any new phone I used it quite a lot at the beginning. I installed a lot of applications, personalized it, played visually amazing games, read news, mails, chatted with friends and of course I called other people.

The battery did not discharge in less than 24 hours, regardless of how I used the phone. As I am getting used to the phone, and I have some automation in place that conserves power by turning off Wi-Fi and 4G overnight, I am getting more and more hours out of it. My next charge should come around 35-40 hours after the previous one, with the following daily usage: 60 minutes reading stuff, 30-45 minutes playing Asphalt 8, 70-85 minutes of 4G and GPS navigation, 16 hours of Wi-Fi, 10 minutes of talking, a couple of SMS, and a few Hangouts messages. So far I am very pleased with the battery life, and I am surprised that 4G doesn't really matter that much, but I did no extensive testing.


The Software

Well, I will let you discover the details. I'll just say that I love "S Finder", air commands, the ScrapBook app, and selective screenshots that can be easily stacked and then combined in various apps.

Handwriting recognition got much better. It almost always guesses what I write; very little correction is needed.

The camera and its software are amazing. The downloadable camera modes are a nice touch by Samsung; I love them! Colors are very realistic, the image stabilization works pretty well, and pictures in low light are so much better than on the Note 2 that I can't even compare them.

My Final Verdict

I love it. Very good phone. A little pricey though.

Tuesday, October 7, 2014

Programmer's Diary: Setting Up a PPTP from CLI on Linux

From time to time I have to set up a PPTP connection to my office, and the KDE GUI fails. So here is a reminder, to myself and anyone curious, about how to connect to a PPTP VPN.

# pptpsetup --create syneto --server --username csaba --password ******* --encrypt
# pon syneto
# ip route add dev ppp0
# ip route add dev ppp0

Add the DNS from the VPN network and a search domain.
# mcedit /etc/resolv.conf

# cat /etc/resolv.conf
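For illustration, the resulting file could look something like this. The addresses and the search domain are made-up example values, not the real ones.

# /etc/resolv.conf - example values only
search office.example.com
nameserver 192.0.2.53
nameserver 192.0.2.54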

Have fun :)

Thursday, October 2, 2014

1st of 3 Books That Changed My Life: @unclebobmartin's Agile Principles, Patterns, and Practices in C#

I was thinking lately that of all the books I've read related to my professional life and career, there are three that stand out. I cannot decide which one had the biggest impact, because each affected a different part of my life. So no one of them is better than the others. I will write three blog posts, one about each book. They will be presented in the chronological order I read them.


I started reading Robert C. Martin's Agile Principles, Patterns, and Practices in C# about one year after I started working for Syneto. At that point, I had more than five years of programming experience and I was quite familiar with many concepts.

However, for almost all my career I had worked alone. There was no chance for me to interact with other programmers, to find out about new and cool stuff directly from others. I had heard about Agile and Extreme Programming, but when you work alone you see things differently.

I could not find any satisfying online documentation back then, and there was nobody to recommend the right books to read.

This lone programmer figure had to undergo a major rework after I got to Syneto. Suddenly I was surrounded by programmers with whom I had to collaborate. Fortunately for me, I had worked with people for a long time, so the social side of the integration went well. And with social development came teachings and recommendations and a huge flood of information exchange. One of the books recommended both by colleagues and managers was Robert C. Martin's Agile Principles, Patterns, and Practices in C#.

This was not the first book I read at Syneto. Not even the second one. It was just "the next book to read" on a long list, after a year or so of intense personal and professional development. All the previous books were important and had a great impact, but none of them changed the way I write code more than this one.

Because Robert C. Martin's Agile Principles, Patterns, and Practices in C# had a profound impact on how I write code, I nominate this book one of the three life changers.

Before this book I thought about the structure of my code in a naive way. I had my personal experience, I had heard about and knew a couple of design patterns, and I even knew the basics of code structure and form.

So how did this book change the code that I commit to the version control system every day?

  1. My methods are less than 4 lines long. On average they are 2 lines long. Some methods are still huge and may have 10-15 lines of code. But they are so rare that they don't affect the statistics very much.
  2. My architecture is decoupled.
  3. My dependencies are inverted.
  4. My classes have high cohesion. I once actually managed to create a class, together with +Vadim Comanescu, that we considered perfect: 6 public methods and 6 private variables. All methods were using all private variables.
  5. I made naming things right one of my top priorities. Rarely do I write a method name that is not changed at least 3 times before the code is committed.
  6. I use design patterns in a much better informed fashion. The book helped me understand them better, and especially to understand possible use cases and scenarios.
  7. ... I could continue with other reasons, but I will stop now. I think these alone are enough. No need to write up another ten or so.
That is why I consider this book "The Programmer's Bible". Every software developer, regardless of the programming language or paradigm he or she uses, must read this book. It is quite long, about 600 pages, but it is not a difficult read. Robert C. Martin has a great talent for keeping you hooked. I remember that some design patterns were presented as such exciting stories that I just could not stop reading.

So, what are you waiting for? Find a copy of this extraordinary book and read it.

Sunday, September 28, 2014

Programmer's Diary: Constructing your Tests Line by Line

It is a different thought process for everyone, but when I write the tests that will represent the functionality I am about to implement, I always start with the Exercise or Act part.

A unit test is usually composed of three or four parts, hence the rule of the 4As:

1. Setup or Arrange
2. Exercise or Act
3. Verify or Assert
4. Tear down or Annihilate (this may be missing, automatic garbage collection, anyone?)

I have observed that people who know about these parts have a natural tendency to start writing a test in that exact order. They start by asking themselves "What do I need?" and only then "What do I do?". This frequently leads to dilemmas that cannot be answered, and they just give up writing the test and start writing the production code.

In my opinion this type of thinking has a fundamental flaw. You cannot know what you need before you first figure out what you want to do. That is why I always start with 2. Exercise or Act. And my second step is always 3. Verify or Assert. This way I can put down the basis of the test by clearly defining what I want to do and what results I expect.

I build the 1. Setup or Arrange part as an iterative process, by adding all the required dependencies for the already defined lines. Finally I do 4. Tear down or Annihilate, to undo the setup if needed.

1. Write a new test function and name it by the behavior you want to test.

function testItCanAddTwoNumbers() {
}


2. Act! Do the behavior you just defined in the test's name.

function testItCanAddTwoNumbers() {
    $actualSum = $calculator->add($n1, $n2);
}

3. Assert.

function testItCanAddTwoNumbers() {
    $actualSum = $calculator->add($n1, $n2);
    $this->assertEquals($expectedSum, $actualSum);
}

4. Arrange, or prepare all the missing parts.

function testItCanAddTwoNumbers() {
    $calculator = new Calculator();
    $n1 = 1;
    $n2 = 2;
    $expectedSum = 3;

    $actualSum = $calculator->add($n1, $n2);
    $this->assertEquals($expectedSum, $actualSum);
}

5. Annihilate, or destroy persistent information. Nothing to be done for this part here.
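Assembled inside a PHPUnit test case (assuming a hypothetical Calculator class), the finished test reads:

class CalculatorTest extends PHPUnit_Framework_TestCase {

    function testItCanAddTwoNumbers() {
        // 1. Arrange
        $calculator = new Calculator();
        $n1 = 1;
        $n2 = 2;
        $expectedSum = 3;

        // 2. Act
        $actualSum = $calculator->add($n1, $n2);

        // 3. Assert
        $this->assertEquals($expectedSum, $actualSum);

        // 4. Annihilate: nothing to tear down here
    }
}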

That's it. Have fun writing tests instead of hitting a brick wall with your head!

Saturday, August 30, 2014

Programmer's Diary: Finding Your Ways

I usually tend to give any advice with a pinch of salt. In one of my tutorials about SOLID I wrote the following phrase: "As with any other principle, try not to think about everything from before."

This led to some dilemmas for a few of my readers, especially because it was in the Interface Segregation Principle article. I answered the readers' questions, but I think the ideas merit a blog post. So, here it is. Read on.

Any exaggeration is bad. If you think about everything upfront, it is bad. If you think about nothing upfront, it is also bad. Finding the right balance between what to do now and what to postpone or not do at all is essential for every project. There is no universal theorem or solution. There are, however, some recommendations that try to keep us on the right track.

In Agile software development, you will mostly meet two concepts. Each of them pulls you back from one of the two extremes mentioned above.

1) Postpone everything to the last responsible moment. If you apply this, you may ask yourself every time you create an interface: should I create the interface? Will there be more than one implementation? If yes, what will that implementation be, and when? Is it more expensive for me to delay the release by 2 hours and implement the interfaces now, or is it more expensive to not write any interface and introduce them in the next release, when I know I will need them? How sure am I that I will need the interface in the next release? Can the plan be changed, outside of my control, so that I end up with code that will never be relevant?

2) Program with change in mind. If not from the first release, then from the second. If you needed to change a specific piece of code, there is quite a big chance you will need to change it again. On your first change, make it so that your third, fourth, and subsequent changes will be easy. If you see some code, once written and never modified, and you have no reason to change it, don't.

Basically, that is it. Now you may ask yourself how to deal with these problems. You have three possible ways to go:

1) Take the postponing extreme. Postpone everything until you feel it starts to hurt. Then gradually try to think a little bit ahead and don't postpone things quite as much. This is how we at Syneto started.

2) Take the plan-for-everything extreme, and evolve from there. This is actually the route many people take when coming from Waterfall. Gradually try to identify parts that take up a long time in planning but prove to be marginally important. Continue doing so until you feel pleased with your process and you don't feel that what you do will never be used or useful.

3) Take the middle road. This may sound attractively optimal, but it is not. I don't think any project sits exactly in the middle between the extremes. You can take the middle road and continually think about both extremes. With time you will find toward which end your project requires more attention.

Sunday, July 20, 2014

Watch, Learn, Do, Decide

I have this concept whenever I need to decide if something new is good or not for me.

First I watch or read about the idea. Then I study more about it. Then I do it for a relatively long time. Then I decide whether it is good for me or not: whether I should drop it all or can adopt parts of it in my life.

This applies exceptionally well to new programming techniques.

At work, at Syneto, we usually do things for about 6 months before we decide. But those are big things. They affect a bunch of people.

In my personal life I scale down both the things I discover and the time for doing. Still, I always make sure I don't decide too early.

Recently I was invited to a new developers' forum in my country. And after just one day, I am amazed how many people jump over the learn and do parts. They only watch and decide.

I believe the only way to decide upon a thing is by past experience. But you need to build that experience yourself. You can't avoid it, at least not for a long time.

Sunday, May 18, 2014

Belgrade CityBreak: An Unexpected Journey

My wife and I had an unplanned opportunity to visit Belgrade for the first time. It went pretty well.

We were asked to drive two of my colleagues to the Belgrade airport, from where they took a plane to Paris. This trip allowed us to stop in Belgrade and visit the city. We had no plans and no knowledge about the city. I just set "Belgrade City Center" in the GPS and let it drive us... somewhere.

First of all, parking your car in Belgrade is extremely difficult. We almost gave up after 30 minutes of randomly choosing streets in the central area and trying to find a spot to stop. Finally, we managed to park about 2.5 km away from the point marked as the city center on the GPS. Well, a 20-minute walk should not be that much. But we were so hungry, and finding a restaurant was a bigger challenge than expected. We did not know the city, but based on the look of the streets and shops, we were somewhere close to the center. There were even quite a lot of terraces, but they served only coffee and drinks. Where were the restaurants?

After trying several alleys that seemed promising and having no luck at all finding a restaurant, we went on along the main street and finally ended up in the pedestrian area. At least finding a restaurant there was not a challenge any more. We ate at a random restaurant called Opera. They had good food and decent prices. One starter, two main courses, some mineral water, and two coffees = 40 Euro.

After we ate, with our bellies full, we decided that it was a really good time to just walk and admire the city and whatever surprises it might hide. The weather was also good company: about 25 degrees, mostly sunny. Luckily the restaurant had free WiFi, so we had a chance to look up the surrounding attractions on TripAdvisor. Choosing our next stop was simple. The old City Fortress was just a few minutes away.

What we didn't expect was for it to be so well preserved, free to visit, and really impressive. It is bigger than you may think at first sight, and spending an hour or so just walking around the old streets and walls is not even enough. There is also a great public park surrounding the whole fortress. You can relax on a bench, walk around a well-maintained garden, do some sports, or just stop for a coffee on the Danube's bank.

An expo with First and Second World War military equipment was an amazing plus for this visit. So it's time to wrap up some pros and cons.


Pros:
  • Mixed architecture - there were streets on which you could recognize 4-5 types of architecture from different eras: from a princess's house, through a peasant's house and a communist office building, to Victorian architecture. Everything you could imagine on a single street. There were also places with fluent, uniform, pleasant architecture.
  • Food was good - even though we have chosen a restaurant at random and we ordered Serbian specialties we never ate before, we liked the food.
  • People are friendly - we found the local people friendly, quiet, and helpful.

Cons:
  • Difficult to find a restaurant - unless you are in the very city center, in the pedestrian area, even a McDonald's or other fast food is hard to find. You can get coffee and drinks, but no food.
  • Difficult to find a mini-market - on the whole 2-3 km walk from the car to the city center and back, we found a single mini-market to buy some mineral water and cigarettes. Yes, there are kiosks here and there, but paying with your credit card is not an option there.
  • Traffic is quite intense - even though it was Sunday, there was quite heavy traffic in the city. Where did all those people have to go by car on a Sunday? I can't understand it...

That's it. Thanks for reading.

Friday, May 2, 2014

Agile by Instinct

There has been a question on my mind for some time now. An idea, a thing that just won't leave me alone.

What do you do after you tried all agile practices?

I had the opportunity to work for a company that went through a great deal of change by giving up old-style waterfall-oriented management and adopting agile. But what does adopting agile actually mean?

Like any company and team, we started by learning new techniques and practices. We started to plan our work on a board, and we did a group-reading marathon of Gerard Meszaros' xUnit Test Patterns book. This was about 4-5 years ago, and it was enough to raise our interest in all these new things. We went on and adopted TDD, and we still use it on a daily basis. We redesigned our architecture so that our business logic is isolated from the rest of the system, as Robert C. Martin recommends in his clean architecture concepts.

We implemented a continuous integration and deployment system for our project, we covered most of our code with tests, and we even optimized the whole deployment process to the extent that it takes about four and a half minutes to run all the 6000+ assertions in our unit tests and all the MVC framework's controller, helper, and model tests (these are just a few, but still), compile and encode everything, create packages, and publish them on an update server. I think we have a process that is quite optimized. Even though there may be small changes to make, there will be no more significant gains.

And our everyday software development process? Well, after doing Scrum for a while we tried Lean with Kanban. From each of them we kept the parts that help our process the most. There is not really any other formalized process we could try to fit into our management structure.

Continuous learning and deliberate discovery are another two things we do frequently. We, as professionals, try to make ourselves better each day, every day. We take courses, we practice at home, we attend conferences, we organize events, and so on.

"It sounds like a success story" as Dan North remarked it when I was talking with him about this topic. But what do we do next? What is the next thing we can try to make our process better, to go faster.

An interesting question Dan North asked me, and I was quite surprised by it, was "What makes you think you can go faster or better? Maybe you reached your maximum speed." (approximate quote). I couldn't answer him then. In retrospect, that is because I have no rational reason to sustain my desire to go faster and better. But my instincts tell me we can do better. My professionalism tells me I can learn more and make better decisions. I am asking myself instead "Why should we ever stop getting faster and better?" Of course there is no magic answer. If there were, it would be a formalized practice or technique, and this blog post would not exist.

For the time being I feel we are far from perfect. In the past year or so we have tried to orient our attention more toward our clients. We tried, successfully, to listen to other departments. Now we are on our path to creating a better synergy between dev, sales, operations, and marketing. And this is why Dan North's suggestion surprised me the most. He suggested the exact same thing.

So, after you go through all the practices and techniques of agile development and you make them work for you, you must start being truly agile.

Being agile is not about adopting rules and practices. Being agile is not even about learning and devising your best way to work based on those processes.

Being agile is to learn, as a team, as a company, to follow your instinct in order to value individuals and interactions, to create working software, to listen to your customers and to respond to their needs as quickly as possible.

Agile is about us making efforts so that others don't have to.

Monday, April 28, 2014

#CraftConf Budapest 2014. A Big Wow!

At the end of April 2014 I went to a conference: CraftConf Budapest. We had no idea how many attendees there would be or how big the event would be, but one thing was sure: there would be a speaker line-up like I had never seen at any conference in Europe before. There were so many famous people invited to speak that the event became a must-go, both for me and for my colleagues.

We will write a more extensive blog post on Syneto's blog, so here I will present only my personal impressions.

First impression: This is a huge event!
I had never attended such a big conference. There were more than 900 attendees, and the main room had 5 screens; the big one, in the centre of the image above, was 10 meters or so in diagonal. They also managed to secure some very wealthy sponsors who kept our bellies full and kept our mouths from drying out. The other 2 rooms were smaller, but still impressive.

Second impression: There is at most 5% new information in a talk.
For whatever reason I had huge expectations of this conference. However, I had to realize that a talk cannot contain more than 5% new and useful information. At least not for me and my colleagues. This was a hard thing to accept, but it led to the next impression.

Third impression: The value of a conference is in the chance of speaking with famous people.
Yes. You need some guts, but if you want real value for the money you paid for the conference, you must go and talk with those important people. With some I had questions in order to obtain new information, with others I just wanted to confirm some of my own ideas and perceptions, and to others I actually managed to provide constructive feedback.

All in all, I talked with more famous people in 2 days than in my whole life altogether. So thank you Bruce Eckel, Dan North, Eric Evans, Gerard Meszaros, Theo Schlossnagle, John Hughes, and Simon Brown for your time, and thanks to every other speaker for their great talks.

Monday, March 24, 2014

I Don't Believe in Genetically Born Leaders

I hear so many times that some are made to lead, while others are made to follow. And while there may be some truth in that statement, I don't believe someone can be born a leader. I believe in discipline. I believe in hard work. I believe in the fulfillment of dreams. I believe any of us can lead or be led. I believe it is ultimately our choice.

But how is that possible? Don't we have different personalities? Don't we have different professional objectives? Don't we have different dreams? Aren't we born into, and don't we live in, different societies? Sure, we have, we are, we do. Then how could any of us become a leader or a follower? Well, society, family, and friends have a great impact, but at the end of the day it's up to you what you choose to do. Some choose to follow and be happy. Others choose to lead. Others try to find the balance between the two.

I believe when there is someone to follow, you should do so. However, when there are some to lead, you should do that too. You can be a leader for some and follow others. This is the only natural situation you can be in. There will always be things to learn from those smarter and wiser than you, and there will almost always be others willing to learn from you. If you only follow, you will never feel the appreciation and amazement of young minds discovering your secrets. If you only lead, you will burn out very quickly. Your students and followers, if balanced between the two characters, will simply become smarter and wiser than you and will become leaders instead of you.

That's why the most depressed people I have ever seen were followers without the will to lead, or fallen leaders without hope of rising again.

Wednesday, February 19, 2014

Programmer's Diary: Transforming PHP Objects to Strings

In my upcoming programming course for +Nettuts+ I will implement a persistence layer for the application developed throughout the course. For the sake of simplicity I decided to make it file-based persistence and keep the information in plain text. This was a good occasion to use a nice PHP trick to convert simple objects into plain text.

Our objects are books, and besides the fact that there is an abstract book class, there are a lot of specific implementations for different kinds of books, like novels. Each is a little different from the others, so saving all the books in the same text format would have been impossible.

PHP offers a magic method called __toString(). Implementing this method on any object will allow you to use that object in a string context. Let's see a basic example.

class SystemInformation {

 private $cpu;
 private $ram;

 function __construct($cpu, $ram) {
  $this->cpu = $cpu;
  $this->ram = $ram;
 }

 function __toString() {
  return "CPU: " . $this->cpu . "%" .
        "\nMemory: " . $this->ram . "MB";
 }
}
If we create such an object and use it in a string context, like in echo, it will be automatically converted to a string, using whatever we return from __toString().

$sysInfo = new SystemInformation(40,1024);
echo $sysInfo;

This will output:

CPU: 40%
Memory: 1024MB

You can use it in some other contexts as well; for example, this test will pass just fine:

$this->assertTrue(strpos($sysInfo, 'CPU') !== false);

And when PHP is not smart enough to figure out that you want the object as a string, you can always call __toString() on it directly.

$this->assertRegExp('/CPU/', $sysInfo->__toString());
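Applied to the book application mentioned at the beginning, each concrete book class can decide its own text format. Here is a small sketch; the class names are hypothetical and may differ from the final course code.

// Hypothetical sketch; the classes in the course may differ.
abstract class Book {

 protected $title;
 protected $author;

 function __construct($title, $author) {
  $this->title = $title;
  $this->author = $author;
 }

 // Each concrete book type decides its own plain text format.
 abstract function __toString();
}

class Novel extends Book {

 function __toString() {
  return "Novel: " . $this->title . " by " . $this->author;
 }
}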

For the complete example with the whole application I mentioned at the beginning, keep an eye on the +Nettuts+ premium courses. Have a nice day of programming.

Wednesday, February 5, 2014

The advantages of working for 2 companies at the same time

Many companies forbid their employees from having a second workplace, or from freelancing in the same professional domain as their main job. Other corporations have an approval procedure, and each employee must declare any other job he or she wants to take. If the company decides that the job may conflict with its interests, it may forbid the employee from accepting it.
What companies rarely consider are the reciprocal benefits. I have had two jobs almost all my career. Right now I work for Syneto, and in my free time I write for NetTuts. This is great for everyone. I can write great articles based on my experience at Syneto. Syneto benefits from me becoming a better programmer with each article or course I make. Explaining my ideas greatly improves my knowledge of that specific domain, because I need to dive into its details.
So, I learn more and better, Syneto gets better code and NetTuts better articles.

Everyone wins.

Sunday, February 2, 2014

Programmer's Diary: Writing a Series for @NetTuts

If you follow me on Twitter you probably know I am a regular technical writer for +Nettuts+. I write various tutorials and articles on programming topics. However, I had never written a series of tutorials that are connected in one way or another.

That changed with the series on the SOLID principles. I had to write four articles covering five principles and I found out there are quite a few challenges when writing a series of articles.

Challenge #1 - The first article must be good. Much better than any of my stand-alone articles, because it must convince any reader, new or regular, that the upcoming articles in the series deserve their attention. The whole series is at stake in the first article. If it fails, there is a chance the rest will never be read, no matter how well written they may be.

Challenge #2 - Each article must find a way to refer to, to connect with, the previous articles, or at least with some of them, so that the readers have a feeling of continuity. In each of my SOLID articles I referred to at least one, but preferably two, other SOLID principles. Now, this is again tricky. Because the articles are published and read in sequence, from S to D, even though O may relate to I or D, in the O article itself I can refer only to S, because the reader may not yet know about the L, I, and D principles. If I refer to any of them I risk one of two things: confusing the reader and losing him/her, or making the reader curious enough to read about the three remaining principles from other sources and skip my upcoming three articles.

Challenge #3 - Each article must be self-contained, in the sense that a new reader must be able to comprehend it without reading any of the preceding or upcoming articles in the series.

Challenge #4 - Each article must offer something different so that the readers don't get bored. If one article explained the concepts mostly with text, in an anecdotal manner, the next one must use a different approach: maybe more schemas, more source code, more quotes of rules and definitions, or more funny statements. It doesn't really matter what, but it must be different, unique in a way that disrupts the monotony of the series while still delivering the valuable information it promised as its topic.

Challenge #5 - The last article must contain a conclusion to the whole series. It must be written in a way that not only conveys the last topic in the series, but also connects all the dots and provides a high-level view over all the topics presented throughout the tutorials. It also has to put everything in perspective, under a different light, in a different - bigger - universe, where the whole series is just a small piece of the puzzle.

That's why I concluded my series about the SOLID principles with a reference to The Magical Number Seven, Plus or Minus Two, and that is why there are five challenges in this blog post.

Sunday, January 19, 2014

Programmer's Diary: Mistakes Will Come Bite You in The Butt

Last week I was doing a quick refactoring of our virtualization module at work. We are about to introduce some new concepts into it, and a reorganization of the directory structure was due.

However, this being a quite mature project, it was started before the dawn of namespaces in PHP, and the KVM module was written in that old fashion. As we strive for perfection whenever we can, moving the module to namespaces while changing the directory structure was the obvious choice.

Introducing namespaces into a module of about 20 or so classes was not a big deal. Tests covered most of it, find and replace worked like a charm, and the PHP analyzer in PHPStorm highlighted the few spots missed by the former two.

In less than a full work day the whole module, at the business logic level, was up and running with namespaces and the new directory structure. Long live TDD and good test coverage; it would have taken several days or maybe weeks to change all that without tests, but that's another story, for another time.

Then my colleague +Vadim Comanescu offered to help with the update of the web interface. As our business logic is totally separated from our user interface, the changes should have been quite localized. And they were, for most of it. The change went on smoothly, except for one little mistake - or laziness, for that matter - we had made a few months earlier.

A rogue method in a view helper implemented something like "isCurrentPathForVM($path)". At first sight it looked acceptable. It accessed the virtual machine inventory class Kvm_VmInventory, retrieved all the virtual machines with its "findAll()" method, and did a quick foreach() over them, asking each virtual machine whether it belongs to the provided path.

Helper -> Inventory -> Virtual Machine. A simple dependency chain involving only 3 classes. Shouldn't be a problem, right?

Then it struck us! Why did modifying the public interface of the Inventory class affect a view helper? Why was the internal class structure of the KVM module leaking into the UI? Why must a helper attached to a view know the working details of a class buried deep inside the module? Didn't we already have a Facade written for that?

So many questions had arisen in an instant. Clearly, something was wrong.

A view helper should use only information available to the view, and it should only perform operations related to presentation. Yes, it should contain logic, but only presentation logic. Was finding out whether the current path belongs to a virtual machine presentation logic? It turned out it was not.

To start answering our many questions, we decided the helper must not know about the inner classes of the KVM module. It still needs to ask the module for the information it requires: does the current path belong to a virtual machine? If yes, the helper will instruct the view to draw a little computer instead of the folder icon to the left of the path.

To do so, the helper needs access to the facade. But should it create an instance of the facade? No, it shouldn't. There is already one available, created by the Kvm model. But accessing a model from a view helper is again tricky and wrong. The helper needs to take its parameters from the view.

The view's parameters are defined by the controller. That's great, we have the missing piece in the chain.

Helper -> View -> Controller -> Model -> Facade -> Inventory -> Virtual Machine. Here is our new dependency chain. Much longer, but also far more decoupled and correct. The model provides the facade to the controller, which passes it on to the view, subsequently letting the helper ask the question: is the current path for a virtual machine? This made us realize a distant code duplication was also present: the whole logic of the original method was already implemented in the facade.
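
To make the new chain a bit more concrete, here is a minimal sketch of the helper side. All the names below are invented for illustration (as per the disclaimer at the end of this post, the real ones differ); the point is only that the helper talks to the facade exposed on the view, never to the module's inner classes.

class VmPathIconHelper {
 private $view;

 public function __construct($view) {
  // The view received the facade from the controller,
  // which in turn got it from the model.
  $this->view = $view;
 }

 public function iconFor($path) {
  // Presentation logic only: pick an icon based on the module's answer.
  return $this->view->kvmFacade->isPathForVm($path)
   ? 'vm-computer.png'
   : 'folder.png';
 }
}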

We were pleased to also remove a piece of code duplication that had lingered hidden in the dependency chain.

Finally, we moved the logic from the facade to the Inventory. Facades should have no logic; they should only be a common entry point to the module, providing an easy-to-understand, specialized public interface to any client.
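
Again as a hypothetical sketch with invented names, the final shape was something like this: the facade merely forwards the question, and the inventory owns the logic in a single place.

class KvmFacade {
 private $inventory;

 public function __construct(VmInventory $inventory) {
  $this->inventory = $inventory;
 }

 // No logic here: just a stable, public entry point into the module.
 public function isPathForVm($path) {
  return $this->inventory->isPathForVm($path);
 }
}

class VmInventory {
 private $vms;

 public function __construct(array $vms) {
  $this->vms = $vms;
 }

 public function findAll() {
  return $this->vms;
 }

 // The logic that used to live in the view helper (and, duplicated,
 // in the facade) now has a single home.
 public function isPathForVm($path) {
  foreach ($this->findAll() as $vm) {
   if ($vm->ownsPath($path)) {
    return true;
   }
  }
  return false;
 }
}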

All that done, Vadim and I looked at each other, and he said: "There was not one instance when a mistake or laziness did not come and bite us in the butt afterwards!"

We at Syneto always try to make our code better. We are proud of our accomplishments and we are not afraid to acknowledge our mistakes.

Disclaimer: Code descriptions, method names, functionality details are approximate in order to protect our project.

Thursday, December 26, 2013

End-of-Year Review: Economic Flashbacks

At the end of each year we tend to look back and analyze what has happened. In this post I will rant about economics and the financial crisis again.

It has been another year when, all over the news and in society in general, the financial crisis was a constant topic. We were, again, told that even though things seem to be going a little better, we are still in a crisis, and we, as a society, have to endure and support each other and accept low salaries, extra hours, no Christmas bonuses, and so on.

Fortunately for me, I work for an open-minded and realistic company. The rules are simple: if the company does well, the employees do well. If the company doesn't do well, we, the employees, will do our best to make it better. However, this is not the case for most companies in my country, Romania.

In my personal opinion there was never a crisis in Romania. At least not directly. We felt the effects of the crisis as exports dropped and banks decided to stop lending to people, diverting all their income to their mother countries. But that's it. There were no huge surprises or bankruptcies. No big companies closed, and unemployment was kept in check at one of the lowest levels in Europe.

However, the curtain of the phrase "We don't have money, it's a crisis!" allowed a lot of companies and politicians to hide behind those words and hurt a huge part of the country's population. We knew politicians were stealing huge amounts of money, but ordinary citizens and the media could do little. The president himself may very well be at the heart of groups making money end up in the pockets of well-placed, influential people. Of course, there is little or no proof, but that's how things work. Take, for example, the 10 billion dollars paid by the state for a highway that was never built, money that was never recovered. Money doesn't evaporate.

So, in the early years of the crisis the government, back then controlled by the democrats, decided to reduce all salaries and even pensions (though illegally). There was less money coming into the state's treasury, since fewer exports and reduced external investments had slowed the growth we were used to. We, the citizens, thought it would make the politicians steal less. It did not. They probably stole even more, thinking about the dark times to come, and took it away from the people. Well, they paid the price at last year's elections; finally, they were gone.

2013 was the first full year with a social-liberal-democratic government. An unlikely alliance with one goal: get rid of the democrats and the sitting president. They managed the former, unfortunately not the latter. I am sure they are also money-hungry and taking their share of all the state's businesses, but at least we stopped hearing about briberies like 40% of a 1-billion-dollar highway having to go to certain people, otherwise the project is assigned to another company willing to pay. We could also observe some legal changes in a better direction. We've seen most of the salaries and pensions restored to the values they had back in 2010, when they were reduced. Some of the illegal cuts were also paid back retroactively.

We also registered 3-4% economic growth. A very nice figure, in my opinion. And an absorption of about 30% of the European funds, compared to about 5% in 2012. A lot of money.

Even though I still disagree with at least half of the government's actions, I can see that some of them are good and are starting to produce positive results.

The private sector is a totally different story. Private companies can do whatever they want, and believe me, they are making huge profits, hidden under the curtain of the crisis slogan. About 40-45% of all employees in the private sector are working for the minimum salary imposed by law. That is about $250 per month, from which you have to subtract 20-30% in various taxes. At the other end of the scale, only 2.5% earn more than twice the average monthly salary, which is somewhere around 500-600 dollars.

Most employees are kept in the dark, and if they dare to ask for a raise they are simply told "It's a crisis." Meanwhile, many companies are reporting record profits. The people, a huge part of the country's workforce, think there is no other way, that they have to work for the minimum wage. They gave up ... unfortunately.

And the banks are part of all this. They are in no trouble, at least not here in Romania. They probably never were. They are doing just fine; in fact, I think they feel very well. All those house loans given out before the housing crisis are paying off, and people continue to pay them. And while in many countries interest rates were reduced to almost zero, here they stayed more or less the same, except for the effects of the mandatory EURIBOR or ROBOR reference rates, which were reduced. But the banks are actually taking more than before. For example, when I bought my house I paid about 5% EURIBOR + 3% as the bank's own rate. Now EURIBOR is at 0.3% and the bank's rate is about 4-6%, so the total went from about 8% to 4.3-6.3%, while the bank's own share grew from 3 points to 4-6. Yes, I pay less than in 2008, but the bank is getting more.

So, the banks are getting richer and wealthier while telling us we should be happy we pay less than before ... sure.

And the proof? Simple. You basically cannot get a loan now. The banks are giving out very few loans, mostly only those guaranteed by the state. For example, if you want to buy a new car and you have a salary in the top 2.5% of incomes, you still have little chance of buying anything but the cheapest cars out there. It doesn't matter that you could easily pay off the car in less than two years; they will not risk it. Why should they? They can sit back and do nothing for the next 20-30 years while all those huge house loans are being paid. Good business. They still support small credits, 200-300 dollars if you want a refrigerator, a new TV, or a new phone, but that's basically it. And they do that because the risk is basically zero: if you don't pay, the law permits them to take 30% or so of your salary, and they will, in a couple of years, get their money back. And if they won't, they've lost almost nothing anyway.

So, it is sad when you can see through the curtain of the crisis and see how many people are exploited unjustly and kept working for minimum wage just because there is a fictional crisis out there...

Friday, December 20, 2013

Programmer's Diary: The Bluetooth Hell Continues on Linux

In my previous post I described an elaborate way to configure your bluetooth device with the badly behaving Bluez and newer kernels. Well, things seem to be changing again.

My distribution, Sabayon, pushed yesterday a newer pre-release of Bluez 5.... something. Now pairing actually works from KDE; however, the bluetooth service segfaults when I connect my keyboard. The mouse, though, connects without a problem.

What is even funnier is that even though the service goes bye-bye, and I can no longer remove or add devices, once connected both the mouse and the keyboard keep working, with the service crashed!

csaba ~ # systemctl status bluetooth
bluetooth.service - Bluetooth service
   Loaded: loaded (/usr/lib64/systemd/system/bluetooth.service; enabled)
   Active: failed (Result: signal) since Fri 2013-12-20 19:08:15 EET; 14s ago
     Docs: man:bluetoothd(8)
  Process: 20461 ExecStart=/usr/libexec/bluetooth/bluetoothd (code=killed, signal=SEGV)
   Status: "Running"

Dec 20 19:08:01 csaba bluetoothd[20461]: Bluetooth daemon 5.12
Dec 20 19:08:01 csaba bluetoothd[20461]: Starting SDP server
Dec 20 19:08:01 csaba systemd[1]: Started Bluetooth service.
Dec 20 19:08:01 csaba bluetoothd[20461]: Bluetooth management interface 1.3 initialized
Dec 20 19:08:15 csaba systemd[1]: bluetooth.service: main process exited, code=killed, status=11/SEGV
Dec 20 19:08:15 csaba systemd[1]: Unit bluetooth.service entered failed state.

So there seems to be no problem, right? If the service crashes only after my devices manage to connect, all I have to be careful about is to connect my mouse first, and only then my keyboard. Wrong! Bluetooth devices have this habit of entering a sleep mode after a few idle minutes, then reconnecting automatically when moved or touched. I must have the service up and running at that point.

For lack of a better idea ... I mean, for lack of a lot of free time, because ideas I have plenty of, here is a one-liner crontab entry that will issue a start to your bluetooth daemon every minute. Just put it in /etc/crontab:

* * * * *       root    systemctl start bluetooth

If the service is already running, start will do nothing; if it is not, it will start it. Still, I have to remember to enable my mouse first, but in the worst-case scenario I will need to wait one minute to connect the keyboard... or grab my other, non-bluetooth, keyboard and restart the service.
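
A cleaner alternative - untested on my side, so take it as a sketch, and assuming your systemd is recent enough to support unit drop-ins - would be to let systemd itself restart the daemon whenever it dies, instead of polling from cron:

# /etc/systemd/system/bluetooth.service.d/restart.conf
[Service]
Restart=on-failure
RestartSec=5

After creating the file, a systemctl daemon-reload should make systemd pick up the override.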

Monday, December 16, 2013

Linux Tip: How to Pair your Bluetooth Device when Using Bluez 5.x and Kernel 3.11-12

It seems there are problems with Bluez 5.x and newer kernels, especially the post-3.10 ones. Since 3.13 is not out yet, these tips apply to kernels 3.11 and 3.12.

I specifically tested and applied the solution below to pair my Microsoft keyboard and mouse. My specifications are as follows:

[bluetooth]# version
Version 5.10
csaba ~ # uname -a
Linux csaba 3.12.0-sabayon #1 SMP Tue Dec 3 15:10:14 UTC 2013 x86_64 Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz GenuineIntel GNU/Linux

So if you were looking for things like:
I can't pair my bluetooth keyboard on Linux
My bluetooth mouse won't work on newer kernel
My bluetooth headset pairs but doesn't connect
My phone can be discovered but not connected or paired over bluetooth
... then here is how to do it with bluetoothctl.

csaba ~ # bluetoothctl 
[NEW] Controller 00:15:83:3D:0A:57 csaba-0 [default]
[NEW] Device 7C:1E:52:A8:47:74 Microsoft Bluetooth Mobile Keyboard 6000

Start bluetoothctl. Mine found the keyboard automatically, with the keyboard set in discoverable mode. However, it is possible you will need to enable scanning. Just write:

[bluetooth]# scan on

If it is not showing up at this point, you may have a problem with your bluetooth receiver, or the device is not set in discoverable mode (with that "connect" button pressed).

After my device was found, I went on immediately and tried to pair it. However, it asked for no PIN or anything; it just timed out with an error saying I had failed to provide the correct PIN code. This made me think maybe I had missed something. And I had.

[bluetooth]# default-agent
No agent is registered

There was no default agent. Now, I do not know exactly what these agents are, so if you do, feel free to comment below with details. However, we can easily start one.

[bluetooth]# agent on
Agent registered

And we try to pair now...

[bluetooth]# pair 7C:1E:52:A8:47:74
Attempting to pair with 7C:1E:52:A8:47:74
[CHG] Device 7C:1E:52:A8:47:74 Connected: yes
[agent] PIN code: 241178
[CHG] Device 7C:1E:52:A8:47:74 Modalias: usb:v045Ep0762d0013
[CHG] Device 7C:1E:52:A8:47:74 Modalias: usb:v045Ep0762d0013
[CHG] Device 7C:1E:52:A8:47:74 UUIDs has unsupported type
[CHG] Device 7C:1E:52:A8:47:74 Paired: yes
Pairing successful
[CHG] Device 7C:1E:52:A8:47:74 Connected: no

So we are halfway there: paired, but not yet connected. I also noted at this point that the "connecting" LEDs on my devices were still blinking. So the keyboard and mouse did not yet know about the computer. But I could see them...

[bluetooth]# info 7C:1E:52:A8:47:74
Device 7C:1E:52:A8:47:74
        Name: Microsoft Bluetooth Mobile Keyboard 6000
        Alias: Microsoft Bluetooth Mobile Keyboard 6000
        Class: 0x002540
        Icon: input-keyboard
        Paired: yes
        Trusted: no
        Blocked: no
        Connected: no
        LegacyPairing: yes
        UUID: Service Discovery Serve.. (00001000-0000-1000-8000-00805f9b34fb)
        UUID: Human Interface Device... (00001124-0000-1000-8000-00805f9b34fb)
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v045Ep0762d0013

So I went on and trusted the device.

[bluetooth]# trust 7C:1E:52:A8:47:74
[CHG] Device 7C:1E:52:A8:47:74 Trusted: yes
Changing 7C:1E:52:A8:47:74 trust succeeded

Then connecting worked. The blinking LEDs turned off. Great. And the connection remained ON.

[bluetooth]# connect 7C:1E:52:A8:47:74
Attempting to connect to 7C:1E:52:A8:47:74
[CHG] Device 7C:1E:52:A8:47:74 Connected: yes
Connection successful
[CHG] Device 7C:1E:52:A8:47:74 Modalias: usb:v045Ep0762d0013
[CHG] Device 7C:1E:52:A8:47:74 Modalias: usb:v045Ep0762d0013

I rebooted my computer, and the settings are persistent. I know these devices work correctly with my computer; I had them paired and working for the past year or so. In fact, if you had them paired before upgrading to the 3.11-12 kernels, they will keep working. Only the pairing step fails.

I could not find a way to make pairing work from KDE or other desktop environments.
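
For quick reference, here is the whole sequence that worked for me, condensed (replace the address with your own device's):

[bluetooth]# scan on
[bluetooth]# agent on
[bluetooth]# pair 7C:1E:52:A8:47:74
[bluetooth]# trust 7C:1E:52:A8:47:74
[bluetooth]# connect 7C:1E:52:A8:47:74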

I hope I could help some of you, and save you some precious time.

Have a nice day.