When I was a kid growing up in Arizona, I used to spend most of my time playing with the other kids that lived up the street. We would do the things that young American boys do: beat each other up, call each other names, and perform really dangerous tricks with our bikes. This was life before the discovery of girls.
One summer, one of the kids found the site of a freshly demolished building. There were several heaps of rubble and huge mounds of dirt: perfect for jumping. When he came back and told us all about it, we quickly jumped on our bikes, ready to go; all except one kid, who had a strange bulge in his bike tire. Knowing I could probably fix the problem, or at least ask my Dad to help me fix it, I jumped on his bike and quickly rode it home.
I dug around in Dad's toolbox and went to work taking the bike tire apart with a screwdriver. After hearing the noise, my Dad came outside to see what I was doing. Upon discovering that I was trying to fix the tire, he said to me, "Make sure you fix it right the first time, son. Otherwise, you'll look like a donkey."
Those words come back to me when I look at the direction Linux security is taking.
Recently, a post to the Bugtraq mailing list by security researcher Zaraza reminded the community of a problem in inetd, the Internet Super-Server: when a service receives too many connections within one minute, inetd shuts that service off and refuses further connections to it for ten minutes.
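For anyone who has never tripped over this limit, it lives in the per-service entries of /etc/inetd.conf: the optional number after "wait" or "nowait" caps how many server processes inetd will spawn for that service each minute (commonly 40 by default), and exceeding the cap is what triggers the ten-minute shutoff. The entries below are purely illustrative; the daemon paths and the exact default vary by system.

    # service  socket  proto  wait/nowait[.max]  user  server             arguments
    #
    # No cap given, so the implementation default (often 40 spawns per
    # minute) applies; flood this service and inetd disables it for ten
    # minutes:
    pop3       stream  tcp    nowait             root  /usr/sbin/ipop3d   ipop3d
    #
    # Cap raised to 200 spawns per minute for a busier service:
    ftp        stream  tcp    nowait.200         root  /usr/sbin/in.ftpd  in.ftpd -l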
This isn't a new problem. Daniel J. Bernstein (DJB), the University of Illinois at Chicago professor known for his venomous tirades and clever coding solutions, stepped up to solve this issue years ago by creating a special software package called ucspi-tcp, consisting of tcpserver and tcpclient. True to DJB form, he distributes the package through a Web page that details exactly how crappy inetd is, and why ucspi-tcp is better.
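As a rough sketch of how the replacement is meant to be used (the daemon and its path here are made up), a service normally listed in inetd.conf is instead started under tcpserver, which owns the listening socket itself and enforces a cap on simultaneous connections rather than inetd's spawns-per-minute limit:

    # Hypothetical POP3 daemon run under tcpserver instead of inetd.
    # "0" means listen on every local address, 110 is the port,
    # -c 100 allows at most 100 simultaneous connections,
    # -R skips the ident (RFC 931) lookup of the remote user.
    tcpserver -c 100 -R 0 110 /usr/local/bin/popd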
True to the UNIX way, DJB designed and released another small program that performs one specific task very well, rather than fixing the problem in the applications themselves or in the lower-level system internals. In doing so, he simply introduced another program with an entirely new set of limitations, when programmers should be moving toward applications that are network-aware in their own right. In essence, a hack was resolved with another hack.
The modular kernel in UNIX systems is another example of a hacked solution. It was implemented with the best of intentions: to conserve kernel memory by using modular pieces of code that can be inserted as needed, and removed again, while the system continues to run. In theory, this is a great idea.
However, modular code has one big drawback. In kernels configured to support it, the administrative user can load modules on the fly. That sounds a lot like a feature, until one considers someone who has gained administrative access illegitimately: if a person of questionable integrity can load a module, the system can be made to lie at its lowest level.
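To make the risk concrete, here is a minimal sketch of a loadable module, using the module API of reasonably recent kernels; the file name and the harmless print statement are mine. The point is not what this module does, but where it runs: replace the printk with rootkit code and it executes with the full authority of the kernel, able to filter what every tool on the system is told.

    /* example_module.c -- illustrative only */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    /* Runs in kernel context the moment the module is loaded. */
    static int __init example_init(void)
    {
        printk(KERN_INFO "example: now running inside the kernel\n");
        return 0;
    }

    /* Runs when the module is removed. */
    static void __exit example_exit(void)
    {
        printk(KERN_INFO "example: unloaded\n");
    }

    module_init(example_init);
    module_exit(example_exit);

Loading it is a one-line insmod command for whoever holds root, which is precisely the problem.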
Trustworthy Linux
A few different groups of people recognized this problem. Those in the Solaris community, for example, decided that the best way to handle it was to use Role-Based Access Control to strip the administrative user of the ability to load kernel modules, an approach that could be extended to other parts of the system as well.
In the Linux community, the general consensus was that the issue was best dealt with by building a static kernel without loadable module support. Remove the functionality, and you remove the potential for abuse.
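In practice that just means answering "no" to loadable module support when configuring the kernel; in a kernel .config file the result looks something like the excerpt below, with the surrounding comment lines following the usual .config layout. With that option off, insmod and modprobe have nothing to hook into.

    #
    # Loadable module support
    #
    # CONFIG_MODULES is not set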
But this was just another hurdle on the obstacle course, and it was only a matter of time before someone cleared it: in this case a hacker named Silvio Cesare, who demonstrated with an alarming degree of success that one can patch a statically compiled kernel in memory while it runs. As time progresses, this will probably evolve into the standard means of putting a backdoor in a Linux system.
The solution to this problem exists in hardware, not software. The Linux community needs to participate in the Trusted Computing Group, an industry standards body made up of several big technology players working to create secure computers from the hardware level up.
Alternatively, the community needs to start its own body to solve the problem at the hardware level. Instead, we will probably see another hack.
My Dad could teach the community a lesson
Hacked solutions often end up providing a temporary fix to the problem. And they almost never stand the test of time.
We use crafty marketing icons to signify our operating system of choice -- a penguin or a devil or whatever -- but when I look around at the solutions we put together, I cannot help but picture our mascot as a gray, big-eared, bucking and braying jackass.
It is, essentially, an arms race of hacking. We hack together something; the solution gets hacked. Rather than thinking through the solution, and correctly fixing the problem, we hack together another solution. Which gets hacked yet again. We need to stop hacking, and start rebuilding correctly.
Otherwise, we'll all look like donkeys.
Source: SecurityFocus