Friday, 26 February 2016

Why touch-tone phones needed both # and *

I was curious to find the answer to this rather basic (but equally inconsequential) question, which is also an obsolete one in this day of smartphones. At any rate, I was only able to arrive at an answer by piecing together information from a variety of sources. So, to make things simpler for anyone else who ever wonders the same thing, I'm putting my answer up here. It's a deduction, so I may be wrong, but it makes enough sense that I'm fairly confident in it.

The first thing to be clear about is the purpose of the two buttons, which we so rarely use. Interestingly, the engineers who designed them weren't entirely sure what they would be used for either, but they anticipated that the keys might be necessary for some future kind of telephone-computer interaction, and toured a number of businesses canvassing for ideas about what applications might be developed and what they would need. They did know that the phone company would be implementing vertical service codes, which were distinct from phone calls and would require a special prefix. They could have defined a reserved numeric prefix (like the later infamous 10-10 numbers), but using the * key turned out to be more pragmatic. (I still remember the *67 and *69 VSCs from my youth.)

The biggest use of having a non-digit input was that it enabled entry of variable-length digit sequences in applications, such as a conference ID, or allowed more than ten choices in a voice menu (and I pity you if you need them). By ending with the # key, the system can distinguish 1 from 10 and know when you've finished making your choice.
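
The role of the terminator can be sketched in a few lines of Python. The `read_choice` helper here is purely hypothetical, standing in for whatever the phone system does with incoming key presses:

```python
def read_choice(keypresses):
    """Accumulate digits until '#' is pressed; return the full entry.

    `keypresses` is any iterable of single-key strings, standing in
    for the DTMF digits a caller punches in one by one.
    """
    digits = []
    for key in keypresses:
        if key == "#":          # terminator: the caller is done
            break
        digits.append(key)
    return "".join(digits)

print(read_choice("1#"))    # menu option one
print(read_choice("10#"))   # menu option ten, unambiguously
```

Without the '#', the system would have to guess (or time out) to know whether '1' was the whole entry or just the first digit of '10'.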

I knew that much, but what irked me is why there had to be two symbols. Neither of the two uses for the keys requires both of them, so what's the point of having two?

The answer lies in the way the touch-tone system was built, a system called dual-tone multi-frequency signalling (DTMF). Unpacking the name, there are two tones that are sent simultaneously with each key press (a low tone and a high tone). Since it would have been science fiction in the 1960s to use a microprocessor to emit the proper tones, the solution had to be built using basic electronics. And the elegant way to do that (once ergonomic research, and the need to maintain the rotary phone's alphabetical order, had dictated that a 3-by-3 square of digits with 0 beneath was the layout to use) was to have the row of the keypad determine the low tone and the column determine the high tone. So voilà, the first touch-tone phones had a layout that looked like this:

          1209 Hz   1336 Hz   1477 Hz
697 Hz       1         2         3
770 Hz       4         5         6
852 Hz       7         8         9
941 Hz                 0

So, pressing 5 generates tones at 1336 Hz and 770 Hz, and so on. But what any efficiency-minded engineer notices is that the 941 Hz tone is underused, serving only for 0. And this is the key reason why two function keys were added: it cost nothing in terms of complexity, because there were essentially already two unused buttons built into the system. If there had been an added cost to adding two keys, the need would have had to be justified, and perhaps they wouldn't have been implemented. As it was, since they were revisiting the original design anyway, they already had cause to future-proof things. So in fact, it isn't so much that they needed to add * and # to the keypad; it's more that they were essentially already there. (In fact they added not two function keys but six: a fourth column with keys A-D at 1633 Hz, which was only ever used by the military, to set call priority. Again, the choice brings symmetry, since there are four rows and four columns in the full design, and no leftover unused frequency pairs.)
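
The whole scheme is small enough to write down as a lookup table. A quick Python sketch, using the standard DTMF frequency assignments and the full four-column keypad:

```python
# The key's row picks the low tone; its column picks the high tone.
ROWS = [697, 770, 852, 941]          # Hz, top row to bottom row
COLS = [1209, 1336, 1477, 1633]      # Hz, left column to right column

KEYPAD = [
    ["1", "2", "3", "A"],
    ["4", "5", "6", "B"],
    ["7", "8", "9", "C"],
    ["*", "0", "#", "D"],
]

def tones(key):
    """Return the (low, high) frequency pair for a keypad key."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            return (ROWS[r], COLS[row.index(key)])
    raise ValueError(f"not a DTMF key: {key!r}")

print(tones("5"))   # (770, 1336), as in the example above
print(tones("0"))   # (941, 1336): without * and #, 941 Hz serves only 0
```

Note how every (row, column) pair in the 4x4 grid is used exactly once, which is the symmetry the post describes.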

And, in point of fact, having two function keys would have allowed some large organisations to implement their own, internal vertical functions, prefixed with the # key, while still allowing access to the phone company's VSCs on the star key. (In practice, large organisations would use a PBX, which probably wouldn't allow direct access to the external telephone network anyway, as anyone who remembers dialling 9 before making an outgoing call from a hotel room can attest.)

As for why the symbols * and # were chosen, it seems largely to do with their familiarity, since standard typewriters had them, although if they had only chosen @ instead of *, as they might well have done, we would have had an uncanny precursor to Twitter!

Posted by jon at 7:01 PM in Computers 

Wednesday, 18 March 2015

On Smartwatches

Growing up in the 1980s (and having a kid's perspective at that time), it seemed inevitable that digital watches would replace analog in time. They were high tech, the LCD screens seemed so cool, and the bulky buttons looked like something out of a (1980s) sci-fi movie. And they could do so many things—alarms, running pace, stopwatches. The coolest kids (read: the biggest nerds) even had calculators!

The technology continued to advance into the 1990s, but already fashions were beginning to change. The last Casio watch I had was analog, but you could rotate a polarized bezel to reveal a digital read out, into which you could laboriously enter simple text that would scroll across the screen. Pretty impractical, but at the time it was impressive and cool. It also turned out to make a nice transition for me into the world of analog, and from then on I opted for the more "grown-up" look, and never wore a digital watch again.

In the last twenty years or so, in fact, I've hardly ever come across anyone wearing a digital watch. Partly this is due to being grown up, I'm sure, and partly due to having moved from the States to France for most of that period. But by and large, no one would argue the point: it is the analog watch, and the high-end mechanical analog watch especially, which dominates in advertisements and airports. Digital watches seem to have been a passing fad, one which left the watch industry more or less in the same state it was in before. (The low-cost, high-accuracy quartz movement, on the other hand, has been a lasting revolution, though most quartz watches today are analog, not digital.)

As anticipation builds for Apple's new foray into the "smartwatch" market, then, I can't help but get a distinct feeling of déjà vu. A recent article on the BBC ponders, "will the public be satisfied with tech-enhanced classical designs, or will people want fully-fledged apps on their wrists? If the answer is the latter, traditional watchmakers might still be caught out."

I can't help but think that people wanting read-outs of weather and Twitter on their watches are bound to be the same kind of crowd that wanted a calculator on their watch thirty years ago—and I expect this to be a fad which will follow a similar trajectory. Not that that should worry Apple—Casio sure has managed to sell millions and millions of watches, and during the period when Casio watches seemed really high tech and cutting edge, they probably did so with pretty healthy margins on their higher-end models.

Baselworld managing director Sylvie Ritter hits the nail on the head in the above article when she points out, "here we talk of timelessness, there they talk of planned obsolescence." This is the enduring competitive advantage Rolex & co. have over Apple and its imitators: I passed a shop window in Mayfair recently showcasing vintage Rolexes from the 1950s to the present day, each one commanding a high resale value among collectors. A "vintage" smartwatch probably won't even turn on in ten years (battery technology being what it is), and even if it did, its communication protocols will all be obsolete, making its apps unable to run. Such products have their market, but I anticipate that at the higher end, at least once the first rush of novelty has passed, most watch buyers will still be looking for something a little more enduring.

Posted by jon at 11:59 PM in Computers 

Friday, 16 August 2013

Site outage

Apologies to loyal readers for the recent site outage. Sometimes these things simply cannot be avoided, no matter how skilled a Linux administrator you are—even when the cause of the problem is easily identified:

A temporary wire has been put in place to reconnect us to the outside world. I expect another outage when the temporary wire is replaced with a permanent one, though it will probably be quite short in comparison to the three-day outage we just experienced (Thursday's feast of the Assumption, being a holiday in France, contributed to the delay).

Posted by jon at 1:00 PM in Computers 

Wednesday, 28 November 2007

Back in the saddle

I am pleased to report that, after a long hiatus, I am finally back in front of an IDE and writing actual software code again, and it is a very welcome return. Indeed, I had a long break from coding when I switched jobs: first came a three-month period in which to shore up my previous project and train my successor (an important enough task, but as this blog has shown, one which still left me plenty of time for pursuits such as Chinese). Then, once I was put on a new project with my new employer, the first month was spent drawing up technical specifications. Which is all well and good, but for whatever reason drawing up UML diagrams and Word documents is far less satisfying than producing functioning code.

My new project has a few milestones in it to make things a little more interesting, too. Besides having a junior developer under me, I also have root access on our test server, by far the most powerful box I have ever had superuser access to: a quad 3GHz Intel Xeon with 8GB of RAM (nice!), running Red Hat Enterprise Linux. The middleware/database/operating-system breakdown of my last three projects shows what a wide overview of the industry I'm getting:

Middleware      Database   Operating System
Oracle AS       Oracle     Red Hat Linux
BEA Weblogic    Ingres     Sun Solaris
IBM WebSphere   DB2        AS/400

(My own sites run on JBoss/DB2/Linux, for those keeping track at home.) We'll also be using SOAP web services to communicate with .NET on this project, so that's another interesting aspect as well.

Posted by jon at 7:02 AM in Computers 

Monday, 28 July 2008

Computing in the clouds

This website and the others I run have always been a 'do-it-yourself' affair. Partly this is out of my own passion for technology, and partly as a showcase of my ability to design and implement an enterprise-level architecture (a J2EE app server and DB2 database running on Linux, with single-sign on, SSL, JAAS security, etc., etc.—it all makes me drool!). Even the server hardware I assembled by hand out of the individual parts.

The one thing I haven't been able to do, though, is host video. On the software level there is no difficulty in doing this: just put the video files in the right folder. On the other hand, the bandwidth costs (in terms of performance, but also in terms of money if I were to try to address it directly) are quite high. So heretofore, my home movies have been shot in a beautiful hi-resolution 16:9, and then horribly downscaled and sometimes even distorted by Google Video or YouTube. I have not been happy with the results, but this was the only way to share my videos.

Now, however, my tech interests and video hosting needs have come together in an elegant solution, the programming of which was the most fun I'd had coding in a while. The buzzword these days in the tech press is "Cloud computing"—a phrase which usually is just a fancy way of saying you pay to subscribe to something. But in the case of Amazon's Simple Storage Service, generally known as "S3", they really deliver on the promise of the hype.

S3 puts the massive computing resources of Amazon in the hands of anyone who wants to use their storage and bandwidth, for any purpose. You can encrypt and store offsite backups, share documents, or... host high-bandwidth content for your website, such as videos. Amazon handles the redundancy issues, and Amazon has the capacity to deliver at however large a scale you need. And because of economies of scale, the cost is very low.

What S3 does not do is provide much in the way of tools with which to use the service. There are more and more third-party tools available, but I don't necessarily trust third parties with my data. (Besides home movies, I also plan to use S3 for off-site backups.) So I have coded up my own file-manager-like application to handle uploading and downloading content to S3, keeping my private data separate from the material I want hosted publicly. (It also encrypts the private data, more out of principle than because I don't trust Amazon. That also added to the technical challenge, which made writing it more fun!) As a result I am now able to offer full-quality versions of my home movies on my website, without breaking the bank. Cool!
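
For the curious, the heart of a hand-rolled S3 client of that era was Amazon's request signing: an HMAC-SHA1 over a canonical "string to sign", base64-encoded into the Authorization header. The sketch below is a simplified reconstruction of the (now legacy) Signature Version 2 scheme, with hypothetical credentials, and it omits the x-amz header canonicalisation; check Amazon's documentation before relying on any of it:

```python
import base64
import hashlib
import hmac

def s3_signature_v2(secret_key, verb, content_md5, content_type, date, resource):
    """Compute a (simplified) legacy AWS Signature v2 for an S3 request.

    The string-to-sign joins the request essentials with newlines;
    the signature is base64(HMAC-SHA1(secret, string_to_sign)).
    """
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical request, for illustration only.
sig = s3_signature_v2(
    "EXAMPLE-SECRET", "GET", "", "",
    "Mon, 28 Jul 2008 10:09:00 GMT", "/my-bucket/video.mp4",
)
print("Authorization: AWS EXAMPLE-KEY-ID:" + sig)
```

The nice property of this design is that the secret key never travels over the wire; Amazon recomputes the same HMAC on its side and compares.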

Posted by jon at 10:09 PM in Computers 

Friday, 8 September 2006

Cool things you can do on a Mac


This entry on Silver Mac had me calling Emilie over to watch more than once. The inverted screen (ctrl-opt-cmnd-8) and slow minimise (shift) are my favourites, but the text drag will probably be handy if I can make it a habit. Anyway, recommended for Mac users.

UPDATE 14.11.06 – Here's a really cool thing you can do on your Mac: download iTunes 7! This is a big update and really cool. Whereas up to now iTunes Music Store has been basically a customised web application posing as a thick client, the new 3d stuff really makes it an honest thick-client-that-happens-to-use-the-web in my mind. And the UI has been tweaked in so many places to make it more pleasant to use (especially for a podcast power user like myself) that I am really impressed. If you use iTunes, get this new version!
Posted by jon at 7:21 PM in Computers 

Thursday, 10 August 2006

The Mac Pro: Drool-worthy

The new Mac Pro maxes out at 16GB of RAM and 2TB of internal storage. Yikes! That puts this sucker right up there with Sun and IBM in the UNIX workstation market, and at a very competitive price. It makes me wish my job put me in the UNIX workstation users' market, but unfortunately you don't need a big desktop computer to program a big server.

The inside is pretty cool too, with a layout that just lets you snap the hard disks right in, with no cables. I anticipate I'll be sticking with Linux on AMD for my main system for some time to come, but this would be one slick machine for its lucky owners.

Posted by jon at 7:00 PM in Computers 

Friday, 2 July 2010

Ebooks: The Anecdotal Evidence

Recently I received a pre-course reading list for my upcoming studies at Oxford University. Since I don't have access to the Bodleian Library yet, and since I don't live in a city where it's easy to get academic books in English, I immediately turned to my iPad to see how far I could get with electronic books alone. As the results below show, all I can say is that it's a darn good thing that there's a Kindle app on the iPad!

Kindle comes out way, way ahead on selection, and the shopping experience (just the act of searching out these books) was also much easier on Amazon than in iBooks, where you have to go through a rather laborious search process on the device itself just to find out that they don't carry the book you want.

On the other hand, iBooks offers a better, more attractive reading experience than the Kindle reader. And, importantly, it uses the open epub standard, as opposed to Kindle's proprietary format. This means that books can be bought in other places and imported onto iBooks, too. I have already purchased books from O'Reilly this way, and it's a big plus. Given the choice, I would rather read in iBooks.

That said, for a case like this where I had a dozen books to hunt down, there was no way I was going to take the time to individually identify each publisher and go to their site to see whether they had an e-book publishing deal with another vendor that sells in the epub format. That's almost as much work as patching a Linux kernel! For once, Apple has been beaten in the simplicity game, and Amazon's getting a lot more ebook sales from me because of it.

Posted by jon at 6:58 PM in Computers 

Tuesday, 20 January 2009

Flash Annoyance Solved

While I hope that no one other than me was ever aware of it, there was a bug on this website a few months ago that started when I added an Adobe Flash object to the page (the Nike+ widget over on the right). If you kept the page open for a long time (like twenty minutes or more), it would suddenly prompt you for a password! (Which, whether you entered one or just hit cancel, didn't do anything.)

Besides looking unprofessional, this worried me for another reason: many of the websites on this server are not open to the public and require passwords, while other sites are public and don't. The default policy is to close addresses to the public, as this is a standard security practice. So, if you enter the site's address followed by something random, it will ask you for a password too (and only if you have one will you then be informed that no such page exists).

So what was going on? Why was Nike's flash widget seemingly trying to take browsers onto parts of my site they have no business going to?

With a little sleuthing courtesy of Firebug (which, besides being the best tool in the world for debugging website development, also lets you monitor all the HTTP requests your browser makes), I was finally able to find the answer:

It's actually a quite innocent, if somewhat poorly thought-out, facet of Flash: although each <object> can define whether cross-site scripting is allowed, you can also have a global configuration file called crossdomain.xml. This can be handy, for instance, on sites like Twitter or Blogspot, where individual clients may add their own Flash objects, if you want to enforce a global policy. (NB: I am no Flash expert, so this is a vague description that might contain inaccuracies; if you plan on doing something with this file on your site, find proper documentation!) The hiccup is that since I did not have a file called crossdomain.xml, the server considered the request an unauthorised access and asked unsuspecting users for a password.
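
For reference, a wide-open policy file looks roughly like this; this is a sketch from memory of Adobe's documented format, so verify against the current specification before deploying one:

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Allow Flash content served from any domain to make requests here.
       Even a restrictive (or empty) policy file stops the server from
       answering the probe with a password challenge. -->
  <allow-access-from domain="*"/>
</cross-domain-policy>
```

In my case the content barely matters: the point is simply that the URL resolves publicly instead of tripping the authentication wall.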

In the end, therefore, I put up a dummy (non-functional) crossdomain.xml file and marked it public, so the random password dialog will no longer turn up. But boo to Adobe for requiring webmasters to add a file to their servers, and penalising end-users who simply visit the site if they don't. This isn't like robots.txt, which 404s gracefully since it is only requested by bots; in this case a 401 actually bothers the user, due to a poor assumption on the part of Adobe's engineers (viz., that a missing file would always produce a 404; they did not take the legitimate possibility of a 401 into account). And thank goodness my default security is set to BASIC authentication: had I used FORM, the user would actually have been redirected to a new page!

Posted by jon at 7:59 AM in Computers 

Sunday, 7 September 2008

Get Perpendicular!

A few years ago a story appeared on Slashdot, announcing that Hitachi had developed a new technology for hard disk storage, which would allow a huge increase in capacity.

Normally such announcements come along, are noticed, and then we forget about them until they make their way into consumer products, years later. But Hitachi was excited about this discovery. Unusually excited... as in, Schoolhouse Rock, singing and dancing excited. So much so that even years later, their surreal presentation needs to be seen to be believed (requires sound).

Posted by jon at 10:10 AM in Computers 

Monday, 8 December 2008

Hacked by Chinese, Or: Curse You!

If any of you tried to get on James' website, or any of the other password-protected ones, between Thursday and Saturday last week, you may have found it impossible to log in. I was aware of the problem myself, but little did I expect to uncover what turned out to be its cause: Chinese hackers had partially infiltrated my system! Luckily, 'partially' is the key word here: root access was never compromised, which allowed me to quickly remedy the situation and block the IP addresses in question from ever accessing the machine again. I also changed all the passwords on the machine to adhere to the highest security level, to ensure that no such easy cracking can happen again.

The account that was cracked had a very poor password, because I had never intended for it to be exposed to the outside internet. However, although the port the account was used for was blocked by the firewall, I had not realised that an ssh login, on an open port, would still be possible! I never used that account for remote login, so it didn't occur to me that someone else might! Looking at the failed logins in auth.log, apparently people overlook this with a lot of other software packages too. Thankfully, the account in question had very minimal permissions on the server, and an audit showed that nothing nefarious had been done with it before I was able to fix everything. Still, it was quite a wake-up call as far as internet security goes! (From my description you may think that I stupidly had logins enabled on a daemon account, but that's not exactly what was going on; I just don't want to get into too many details.) I've now implemented a much larger /etc/hosts.deny based on blacklists widely available on the internet (such as here), in addition to tightening my passwords, which was the most important thing.
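
The format of /etc/hosts.deny (TCP wrappers) is simple: a daemon list, a colon, and a client list per line. A small sketch with placeholder addresses; consult the hosts_access(5) man page for the full syntax:

```
# Deny sshd to specific hosts
sshd: 203.0.113.45, 198.51.100.7
# Or shut an entire network range out of all wrapped services
ALL: 192.0.2.0/255.255.255.0
```

Entries in /etc/hosts.allow are checked first, so known-good addresses can still be whitelisted above a broad deny.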

Above all, though, it was a reassurance to me: I now know that I am able to deal with a security breach calmly and thoroughly, and that my UNIX knowledge is of sufficient depth that I feel certain not only that all effects of the attack have been purged from my system, but also that any future attacks of the same nature will not succeed. To be able to say with confidence that the problem has been handled is no small feat, and I can only imagine how stressful it would be without the certainty of having all my bases covered. There is definitely something to be said for having an open, transparently-functioning operating system when you need to figure out what has and what hasn't been tampered with!

And now, I have the oddest urge to go read The Cuckoo's Egg for some reason :-)

Posted by jon at 7:45 AM in Computers 

Monday, 15 September 2008

On Cloud Computing and Trust

I have already written about how I enthusiastically adopted Amazon S3 as a solution for off-site backups, and for publishing heavier content than my home server could handle, such as video. The other day one of the hosts of Buzz Out Loud mentioned that he didn't trust his personal data to the cloud just yet. He could see that it was the way of the future, but was not yet comfortable with the trust issues. Then this week John C. Dvorak echoed the same concerns on TWiT.

They are right of course, and I don't trust Amazon with my personal data either. I have a lot of personal data to back up, such as every e-mail I wrote or received from 1998 to around 2005 (I've let GMail handle it since then, where I technically ought to back it up via POP, but haven't...), not to mention other personal identifying data that I would not want in the wrong hands. It is not a question of trusting Amazon to abide by the terms of service—I do trust them as a company, but no company can be immune from a rogue employee or corporate espionage, and it is not easy to trust their security procedures unless you can audit them yourself at whim, which is a practical impossibility.

My solution to this problem is one that your average user, even a geek like Tom Merritt, probably can't manage: I wrote my own S3 client which applies strong encryption to the I/O stream as it leaves my computer. Amazon thus stores for me a few gigabytes of what are literally useless ones and zeroes, but when I download them with my special client they are decrypted on the fly back into the original files. Such a solution requires not only the knowledge of how to code one's own S3 client, but also enough knowledge of cryptography and computer security to know whether a solution is really secure, or whether it could be cracked by those with enough resources. I'm fortunate to be in a position to do this myself.
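
The idea, in toy form: derive a keystream from a secret and XOR it into the bytes on the way out; the identical operation reverses it on the way back. The sketch below is not the post's actual client, and a SHA-256 counter keystream is only an illustration of the symmetric-stream principle, not something to use for real security (a vetted AES implementation is the right tool):

```python
import hashlib

def keystream_xor(key, data):
    """XOR `data` with a SHA-256 counter keystream (its own inverse)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        # Each keystream block is the hash of the key plus a counter.
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

secret = b"contents of my key file"
plaintext = b"home movie bytes destined for S3"

ciphertext = keystream_xor(secret, plaintext)   # what Amazon would store
restored = keystream_xor(secret, ciphertext)    # decrypted on the fly

print(restored == plaintext)
print(ciphertext != plaintext)
```

Because encryption and decryption are the same XOR, the client can stream data in both directions with one routine; everything hinges on keeping the key file safe, which is exactly the caveat at the end of the post.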

I'm sure that at some point there will be (and maybe there already is) a client program you can download to do this for you, where you set your own key phrase. But unless you audit the entire source code of that program, you can't be sure it isn't sending your key out to some third party. An open-source solution would allow you to check this, but frankly the time it would take to audit all the code would be longer than the time it takes to write your own (at least it was in my case). In the absence of a widely audited and popularly acknowledged open-source way of encrypting the stream before it leaves your computer, we'll never get beyond the issue of trusting the company you're giving your data to.

(The only problem, now, is keeping my source code to my client and my key file safe, since if I lose those I would be left unable to download my own backups!*)

* Don't worry, I have worked out a solution for this, but I'm not going to post it here!

Posted by jon at 7:40 AM in Computers 

Saturday, 7 June 2008

The Wrath of Zeus

Sincerest apologies to loyal readers, who have been unable to access the site since Monday. A thunderstorm took out our DSL modem, which left us without internet or television until I could get it replaced. All is well now, though, and I now have a surge protector for the phone jack as well as the power, so this shouldn't happen again.

So, I'm glad to have gotten the TV back on just in time for the French Open finals and the Euro 2008 soccer championship. Now I just need to sort through the massive backlog of podcasts I have accumulated over the week.

Posted by jon at 5:01 PM in Computers 

Tuesday, 5 December 2006

A very good (English-language) site on the AS/400

Motivated by the chance of winning an old AS/400 on eBay, I was browsing the web the other day when I stumbled upon a site dedicated to the AS/400 that I hadn't known of before, and which impressed me with its breadth. So I resolved to mention it here, so that a few more people might learn of what is, for the moment, the most informative site I have found on the AS/400 outside of IBM's own.
Posted by jon at 10:03 PM in Computers 