
Sunday Times Brain-Teaser 660 – An Efficient Type

by BRG on January 14, 2021

by R. Postill

From The Sunday Times, 3rd March 1974

My typewriter had the standard keyboard:

row 1: QWERTYUIOP
row 2: ASDFGHJKL
row 3: ZXCVBNM

until I was persuaded by a time-and-motion expert to have it ‘improved’. When it came back I found that none of the letters was in its original row, though the number of letters per row remained unchanged. The expert assured me that, once I got used to the new system, it would save hours.

I tested it on various words connected with my occupation — I am a licensed victualler — with the following results. The figures in parentheses indicate how many rows I had to use to produce the word:

BEER (1)
STOUT (1)
SHERRY (2)
WHISKY (3)
HOCK (2)
LAGER (2)
VODKA (2)
CAMPARI (2)
CIDER (3)
FLAGON (2)
SQUASH (2, but would have been 1 but for a single letter)

Despite feeling a trifle MUZZY (a word which I was able to type using two rows) I persevered, next essaying CHAMBERTIN.

Which rows, in order, did I use?
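
The comments below discuss a published Python solution by its line numbers; that program is not reproduced on this page. Purely as an illustration of the kind of approach involved, the sketch below filters partial letter-to-row assignments against each clue word in turn and then completes the keyboard so that the rows still hold 10, 9 and 7 letters. The names and structure here are invented for this sketch; it is not the code the commenters refer to.

    from itertools import product

    rows = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
    home = {c: i for i, r in enumerate(rows) for c in r}
    sizes = [len(r) for r in rows]   # 10, 9 and 7 letters per row

    # (word, number of rows needed on the rearranged keyboard)
    clues = [("BEER", 1), ("STOUT", 1), ("SHERRY", 2), ("WHISKY", 3),
             ("HOCK", 2), ("LAGER", 2), ("VODKA", 2), ("CAMPARI", 2),
             ("CIDER", 3), ("FLAGON", 2), ("SQUASH", 2), ("MUZZY", 2)]

    def moves(c):
        # a letter may move to either of the two rows that are not its home row
        return [r for r in range(3) if r != home[c]]

    def candidates():
        # filter partial letter -> new-row maps against each clue word in turn
        partials = [dict()]
        for word, need in clues:
            kept = []
            for a in partials:
                new = sorted(set(word) - a.keys())
                for combo in product(*(moves(c) for c in new)):
                    b = {**a, **dict(zip(new, combo))}
                    used = [b[c] for c in word]
                    if len(set(used)) != need:
                        continue
                    # SQUASH: dropping one letter would leave it on a single row
                    if word == "SQUASH" and not any(
                            len(set(used[:i] + used[i + 1:])) == 1
                            for i in range(len(used))):
                        continue
                    kept.append(b)
            partials = kept
        return partials

    def completions(a):
        # place the remaining letters so the row sizes stay 10, 9 and 7
        rest = sorted(set("ABCDEFGHIJKLMNOPQRSTUVWXYZ") - a.keys())
        for combo in product(*(moves(c) for c in rest)):
            b = {**a, **dict(zip(rest, combo))}
            if all(sum(v == i for v in b.values()) == sizes[i] for i in range(3)):
                yield b

    for a in candidates():
        for b in completions(a):
            # report 1-based row numbers for the letters of CHAMBERTIN
            print("CHAMBERTIN:", " ".join(str(b[c] + 1) for c in "CHAMBERTIN"))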


5 Comments
  1. Brian Gladman permalink

  2. Frits permalink

    @Brian, line 61 tests if at least one of the letters in “SQUASH” is in the first row (which is not the same as what line 60 describes)

    Do you have any idea why lines 17-21 seem to be faster than:

    but have a similar speed to

  3. Brian Gladman permalink

    @Frits Thanks for the bug fix. One of the nice things about publishing code is that it often means more eyes on it. Had I not published this code I suspect that I would have remained in blissful ignorance of that bug forever!

    The only good answer that I have to your question about lines 17 to 21 is that I tried several approaches, including comprehensions, and chose the fastest of them 🙂 But I hadn’t noticed how close one of the comprehensions you tested was to my existing code, and I have hence adopted it as it is neater (thanks for looking at this).

    Python strongly favours simple code, so I have got into the habit of writing code without comprehensions and then converting it into a comprehension and doing a speed test. It’s usually a surprise when I discover which is best, since I have not found any magic way to predict which will be faster.
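
    As a toy illustration of that habit (not the puzzle code under discussion), the two versions below do the same work and can be raced with the standard timeit module:

        from timeit import timeit

        def with_loop(n):
            # plain for-loop version
            out = []
            for i in range(n):
                if i % 3:
                    out.append(i * i)
            return out

        def with_comp(n):
            # the same work written as a list comprehension
            return [i * i for i in range(n) if i % 3]

        # which one wins can vary with the Python version and the loop body
        print(timeit(lambda: with_loop(10_000), number=500))
        print(timeit(lambda: with_comp(10_000), number=500))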

    It would be worthwhile looking at this. Comprehensions are designed to let the looping itself run at C speed inside the interpreter, but if anything in the body requires calls back into pure Python then this will almost certainly defeat the speed gain, and I suspect that this might be what is happening here. This could be checked by disassembling the generated Python bytecode, but my TODO list is already too long so it’s not something I have done.
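
    For anyone who does want to make that check, the standard dis module is enough; a minimal example (names invented for illustration):

        import dis

        def squares_loop(n):
            out = []
            for i in range(n):
                out.append(i * i)
            return out

        def squares_comp(n):
            return [i * i for i in range(n)]

        # compare the bytecode the interpreter executes for each version
        dis.dis(squares_loop)
        dis.dis(squares_comp)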

    It’s the same with PyPy and CPython. Sometimes the gains from running with PyPy are truly enormous, but other times there is very little difference, and sometimes there is a large performance drop. On this one CPython ran it in 120 milliseconds on my system, while PyPy took 460 milliseconds!

    • Frits permalink

      @Brian, thanks for the long answer.

      For me simple code means comprehensions; I don’t have problems reading them, but I don’t (yet) start with them from scratch. Normally I start with a lot of for-loops to see if it works and later on turn the code into comprehensions.

      I also noticed that {…} can be faster than set(…) and using “in [0, 1, 2]” or “in {0, 1, 2}” sometimes seems to be more efficient than “in range(3)”.
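
      A quick way to see those differences on a particular machine is a timeit run over each membership test; this is just an illustrative snippet and the numbers will vary with the Python version:

          from timeit import timeit

          # literal containers in a membership test can behave differently
          # from the equivalent constructor call or a range object
          tests = ("2 in [0, 1, 2]", "2 in {0, 1, 2}",
                   "2 in range(3)", "2 in set((0, 1, 2))")
          for test in tests:
              print(test, timeit(test, number=1_000_000))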

      • Brian Gladman permalink

        @Frits, yes, Python is full of surprises when in pursuit of speed! Jim Randell’s discovery of a fast bit count using bin() is an interesting example of this, because it uses a string method, and strings are a type whose methods are largely implemented in C.
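
        The bin() trick being referred to is presumably something along these lines (a from-memory sketch, not Jim’s exact code); str.count does its counting in C, which is why it can beat a hand-written Python loop, and from Python 3.10 onwards there is also a built-in int.bit_count():

            def bit_count_bin(n):
                # count the '1' characters in the binary representation
                return bin(n).count('1')

            def bit_count_loop(n):
                # the same result via a pure Python loop, usually much slower
                total = 0
                while n:
                    total += n & 1
                    n >>= 1
                return total

            assert bit_count_bin(0b1011011) == bit_count_loop(0b1011011) == 5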
