Wednesday, November 08, 2006

Job postings...

I'm trying to hire some White Box testers to grow my Platform QA Team here at ZING. The job descriptions on our website (and that we've posted to various job boards) don't really do much to interest me in the job, so I'm worried that they're not appealing to the folks I'm trying to recruit, either.

I spent some time trying to come up with something short and sweet that at least answers the basic set of questions somebody might have about us, and about the jobs I have available. This was in the context of emailing a possible candidate that one of my co-workers referred to me.

Here's what I came up with in about 5 minutes between lunch and a meeting -

What we do: The Platform QA team is responsible for:
  • Writing automated tests
  • Developing testing frameworks
  • Executing automated and semi-automated tests, and reporting the results
  • Participating in design and code reviews with the development team
  • Creating utilities to improve the Development and QA processes
  • Evaluating and implementing tools (Static Analysis, Code Coverage, etc.) to enhance the testing process
What we’re looking for:
  • Experience in software testing and software development, either as a white-box tester or software developer
  • Familiarity with one or more of: C++, C#, or Java
  • Someone who likes to debug complicated problems
  • Some experience with API testing is useful, but not required
About ZING:
  • We’re a Consumer Electronics technology company – we license our hardware and software designs to companies that sell them under their own brand
  • We’re a pre-IPO startup
  • Located in Mountain View, California
  • The first product based on our technology just recently went on the market – the Sirius Stiletto 100 portable satellite radio
  • Website: www.zing.net (a little short on details, but it’s getting better)
Why you should come work here:
  • You’ll get to work with a wide variety of cutting-edge technologies
  • We’ve got a great working relationship between QA and Development
  • You’ll actually get to write code, report bugs, and see them get fixed quickly


What do you think? Is that a reasonable job description, and what sorts of things strike you as missing? Should I include more information about what the actual job duties are? Or more information about the company?

Any suggestions gratefully accepted. And, should you happen to know someone who's an ace White Box tester looking for a job, send them my way, OK?

Wednesday, September 27, 2006

A couple of quick links...

First, the first ZING product is out. Sirius Satellite Radio officially announced the Stiletto 100 yesterday. This is an important new product for Sirius, and the first use of ZING's technology "in the wild". More details coming soon, but you can watch the presentation from the DEMOfall 2006 conference here.

Second, at the Intel Developer Forum, Intel showed off a prototype of an 80-core processor, which they expect to have commercially available in 5 years or less. It's an amusing bit of synchronicity that they announced this a day after my blog post discussing the inevitable adoption of massively-parallel processor designs for the desktop market.

Tuesday, September 26, 2006

Another thread on . . . threads

Thanks for reading...

First, I want to thank everybody who read Part I, especially those of you who made comments on it. I'm going to address a couple of those comments and questions first, then proceed to my philosophy of How not to shoot yourself in the foot when writing multi-threaded code in C-like languages.

In a completely non-technical aside, one of my previous articles somehow got listed on both digg and reddit, and now random people on the Internet are making cogent, well-reasoned responses to it, and to my previous posts. I feel like a "real blogger" now. Thanks, and I'll try not to let it go to my head. It's a bit ironic, in that the original purpose of this blog was to help me get over my fear of writing, and now that I know that I have an audience, it's even harder...

Okay, back to threads...

Graham Lee pointed out that Mach threads can in fact be configured to conform to something like the no-state shared model. All you have to do is create a new task, use vm_inherit() to disallow any sharing of memory regions with the old task, and Bob's your Uncle. That's a good point, and something that I might have glossed over. In many cases, you can get a separation of state between threads by doing a little additional work outside the pthreads-style interface.
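To make that concrete, here's roughly what that looks like on OS X. Consider this an untested sketch of the idea rather than a recipe - check the Mach documentation before you trust any of the details:

    // Untested sketch: keep one buffer from being inherited by a child
    // process. vm_inherit() sets the inheritance attribute on a region;
    // VM_INHERIT_NONE means "don't map this region into the child at all",
    // while VM_INHERIT_SHARE would make it genuinely shared.
    #include <mach/mach.h>
    #include <stdio.h>
    #include <unistd.h>

    int main() {
        vm_address_t buf = 0;
        vm_size_t size = vm_page_size;
        vm_allocate(mach_task_self(), &buf, size, VM_FLAGS_ANYWHERE);

        // Mark the region as not-inherited before creating the child.
        vm_inherit(mach_task_self(), buf, size, VM_INHERIT_NONE);

        if (fork() == 0) {
            // In the child, that address range simply isn't mapped.
            printf("child: parent's buffer is not mapped here\n");
            _exit(0);
        }
        return 0;
    }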

Reimer Mellin mentioned that the CSP model had been around for quite some time before Occam was invented. That's true - the initial paper describing CSP was apparently published in 1978, whereas Occam didn't hit the scene until 1983 or so, when the Transputer first started to become available. Apparently, Tony Hoare (the inventor of CSP) wrote a book on a more formalized version of CSP in 1985. It's available online, but if you're not a mathematician, it might be rough going. Personally, I find that the more funky symbols used in a piece of writing, the harder it is to read. Hoare's book uses lots of symbols - there's even a six page long "glossary of symbols".

Some Dos and Don'ts

These are in no particular order, and simply represent some different ways of slicing the multi-programming pie. One or more of them may apply to your next project...

Do consider whether you need to use threads at all

Sometimes what you actually want is a separate process, in the heavy-weight, OS-level process sense. If you think about it, one program doing two things at once isn't fundamentally all that different from two programs doing one thing each. Yeah, I know, all that overhead, spawning a whole new process, setting up IPC with named pipes or whatever... But have you ever actually measured the overhead of creating a process, or transferring a few megabytes of data between two processes on the same machine?

I've done a couple of simple, two-process (GUI and background server) applications on both Mac OS and Windows, and you might well be surprised by how well this design works in practice. Of course, if your 'background' process just ends up spinning its wheels inside some hideously-complex calculation, or you actually need to send a lot of data between the GUI and the calculation engine, then you haven't actually solved your problem, and you'll have to do something more sophisticated.
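If you're curious just how little code the two-process version takes, here's a bare-bones sketch. It's POSIX-only, and I've left out the error handling and the closing of unused pipe ends, so treat it as a starting point, not a finished design:

    // Parent plays "GUI", child plays "calculation engine", and two
    // plain pipes are the entire IPC layer.
    #include <cstdio>
    #include <unistd.h>
    #include <sys/wait.h>

    int main() {
        int to_child[2], to_parent[2];
        pipe(to_child);
        pipe(to_parent);

        if (fork() == 0) {                   // child: the engine
            int request = 0;
            read(to_child[0], &request, sizeof request);
            int result = request * request;  // stand-in for the real work
            write(to_parent[1], &result, sizeof result);
            _exit(0);
        }

        int request = 42, result = 0;        // parent: the "GUI"
        write(to_child[1], &request, sizeof request);
        read(to_parent[0], &result, sizeof result);
        printf("engine says: %d\n", result);
        wait(NULL);
        return 0;
    }

Note that there's no locking anywhere in there - the pipe is the synchronization.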

Don't use threads to avoid blocking on I/O

Unless you're programming on some seriously old, backwater OS, you should have other options for your file and network I/O that don't involve waiting for the I/O to complete. This is very dependent on what platform you're using. Try hitting your favorite search engine with the terms "async I/O" or "nonblocking I/O" to read about the various options available. The complexity of these async I/O approaches can seem a little daunting, until you realize that in the simple-seeming "create a thread for background I/O" model, the complexity is all still there, it's just not as easy to see.
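As one example of the genre, here's the old-school select() approach, which works on just about any UNIX-flavored system. The sketch polls standard input so it stays self-contained, but the same shape works for sockets and files:

    // Ask "is there anything to read?" with a bounded wait, instead of
    // parking a whole thread on a blocking read().
    #include <cstdio>
    #include <sys/select.h>
    #include <unistd.h>

    int main() {
        for (int tick = 0; tick < 10; ++tick) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(STDIN_FILENO, &readable);
            struct timeval timeout = {1, 0};   // wait at most one second

            if (select(STDIN_FILENO + 1, &readable, NULL, NULL, &timeout) > 0) {
                char buf[256];
                ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
                printf("read %d bytes without tying up a thread\n", (int)n);
            } else {
                printf("nothing yet - free to do other work\n");
            }
        }
        return 0;
    }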

Do know what each thread in your program is for

You need to have an identified set of responsibilities for each thread in your system. Without a clear idea of what each thread is responsible for, you'll never be able to figure out what your data-sharing strategy needs to be. If you use UML or CRC cards to model your system, or even if your "design" is a bunch of clouds and arrows on a whiteboard, you need to be able to determine which parts of the system can run concurrently, and what information they need to share. Otherwise, you're doomed.

Don't reinvent the wheel

It's harder than you might think to write code that's truly thread-safe. You'd be well advised to see what's been done already for your language & environment of choice. If someone has already gone to the effort of creating thread-safe data structures for you to use, then use them, don't create your own.

For example, if you're already running your "main" GUI thread in an event-processing loop, consider using that message queue as your communication channel between threads. The .NET 2.0 framework provides a class called BackgroundWorker specifically to address the "trivial background calculation in a GUI app" problem. The design of BackgroundWorker is worth reading about (Google it), even if you're on another platform. It's a nice, simple way to manage a second thread for background processing in a GUI application.
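If you're not on .NET, the underlying idea is easy enough to sketch in pthreads terms. To be clear, this isn't BackgroundWorker's API - it's just the shape of the thing: a worker thread posts a completion message, and the "main" thread picks it up whenever it gets around to it:

    #include <pthread.h>
    #include <cstdio>
    #include <queue>

    // A minimal inter-thread message queue: a mutex, a condition
    // variable, and a container. Messages are just ints here.
    struct MessageQueue {
        pthread_mutex_t lock;
        pthread_cond_t nonempty;
        std::queue<int> messages;
        MessageQueue() {
            pthread_mutex_init(&lock, NULL);
            pthread_cond_init(&nonempty, NULL);
        }
    };

    void post(MessageQueue* q, int msg) {
        pthread_mutex_lock(&q->lock);
        q->messages.push(msg);
        pthread_cond_signal(&q->nonempty);
        pthread_mutex_unlock(&q->lock);
    }

    int wait_for_message(MessageQueue* q) {
        pthread_mutex_lock(&q->lock);
        while (q->messages.empty())
            pthread_cond_wait(&q->nonempty, &q->lock);
        int msg = q->messages.front();
        q->messages.pop();
        pthread_mutex_unlock(&q->lock);
        return msg;
    }

    void* worker(void* arg) {
        post((MessageQueue*)arg, 42);  // "calculation finished"
        return NULL;
    }

    int main() {
        MessageQueue q;
        pthread_t t;
        pthread_create(&t, NULL, worker, &q);
        printf("worker reported: %d\n", wait_for_message(&q));
        pthread_join(t, NULL);
        return 0;
    }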

Do consider developing a strategy for detecting and/or avoiding deadlocks

Let's get this out of the way - in any non-trivial shared-memory system with conventional locking semantics, you'll never be able to predict ahead of time whether or not a deadlock will occur. I'm told there's a proof that in the general case, predicting deadlocks is equivalent to the infamous Halting Problem, which you've perhaps heard of before. If you have a reference to a research paper on this, let me know - I'd like to beat some people over the head with it. Despite all that, it's relatively easy to detect when the system is deadlocked.
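On the avoidance side, one cheap discipline goes a surprisingly long way: give every lock in the system a global rank, and always acquire locks in rank order. If every code path obeys the ordering, the classic A-waits-for-B-waits-for-A cycle can't form. Here's a sketch using the mutex's address as its rank:

    #include <pthread.h>

    // Acquire two locks in a globally-consistent order, regardless of
    // the order the caller happened to name them in.
    void lock_pair(pthread_mutex_t* a, pthread_mutex_t* b) {
        if (a > b) { pthread_mutex_t* tmp = a; a = b; b = tmp; }
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    }

    void unlock_pair(pthread_mutex_t* a, pthread_mutex_t* b) {
        // Unlock order doesn't affect deadlock; both just get released.
        pthread_mutex_unlock(a);
        pthread_mutex_unlock(b);
    }

    int main() {
        pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
        pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;
        lock_pair(&m1, &m2);
        unlock_pair(&m1, &m2);
        lock_pair(&m2, &m1);   // still acquires in address order - no cycle
        unlock_pair(&m2, &m1);
        return 0;
    }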

Don't spawn threads in response to external events

This is really just a special case of know what each thread in your program is for. It's hard enough to coordinate all the concurrency in your program with a static set of threads. Adding in the additional complication of unknown numbers of active threads at any given time is sheer insanity.

Also, given that there's some amount of overhead involved for each thread that you create or have active, scaling up the number of threads as load increases will often have the perverse effect of decreasing throughput in the attempt to improve it.

Do consider a message-passing design

I mentioned this in Part I, but you might want to consider using the message passing model, even if you're working in a shared-memory world. The basic rule here is to avoid modifying any global state from within more than one thread. When you send a message from one thread to another, you pass in all the data it'll need to access in order to complete its job. Then, you don't touch those data structures from anywhere else until the other thread is done working with them.

The only real hurdle in implementing this strategy is in keeping up the separation between threads, despite not having any language-level support for the desired partitioning. You need to be really careful to not accidentally start sharing data between threads without intending to (and without having a plan).
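Here's the discipline in miniature. The pointer handoff is the message, and ownership travels with it:

    #include <pthread.h>
    #include <cstdio>
    #include <cstdlib>

    struct WorkItem {
        int input;
        int result;
    };

    void* worker(void* arg) {
        WorkItem* item = (WorkItem*)arg;  // the worker owns 'item' now
        item->result = item->input * 2;
        printf("worker computed %d\n", item->result);
        free(item);                       // the owner cleans up
        return NULL;
    }

    int main() {
        WorkItem* item = (WorkItem*)malloc(sizeof(WorkItem));
        item->input = 21;
        pthread_t t;
        pthread_create(&t, NULL, worker, item);
        // From here on, 'item' belongs to the worker: no reads, no
        // writes, no free() from this thread. That rule *is* the
        // strategy - nothing in the language enforces it for you.
        pthread_join(t, NULL);
        return 0;
    }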

Don't hold a lock or semaphore any longer than actually necessary

In particular, never hold a lock across a function call. Now, this might seem a bit extreme, but remember, we're trying to manage complexity here. If you can see all the places where a lock can be acquired and released all at once, it's easier to verify that it's actually acquired and released in the right places. Holding locks for the shortest time practical also shortens the window in which you can experience a deadlock, if you've made some other mistake in your locking strategy.
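In code, the pattern is: lock, copy, unlock, then do the slow (and call-happy) work on the private copy:

    #include <pthread.h>
    #include <string.h>
    #include <stdio.h>

    pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
    char g_status[64] = "idle";          // shared state

    void report_status() {
        char snapshot[64];

        pthread_mutex_lock(&g_lock);     // lock...
        strcpy(snapshot, g_status);      // ...copy the shared state...
        pthread_mutex_unlock(&g_lock);   // ...unlock, all in one screenful

        // The slow part - formatting, logging, function calls - happens
        // on the private copy, with no locks held.
        printf("status: %s\n", snapshot);
    }

    int main() {
        report_status();
        return 0;
    }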

Do stay on the well-trodden path

The producer-consumer model, thread pools and work queues all exist for a reason. There's a solid theoretical underpinning for these designs, and you can find robust, well-tested implementations for most any environment you might be working in. Find out what's been done, and understand how it was done, before you go off half-cocked, inventing your own inter-thread communication and locking mechanisms. If you don't understand the very low-level details of how (and when) to use the "volatile" qualifier on a variable, or you haven't heard of a memory barrier, then you shouldn't be trying to implement your own unique thread-safe data structures.
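To show you what I mean, here's the kind of subtly-broken code this advice is meant to keep you from writing. It looks reasonable, and it will even work most of the time:

    #include <pthread.h>

    int data = 0;                 // the payload
    volatile int ready = 0;       // the hand-rolled "signal"

    void* producer(void*) {
        data = 42;                // store #1
        ready = 1;                // store #2 - but nothing stops the
        return NULL;              // compiler or CPU from reordering these!
    }

    void* consumer(void*) {
        while (!ready) {}         // volatile keeps this load in the loop...
        // ...but 'data' is still not guaranteed to read 42 here without
        // a memory barrier. Broken by design - don't use this pattern.
        return NULL;
    }

    int main() {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The volatile qualifier only constrains the compiler's treatment of the volatile accesses themselves; it says nothing about the ordering of the other stores around them, and nothing at all about what the processor's cache hardware does. That's the gap the well-tested libraries have already filled for you.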

Do use multiple threads to get better performance on multi-processor systems

If your program is running on a multi-processor or multi-core computer (and chances are that it will be, eventually) you'll want to use multiple threads to get the best possible performance.

Moore's Law, and what the future holds

Welcome to the multi-core era

I can't find the excellent blog post I was reading on this subject just yesterday, but here's an article by Herb Sutter that hits the high points. The bottom line is that you're not going to see much improvement in the performance of single-threaded code on microprocessors in the near future. In order to make any kind of performance headway with the next couple generations of processors, your code needs to be able to distribute load over multiple processes or threads.

The future is now

Desktop PCs with 4 processors are already readily available. Sun's UltraSPARC T1 has 8 cores on one chip, and can execute 32 threads "simultaneously", under ideal conditions. Even Intel's Itanium is going multi-core, a dramatic departure from the instruction-level parallelism that was supposed to be the hallmark of the EPIC architecture (but that's a story for another time).

Some time in the very near future, the programs that you're writing will be executing on systems with 8, 16, or more processors. If you want to get anything near the peak level of performance the hardware is capable of, you're going to need to be comfortable with multi-processor programming.

Everything old is NUMA again

It's perhaps a trite observation that yesterday's supercomputer is tomorrow's desktop processor. Actually, I think it's more like there is a tide in processor design, that hits the supercomputer world, then hits the mainstream a couple decades or so later, when the high-performance folks have moved on to something else.

In the 1980's, supercomputers were all about high clock-speed vector (SIMD) processing, which is where the current generation of desktop chips has stalled out. Clock speeds aren't going to massively increase, and the vector capabilities of the Pentium and PowerPC processors, while impressive, are still limited in the kinds of calculations they can accelerate. And the processor designs are so complex that it's hard to imagine there are many more tricks available to get more performance-per-clock out of the existing designs.

When the supercomputer folks hit their own megahertz and design complexity wall, they went through their own multi-core CPU era, then moved in rapid succession to massively parallel MIMD systems, then to the super-cluster computers we see these days. It seems reasonable to expect an explosion of processors in desktop systems too, and for much the same reason - the standard SMP shared memory model doesn't scale well.

In particular, cache coherency becomes a major performance issue in shared-memory multi-processor systems as the number of processors increases. The conventional wisdom says that a design where all memory is shared can scale to 4-8 processors. This is obviously dependent on memory performance, cache architecture, and a number of other factors. Perhaps worryingly, this means we're not only at the start of the multi-core era in the desktop world, we're also about one processor generation away from the end of it. Gee, that went by pretty fast, didn't it?

So, what's next?

Going by the "20 years behind supercomputers" model, the Next Big Thing in desktop processors would be massively-parallel architectures, with locally-attached memory. You'd expect to see something like the Connection Machine, or the Transputer-based systems of the 90's. Given the advances in process technology, you might even be able to fit hundreds of simple processors on a single chip (actually, some folks have already done that for the DSP market).

However, the desktop computer market has shown a remarkable reluctance to embrace new instruction sets. So a design using hundreds or thousands of very simple processors with fast locally-attached memory isn't likely to succeed the currently ascendant IA32/IA64 Intel architecture. So where do we go from here? I think Intel is going to keep trying to wring as much performance out of their now-standard two chip, multiple cores per chip design. They can certainly do some more clever work with the processor caches, and with a little help from the OS, they can try to minimize thread migration.

Ultimately that approach is going to run out of steam though, and when that happens, there's going to be a major shift in the way these systems are designed and programmed. Through the multi-core era, and even into the beginning of the massively parallel era which will inevitably follow, you ought to be able to get away with following the pthreads model. You might need to think about processor affinity and cache sharing in ways you don't have to now, but it'll at least be familiar territory.

When really massively-parallel systems start to become more common, the programming model will have to change. The simplicity of implementation of the shared-memory model will inevitably give way to more explicitly compartmentalized models. What languages you'll likely use to program these beasts is an interesting question - most likely, it'll be a functional language, something like Haskell, or Erlang. I've been lax in getting up to speed on functional programming, and I'm going to make an effort to do better. I recommend that you do the same.

Saturday, September 02, 2006

Hell is a multi-threaded C++ program.

What are threads?

Every modern operating system has support for threads, and most programming environments provide some level of support for threading. What threads give you is the ability for your program to do more than one thing at once. The problem with threads is the way that they can dramatically increase the complexity of your program.

First, a little background, so we're all on the same page. In Computer Science, as in the physical sciences, using a simplified model makes it easier to discuss complex phenomena without getting bogged down in insignificant details. The trick of course, is in knowing where your simplifications deviate from reality in a way that affects the validity of the results. While spherical cows on an infinite frictionless plane do make the calculations easier, sometimes the details matter.

When Real Computer Scientists (tm) are discussing problems in concurrent programming (like the Dining Philosophers), they'll sometimes refer to a Process, which is a kind of abstract ideal of a computer program. Multiple Processes can be running at the same time in the same system, and can also interact and communicate in various ways.

The threads provided by your favorite operating system and programming language are basically similar to this theoretical concept of a Process, with a few unfortunate details of implementation.

The New Jersey approach

I couldn't find a definitive reference to the history of the development of threads as we know them today, but the model most people are familiar with arose out of POSIX, which was largely an attempt to formalize existing practice in UNIX implementations.

It turns out that POSIX Threads, Mach Threads, Windows Threads, Java Threads, and C# Threads all work very much the same, since they're all implemented in more or less the same way. The object-oriented environments wrap a thin veneer of objects around a group of extremely low-level functions, but you've got your basic operations of create(), join(), and exit(), as well as operations on condition variables and mutexes. For the rest of this rant, I'll refer to these as "Pthreads", for convenience.

Pthreads are an example of the Worse is better philosophy of software design, as applied to the problems of concurrent programming. The POSIX threading model is just about the simplest possible implementation of multi-threading you could have. When you want to create a new thread, you call pthread_create(), and a new thread is created, starting execution with some function you provide. Under the hood, creating the thread amounts to allocating some memory for its stack, loading up a couple of machine registers, and jumping to an address.
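The whole creation story fits in a dozen lines. A minimal example:

    #include <pthread.h>
    #include <stdio.h>

    // The function the new thread starts executing.
    void* say_hello(void* arg) {
        printf("hello from thread %s\n", (const char*)arg);
        return NULL;
    }

    int main() {
        pthread_t t;
        pthread_create(&t, NULL, say_hello, (void*)"two");
        printf("hello from thread one\n");
        pthread_join(t, NULL);   // wait for the new thread to finish
        return 0;
    }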

Shared state - two models

In the Pthreads model, all of your threads share the same address space. This makes sharing data between threads very simple and efficient. On the other hand, the fact that all of the state in the program is accessible and changeable from every thread can make it very difficult to ensure that access to all this shared state is managed correctly. Race conditions, where one thread attempts to update a data structure at the same time that another thread is trying to access or change that same structure, are common.
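The canonical race fits in one small program. Run this on a multi-processor machine and the total will come up visibly short, because "++" is really a read, an add, and a write, and increments from the two threads interleave and get lost:

    #include <pthread.h>
    #include <stdio.h>

    long g_counter = 0;                  // shared, unprotected

    void* hammer(void*) {
        for (int i = 0; i < 1000000; ++i)
            ++g_counter;                 // not atomic!
        return NULL;
    }

    int main() {
        pthread_t a, b;
        pthread_create(&a, NULL, hammer, NULL);
        pthread_create(&b, NULL, hammer, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("expected 2000000, got %ld\n", g_counter);
        return 0;
    }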

The problem with the "all state is shared" model is that it doesn't match up very well with what you're generally trying to accomplish when you spawn a thread. You'll normally create a new thread because you want that thread to do something different than what the main thread is already doing. This implies that not all of the state in the parent thread needs to be available to be modified in the second thread. But because of the way threads are created in this model, it's easier (for the OS or language implementor) to share everything rather than a well-defined subset, so that's what you get.

The other major model for multi-threading is known as message-passing multiprocessing. Unless you're familiar with the Occam or Erlang programming languages, you might not have encountered this model for concurrency before.

There are a number of variations on the message-passing model, but they all have one thing in common: In the message-passing model, your threads don't share any state by default. If you want some information to go from one thread to another, you need to do it by having one thread send a message to the other thread, typically by calling a function provided by the system for just this purpose. Two popular variants of the message-passing model are "Communicating Sequential Processes" and the "Actor model".

You can get a nice introduction to the message-passing model by reading the first couple chapters of the Occam Reference Manual, which is apparently available online these days (I got mine by digging around in a pile of unwanted technical books at a former employer). Occam is of course the native language of the Transputer, a very inventive but commercially unsuccessful parallel processor architecture from the UK which made a big splash in the mid-80's before vanishing without a trace.

Why would you want to learn about this alternative model, when Pthreads have clearly won the battle for the hearts and minds of the programming public? Well, besides the sheer joy of learning something new, you might develop a different way of looking at problems that'll help you make better use of the tools that you do use regularly. In addition, as I'll explain in Part II of this rant, there's good reason to believe that message-passing concurrency is going to be coming back in a big way in the near future.

Enough rope to hang yourself with

As I mentioned earlier, the Pthreads model implies that all of your program's address space is shared between all threads. Most (all?) implementations allow you to allocate some amount of thread-local storage, but in general, the vast majority of your program's state is shared by every thread. This implies that every thread has the ability to modify the value of any variable, and call any arbitrary function, at any time. This is a really powerful tool, but like all powerful tools, it can be dangerous if misused.
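For reference, here's what that thread-local storage looks like through the Pthreads interface. Each thread sees its own value behind the same key, while everything else in the address space stays shared:

    #include <pthread.h>
    #include <stdio.h>

    pthread_key_t g_name_key;

    void* run(void* arg) {
        pthread_setspecific(g_name_key, arg);  // private to this thread
        const char* name = (const char*)pthread_getspecific(g_name_key);
        printf("my private name is %s\n", name);
        return NULL;
    }

    int main() {
        pthread_key_create(&g_name_key, NULL); // NULL: no destructor
        pthread_t a, b;
        pthread_create(&a, NULL, run, (void*)"alpha");
        pthread_create(&b, NULL, run, (void*)"beta");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }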

It's extremely difficult to predict what the behavior of even fairly simple code will be, when multiple threads can run it simultaneously. For more complex code, the problem rapidly becomes intractable. In a low-level language like C, you need to know the intimate details of how the compiler will optimize your code, which operations are guaranteed to be completed atomically, what the register allocation policy is, etc, etc. In a JIT-compiled language like Java or C#, it's impossible to even know what machine code will be used at runtime, so analyzing runtime behavior in detail just isn't possible.

An unlocked gun cabinet

I think one of the major problems with Pthreads is that it's too easy to make something that almost works. This then leads to an unwarranted belief that multi-threaded programming is simple. For example, say you've got a simple interactive GUI application, and you think that the application takes too long to calculate something after the user presses a button. The "obvious" solution to this problem is to have your button-press handler spawn off a new thread to perform the long-running operation.

So you try this, and it works perfectly on the first try - the child thread launches, and then calculates away while the main thread goes back to handling the UI. You still need to notify the main thread when the calculation is complete, but there are any number of easy ways to do this, and you probably won't have much trouble figuring that out. Gee, that wasn't so difficult, I wonder why people say that multi-threaded programming is difficult?

It's this sort of ad-hoc approach to creating threads that gets people into trouble. They create a new thread to solve one problem, and then another, and then they suddenly realize that thread A and thread M are interacting in a bad way. So they protect some critical data structures with mutexes, and before they know it, they're trying to debug a deadlock situation where they don't even understand how those two pieces of code could interact.

You need to really think about why you're creating a thread, before you spawn it. I'm not going to go so far as to say that creating threads while your program is running (rather than at startup) is de-facto proof that you're doing something wrong, but it's definitely a strong indication that you're not thinking about what your threads are for with any great rigor.

How to not shoot yourself in the foot

... is going to be the subject of Part II. Sorry for the cliff-hanger ending, but I wanted to get at least a little of this published, and potentially get some comments on it, before finishing the rest.

Sunday, August 20, 2006

A Cockatoo joke...

I'm still working on my "why threads are bad" rant, but in the meantime, here's a joke with a Parrot in it...

One day, a Cockatoo walks into an antique shop. The owner walks over and says to the Cockatoo:

"Hello there, can I help you with anything?"

The Cockatoo points his wing at a chair, and says to the shopkeeper:

"What can you tell me about this chair?"

The shopkeeper smiles at the bird, and launches into his best sales speech:

"I must say, you've got an eye for exceptional furniture. This chair is executed in the Lous XIV style, we estimate the manufacturing date to be around 1880, and the fantastic patina of the wood shows that it's been exceptionally well cared-for..."

The Cockatoo interrupts the sales pitch, and says:

"Yes, yes, that is all well and good, but what I really wanted to know is -

how does it taste?"

Thursday, August 10, 2006

Now they have two problems...

Today, I'm going to write a bit about programming. But first, a short detour into the wonderful world of USENET, and the adaptability of certain quotes to any situation...

There's a fairly well known quote (among programmers, at least), that goes like this:

Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.

-- Jamie Zawinski

Jamie posted this to USENET back in 1997, and people have been quoting it ever since. I did some more searching, and I found an earlier variation, with citations going back as far as 1988. Yes, Google Groups does have (some) USENET postings going back nearly 20 years. The older version is this:

Whenever faced with a problem, some people say `Lets use AWK.'
Now, they have two problems.

-- D. Tilbrook

Cool, so this is apparently one of those all-purpose jokes, much like any ethnic joke. Actually, that could be pretty interesting:

Whenever faced with a problem, some people say "Let's have the American do it." Now, they have two problems.

-- M. Bessey

Wow, that's comedy gold! Okay, maybe not. But it leads nicely into the topic I actually wanted to talk about:

presenting...

Mark's list of the Top Four programming technologies that fit into the "now they have two problems" template:
(in no particular order)

1. Threads
2. XML
3. Singleton objects
4. Regular Expressions

The common theme here is that these are all useful techniques, but are often misused by well-meaning programmers. I've seen more grief caused by misapplication of these technologies than anything else in my career. I'm going to write up a couple of quick rants on each of these subjects. This will be good for two reasons:

1. It gives me something to write about for the next couple of days.
2. I can vent a little about some particularly irritating instances of these things that I've seen.

Monday, August 07, 2006

Like an out-take from The Birds

Continuing the "Wildlife Adventures in Suburbia" theme from the last post...

As I was driving up to my house Sunday evening, I noticed about a half-dozen or so little Sparrow-looking birds pecking at the sidewalk at the end of my driveway. I thought "that's a little odd, I wonder what they're eating?". As I was gathering up my stuff out of the car, I saw that another ten or so were perched on the fence around my front yard. Curiouser and curiouser...

I went around to the back of the Jeep, got my groceries, and turned towards the house, only to see that there were dozens and dozens of these birds perched on my fence and the gutters of the house, and just hopping around on the roof. At this point, I'm starting to get a bit freaked out. I've seen small groups of birds around the house before, but nothing like this.

Just as the "I feel like I'm in a Hitchcock movie" vibe really started to take hold, one of my neighbors drove by in a rather loud Ford Bronco. Apparently the birds didn't like either the sound or the look of the thing, so they all took off at once - from the street, off my fence and roof, and from the neighbor's yard, where I hadn't noticed that they were also congregating.

In total, probably a hundred of these tiny little birds took off from the ground, coalesced into a swirling cloud, and headed out to Santa Clara, presumably on a mission to freak someone else out.

Monday, July 31, 2006

Not a story about Black Widow Spiders

On Sunday, Yvette and I were rearranging stuff in the garage in preparation for the roofing guys to come in and tear the roof off way too early in the morning on Monday. During the course of moving all the boxes around and covering stuff up, we managed to disturb a spider. I looked over at Yvette, and I saw a large, globular black spider crawling up her neck. Now, it so happens that Black Widow spiders aren't all that uncommon around here, and from a few feet away, this thing really, really looked like a Black Widow.

I attempted to calmly say "Hold still" so I could brush it off her without her getting bitten, but apparently my eyes gave me away, and Yvette totally freaked out. So she's shaking all of her clothing out and moving around, while I'm trying to get her to stand still so I can find the stupid spider and get it off her before she gets bitten. Mentioning that I thought the spider was a Black Widow was decidedly not helpful. It probably would have been comical if we weren't doing such a good job of completely panicking each other. Yvette managed to get the spider off of herself, and I eventually recovered it. It turned out to most likely be Steatoda grossa, a much less dangerous relative of the Black Widow.

After the incident, Yvette and I talked about what we might have done differently. We didn't really come up with anything, other than possibly running "Spider Drills". I'd just walk up to her and calmly say "Don't move" or something similar, and we'd practice not freaking each other out. I really hated the feeling of the whole thing spiraling out of control like that, with everything I said and did just making the situation worse.

Friday, July 28, 2006

Damned zombie python processes...


If you're running Mac OS X, and you've installed Xcode, try this:

open up a Terminal window, and type
ps -x |grep -i python

Do you see dozens and dozens of processes named (python)? Then you'll probably be interested in the discussion here.

This turns out to be due to a bug in Xcode 2.3's distributed builds functionality. There's this sctwistd process that gets launched at startup, and every time you log in and out (even switching to another user counts), it spawns off a couple of python processes that get orphaned from their parent. These zombies accumulate over time, eventually leaving your Mac unable to launch any more programs.

Long story short, if you don't use dedicated network builders and you don't want to fill up your process table with Zombie Python Processes from Hell, perform these commands in a Terminal window, then reboot:

cd /System/Library/LaunchDaemons
sudo launchctl unload -w com.apple.dnbobserver.plist

Now if I can just figure out why Nikon View Monitor is being launched, even though I don't even use Nikon View anymore, I'll be a happy camper. I just don't like running software that I don't need.

Thursday, June 29, 2006

One Small Victory...

Last night, walking back to work from dinner, I happened to pass by the Scientology building in downtown Mountain View. As I passed by, I studiously ignored a Scientology drone as he tried to hand me a pamphlet explaining some of the finer points of L. Ron's philosophy.

As he asked me "Would you like a pamphlet?", I thought to myself "No, but I've got something for you". At which point I loosed the Silent-But-Deadly fart I'd been holding in for two blocks.

Maybe he can use those Dianetics mental control techniques to resist the urge to pass out...

One small victory. Have you farted on a Scientologist today?

Wednesday, January 25, 2006

Fun with eBay lenses

I'll need to add a couple of pictures to this post to illustrate, but I just received a lens I purchased on eBay. It's a well-used, manual-focus Vivitar 70-150 "Macro" zoom. I put Macro in quotes because it doesn't appear that this lens gets anywhere near the range of true macro 1:1 magnification. The closest focus distance is a couple of feet.

It's kind of an interesting challenge using a manual lens on the d50. Obviously, the lens is manual focus, but apparently because the d50 also lacks some mechanical linkage to read aperture information, the camera can't set the aperture either, so only the fully Manual mode really works.

So, I end up setting the aperture on the lens, the shutter speed with the camera, and figuring the exposure by taking a test shot and adjusting based on what the histogram shows. I got a couple of decent pictures of Jeremy the Attack Cockatoo before he got bored and tried to eat me.

http://web.mac.com/mbessey/iWeb/Site/Vivitar%2070-150.html


It appears that this lens, at closest focus, gives about a 19cm wide field of view, as opposed to the 17cm I get with the 18-55 lens. Not exactly a "macro" zoom. I wonder if the fact that the lens rattles when I shake it has anything to do with that? Maybe there's supposed to be some additional extension at the end of the range, or something? I may just take it apart and see what I can do.

Thursday, January 12, 2006

On the subject of camera lenses

From an email I sent to a friend. The question at issue revolves around me getting my first digital camera with interchangeable lenses (a Nikon d50). The problem with having a choice of lenses is...needing to make a choice.

I've been thinking about the "how do I know what lenses I need?" question lately...

I was reading online that the conventional wisdom holds that the vast majority of pictures taken with a zoom lens are taken at either the minimum or the maximum focal length. So, I decided to check my own pictures and see if that's true. I found this program online called jhead, which will dump the exposure info out of digital camera JPEG files. I ran my entire iPhoto library through it, and analyzed the results with a Perl script.

I found out some interesting things about my picture taking habits.

Looking at the data for the E-10, which is the camera that I've taken the most pictures with, the distribution of zoom focal lengths looks like this. To convert these from the E-10's smaller sensor size to the equivalent 35mm field of view, you'd need to multiply by about 4.

focal length   # of pictures
9.0mm 1352
10.0mm 70
11.0mm 57
12.0mm 51
13.0mm 39
14.0mm 43
15.0mm 50
16.0mm 34
17.0mm 103
18.0mm 42
19.0mm 41
20.0mm 49
21.0mm 15
22.0mm 27
23.0mm 11
24.0mm 25
25.0mm 8
26.0mm 35
27.0mm 16
28.0mm 16
29.0mm 15
30.0mm 16
31.0mm 22
32.0mm 34
34.0mm 21
36.0mm 488

The conventional wisdom is confirmed, I guess. It's kind of fascinating to me that it's as lopsided as it is in favor of wide-angle though. I mean, I suspected that would be the case, but I didn't expect that it'd be so extreme.

It's also interesting that there's that peak at 17mm (68mm equiv). Unfortunately, iPhoto won't let me search by focal length, but a quick visual scan through the library shows that most of these are relatively close-up shots of people's faces.

So based on the data, what *I* really need is one wide-angle zoom lens, one portrait-taking lens, which can probably be a non-zoom lens, and one telephoto zoom. Since the E-10 had neither a truly wide-angle nor a truly high-magnification telephoto, it's not entirely clear what actual range I need on either end of the scale.

On the wide end I think the decision is easier. I decided to just get the widest wide-angle zoom I could find, which is how I ended up with my 10-20mm (15-30 35mm eq) zoom. So far, that's working out pretty well for me. And now I can shoot a 180 degree panorama in two shots, which is pretty cool...

For portraits, a 50mm lens is pretty close to the 45mm lens my data says I'd want. I'd just have to step a little farther away. 50mm being the "standard" length for 35mm lenses, the basic Nikon 50mm f/1.8 lens is relatively inexpensive at $100 or so. Now, I do have that range covered with my existing zoom lens, but the fixed lens gathers way more light at f/1.8 than the zoom does at its maximum f/5.6. Doing the calculation, (5.6/1.8)^2 is about 9.7, so call it nearly ten times as much light, which will make all the difference in whether I need to use a flash or if I can use available light.

On the telephoto end, I'm at a bit of a loss. I wasn't very happy with the limited telephoto on the E-10, so I'm pretty sure I'll need something considerably longer than the 18-55 lens that came with the d50, which doesn't reach even as far as the E-10 did. But how far do I need to reach? I don't think I want a lens that absolutely requires the use of a tripod for every shot, and it would be pure insanity to pay over $1,000 for an optically stabilized (VR in Nikon-ese) lens.

One interesting (but not surprising) thing is that almost all of my (semi-Macro) flower pictures are at the far end of the E-10's zoom, as well. They look pretty good at that magnification, so the equivalent focal length on the d50 (which would be about 90mm) would be a good thing to have.

I should just get the 50mm lens, I guess, and maybe get a cheap telephoto zoom and explore what range I need before plunking down the money on a "serious" telephoto lens.