For all information relating to the Mozilla Seabird project, see MozillaLabs.com/Seabird

For information about the designer, see his website Billy-May.com


A fundamental variable that has become obvious through this project is how we use not just our phones, but our desktops and laptops. With the line between devices blurred by phones' rapid enhancement and by in-betweeners like netbooks and UMPCs, one fairly concrete distinction still remains for each device: do you use it to create content, or only to consume it? Would you write a novel on a given device, or only read one? Could you mix and produce audio tracks on it, or only queue up your favorite playlist?

While usage behavior varies wildly from person to person, there is likely a distinct line that defines what we will do with our phones. What, then, are the variables that draw this distinction? The size of the QWERTY keys? The size of the screen? CPU power? Is this usage barrier even surmountable with today's technology? For a partial answer, let's look at where the barrier is already crumbling.

In the beginning, camera phones accomplished little more than data capture; at best they showed that some person was somewhere at some time. Creating genuinely valuable content was just not possible with only 1.3 megapixels. Fast forward to the Sony Ericsson Idou: its 12.1-megapixel camera might not suffice for a presidential portrait, but it's good enough for an upside-down friend, a keg and three 'supportive' frat brothers. Throw in a basic image-adjustment app like PhotoGene and you have everything a non-professional photographer would ever need out of a camera.

What other fields are ripe for the picking? How can our phones adapt and grow from mere consumption to actual creation? What tools that we now use for data capture (notepads, voice recording, etc.) could evolve into true data creation? Remember, think in physical terms and sound off below.


The role of beauty in our devices is not a greatly debated topic; we want it, simple as that. I want my phone to be drop-dead gorgeous and still fit inside my pocket. The question, then, is how we get there. The tyranny of the iPhone's irreproachable aesthetic has been among the hardest things for manufacturers to overcome. With content now reigning supreme, it feels futile to resist the efficiency of Apple's damnable rectangle. How does the Mozilla Phone break from that aesthetic? How does it visually stand out and make its own statement about your content, the web, and you as a user?

Of the formal problems facing the Mozilla Phone, a revealing perspective is how material choices map onto functional elements. Consider this breakdown: if we separate the device into the three discrete elements of structure (metal case), substrate (glass touchscreen) and content (LCD), the object as a whole becomes directly relatable to other categories. Consider a car's metal chassis, glass windows and interior cabin; a fine watch's metal band, sapphire crystal and face; or a building's steel superstructure, glass façade and interior living spaces. All employ, however loosely, the same quasi-hierarchical relationship between structure, substrate and content.

Of course, these are sweeping generalities, but they allow useful comparisons for thinking about the problem at hand. For example, if you tampered with the typical relationship between structure and substrate, as in the examples below, how would that translate to the phone?

Pompidou Centre, Rinspeed eXasis, Rinspeed Zazen, and Corum Golden Bridge

How else could that paradigm be extended and reworked? How do we get away from the monotony of polished aluminum and tinted glass? I'm focusing on material choice and composition here, but what other avenues could we explore in creating a unique aesthetic? I pointed to watches and cars as possible form inspiration, but what other categories might usefully inform gadget design? How beholden are we to the efficiency of rectangular content? I would love to break away from the standard overall shape, but I almost feel there's something inherently unrealistic about that. I'm only beginning to discover how hard it is to design a visually unique phone that remains functional, so don't expect this to be the last post on the topic.

Let's see some comments, people.



Our phones do not exist in sterile black studios, operated by disembodied hands; they are surrounded by the wonders of the world yet do surprisingly little with them. At any given moment, most of us are probably within 20 feet of a full-size QWERTY keyboard, but we sit there and toil away on a keyboard that's either minuscule or completely virtual. Why are we struggling to type out a restaurant search when we could just snap a photo of the sign outside and, with OCR and GPS, go straight to the Yelp review without typing a thing?

The point is this: the things we interact with in our physical world should also interact with our phones. Geode from Mozilla Labs is a fantastic example: by creating geolocational hotspots based on GPS or W3C geolocation data, it alters the user experience according to the situation. For example, you could target all local theaters to send your phone into silent mode, or target roads plus a 20-foot buffer to put your phone into car/speaker mode. Why do all of the interactions have to happen between the user and the phone?
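The hotspot idea is simple enough to sketch. Here's a minimal geofencing example in Python; the coordinates, radii, mode names and function names are all hypothetical illustrations, not Geode's actual API:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # Earth's mean radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical hotspots: (lat, lon, radius in meters, phone mode)
HOTSPOTS = [
    (37.7793, -122.4193, 50, "silent"),   # a local theater
    (37.7800, -122.4100, 6, "speaker"),   # a road with a ~20 ft buffer
]

def mode_for_position(lat, lon, default="ring"):
    """Return the mode triggered by the first hotspot enclosing this position."""
    for hlat, hlon, radius, mode in HOTSPOTS:
        if haversine_m(lat, lon, hlat, hlon) <= radius:
            return mode
    return default
```

Feed it the phone's current fix and it answers "what mode should I be in right now?" — the situational awareness the post is asking for, with no user interaction at all.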

Beyond its physical environment, a phone remains inexorably tied to our vast network of 'tubes. Case in point: Google leveraged its off-site computational power when it enabled voice search, using the phone to record the voice and nothing more. What other resource-intensive activities could take place somewhere else while the phone merely provides access? How about using your desktop to stream your 250GB music collection while your precious 8GB of local memory holds just email, pics and contacts?

How can our common and expected environments inform and interact with our phones? Why do we see the only inputs as ASCII characters and positional mousing? What if your home phone cried out "Bring Umbrella" when the morning forecast was bad? What if the phone looked up places to get cocoa when the temperature dropped below 40°? (Yes, these are awful ideas; the key is to use the improv adage of "yes, and…".) Let's see some comments and suggestions after the jump.

Update: Just saw this pop up on Slashdot from the ongoing TED conference. Amazingly relevant.


For all its failures, the BlackBerry Storm's clickable screen was an innovation ahead of its time. Roundly criticized for slowing typing by requiring the user to physically depress the screen on every keypress, it nevertheless dared to add a whole other layer of tactile communication to what had been just a visual display. Reversing this fundamental concept and throwing in a dash of technological whimsy, I present the first MozPhone Throw-away Concept: OLED Blackberry.


But this is no great innovation; it's an Optimus Keyboard plus a BlackBerry 7130. What I do think is significant is the goal it works toward and the channels of communication it opens up. If the greatest challenge for phones is their limited ability to receive and provide information (see thumbboard, see 3″ display), then it follows that every square inch of the phone must eventually be utilized in as many different ways as possible. What if, instead of predictive text, a predictive keyboard displayed the expected characters, letting the user preempt a mistake rather than fix it after the fact?
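One way to sketch that predictive keyboard: count, over a word list, which characters follow each typed prefix, then surface the top few as the keys an OLED keyboard might light up or enlarge. A toy sketch, assuming a tiny made-up word list (the function names are mine, not any shipping phone's):

```python
from collections import defaultdict

def build_model(words):
    """Map each prefix to counts of the characters that follow it."""
    nxt = defaultdict(lambda: defaultdict(int))
    for word in words:
        for i in range(len(word)):
            nxt[word[:i]][word[i]] += 1
    return nxt

def likely_keys(model, typed, k=3):
    """The k keys a predictive keyboard would highlight after `typed`."""
    counts = model.get(typed, {})
    return sorted(counts, key=counts.get, reverse=True)[:k]

model = build_model(["the", "then", "they", "them", "that", "this"])
```

With this corpus, typing "th" would light up "e" first, since four of the six words continue that way. A real keyboard would swap in an n-gram or dictionary model, but the channel is the same: the keys themselves display what the software expects next.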


Another interesting aspect of this solution is how the content is separated from the interface. No screen real estate is sacrificed for "New Tab" or "Back" buttons; each channel of communication is tailored for the purpose it serves best. It takes what was a maddeningly soft interface and makes it hard.

So where does this idea go? How can every square millimeter of a phone both input and output information? How can our buttons, screens, speakers and trackballs communicate to more of our senses? How about a volume wheel that gets harder to turn the louder it goes? Let's see some comments below.




So, in designing a Mozilla-branded phone, half the equation is how Mozilla figures into things. We must understand its significance to users, then communicate that across disciplines and into the phone's physical embodiment. In approaching this problem, I had very little to go on beyond my own four years of personal experience using Firefox and Thunderbird. Mozilla doesn't buy airtime to make abstract expressions of its identity, and I doubt it has ever paid an agency to put together a mood board.

That leaves us with the combined insight of its users in defining what Mozilla and its subordinate products mean to the phone. What will make it a Mozilla Phone? How will the Mozilla Phone make you feel? What philosophies of user interaction can be ported over? Let's see some comments after the jump, and I'll give you my impression of the overall brand impact of Mozilla.


In what was the most relevant comment I received about my last project, Ron Brinkman linked me to his great discussion of non-Euclidean display modes. In the small, enclosed mobile display space, this technology deserves more attention than it's getting. Up to now, the biggest innovation in information display has been zoom and pan, perfected by the iPhone and copied by the Palm Pre with multi-touch pinch-to-zoom. But why should we restrict the geometry of how we manipulate information? (Having worked on the Nike Hindsight concept, I'm no stranger to the benefits of visual distortion.)

Using the Vimeo vid above as a starting point, where could you take this in the physical domain? Are there more sophisticated and appropriate means of mathematically distorting the information, e.g. spherical vs. hyperbolic? Could the screen extend around curved edges onto the back? What about a static point of enlargement that you pan the webpage into? Gimme some insights, people.
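For the mathematically inclined, one classic starting point for this kind of distortion is Sarkar and Brown's graphical fisheye: magnify coordinates near a focus point and compress them toward the edges, while keeping the edges of the screen fixed. A one-dimensional sketch in normalized coordinates (the parameter names are mine; `d` is the distortion strength):

```python
def fisheye(x, focus, d=3.0):
    """Graphical fisheye in 1D: magnify space around `focus` (Sarkar-Brown).

    x and focus are normalized screen coordinates in [0, 1];
    larger d means stronger magnification at the focus.
    """
    dx = x - focus
    sign = 1 if dx >= 0 else -1
    room = (1 - focus) if dx >= 0 else focus  # space on this side of the focus
    if room == 0:
        return x
    t = abs(dx) / room                 # normalized distance from the focus
    g = ((d + 1) * t) / (d * t + 1)    # the fisheye transfer function
    return focus + sign * room * g
```

Apply it to both axes and a flat page bulges around wherever the user touches, with the screen boundary staying put: `fisheye(0.6, 0.5)` maps a point just right of center out to 0.75, while the focus itself and the screen edges don't move.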



The world is abuzz with the Palm Pre. Hailed as the first plausible iPhone killer, it offers every advantage of the fabled phone with extras kicked in. But is it a fair fight? You don't have to look very hard to find features and interface enhancements borrowed from other phones and OSes. Clearly the Pre wouldn't be what it is without its predecessors' hard-won innovations. To take the most obvious example, multi-touch technology and ideas have been around since '82, but Apple was the first to envision their use in the mobile space. It's a clear winner in the user-interface game, no question about that; Palm must have liked it too, since they included it on the Pre.

However, if you sit through Palm's keynote speech, they clearly don't care to dwell on what they borrowed. They mention multi-touch once, just to show it's there, and then dismiss it. The bottom-of-screen app bar is an assumed interface, never mind its similarity to the iPhone's. The fast and reliable WebKit browser? No longer unique or special; move along. Robust third-party-driven app marketplace? Check.

What they do love talking about is how great webOS is at the one thing the others aren't: multi-tasking and replicating the desktop environment. If you look at all the ways the Pre benefited from its predecessors, you might find that their failings informed more of Palm's decisions than their successes did. Of course, learning from your enemies' mistakes is not exactly a new and innovative strategy. The absurdity lies in Apple's threats to pursue anybody who "rips off our IP" when the multi-touch tech in question comprises so little of Palm's fantastic UI lead over Apple. Google might as well try to sue Palm over its Card system of multi-tasking, given Android's "unique" application drawer. In the end, the lawyers are fighting over peanuts of intellectual property while the war is being won by well-executed systems that make phones act how we want them to.

The question now, of course, is what did the Pre mess up? What did the iPhone not get quite right? What Fail Giants can we clamber on top of to create something new? These are very open-ended questions, so there's no excuse for not leaving a comment or two.


Anyone who has ever owned a RAZR knows the 800 milliseconds of hell between pushing "contacts" and actually seeing the menu come up. Such a simple operation is no longer a great tax on our phones' processors, but of course we expect a great deal more these days than just a list of recent calls. For all our demands, we are still gladdened when a manufacturer gives a phone enough power for snappy transitions and speedy website rendering. Case in point: for all its praise, the Pre doesn't do anything fantastically new, it just does things fantastically fast, be it through an elegant interface or its beefy OMAP processor.

Similarly, one of the death knells of the BlackBerry Storm was its wonky, slow software, which laid a touchscreen over the old OS like a cheap paint job. The question is: what's the future? Will speed comparisons flatten out as manufacturers and engineers catch up with one another?

More importantly, what can we do about speed? Nobody here is about to revolutionize 65-nanometer CPU architecture, but how can we design around, and for, this problem? How about heatsinking the CPU through the back of the aluminum case and overclocking the processors we already use? (Comically illustrated below.)


The phone above is obviously absurd, but it hopefully illustrates a point. Alternately, should we forgo the glossy, translucent buttons that look oh so pretty but take oh so long to render? Let's see some comments and suggestions on how industrial or interface design could sate our thirst for a snappy, snappy phone, and whether it can be done while maintaining the high standards of beauty we've come to expect.


