For all information relating to the Mozilla Seabird project, see MozillaLabs.com/Seabird
For information about the designer, see his website Billy-May.com
September 23, 2010
February 16, 2009
One fundamental variable has become obvious through this project: how we use not just our phones, but our desktops and laptops. Even with the line between devices blurred by phones’ rapid improvement and by in-betweeners like netbooks and UMPCs, one fairly concrete question still separates them. Do you use the device to create content, or only to consume it? Would you write a novel on a given device, or only read one? Could you mix and produce audio tracks on it, or only queue up your favorite playlist?
While usage behavior would likely vary wildly between individuals, there is a distinct line that defines what we will and won’t do with our phones. What, then, are the variables that draw this distinction? The size of the QWERTY keys? The size of the screen? CPU power? Is this usage barrier even surmountable with today’s technology? As a partial answer, let’s look at where the barrier is already crumbling.
In the beginning, camera phones accomplished little more than data capture; at best they showed that some person was somewhere at some time. Creating genuinely valuable content was simply not possible with 1.3 megapixels. Fast forward to the Sony Ericsson Idou: its 12.1-megapixel camera might not suffice for a presidential portrait, but it’s good enough for an upside-down friend, a keg and three ‘supportive’ frat brothers. Throw in a basic image-adjustment app like Photogene and you have everything a non-professional photographer would ever need out of a camera.
What other fields are ripe for the picking? How can our phones adapt and grow from mere consumption to actual creation? What tools that we now use for data capture (notepads, voice recording, etc.) could evolve into tools of true data creation? Remember, think in physical terms, and sound off below.
February 9, 2009
The role of beauty in our devices is not a greatly debated topic; we want it, simple as that. Sexism aside, I want my phone to be a hot, sexy thing that fits inside my pocket. The question, then, is how do we get there? The tyranny of the iPhone’s irreproachable aesthetic has been among the hardest things for manufacturers to overcome. With content now reigning supreme, it seems futile to resist the efficiency of Apple’s damnable rectangle. How does the Mozilla Phone break from that aesthetic? How does it visually stand out and make its own statement about your content, the web, and you as a user?
Of the formal problems facing the Mozilla Phone, a revealing perspective is how the material choices reflect on functional elements. Consider this breakdown: if we separate the device into the three discrete elements of structure (metal case), substrate (glass touchscreen) and content (LCD), the object as a whole becomes directly comparable to other categories. Consider a car’s metal chassis, glass windows and interior comforts, or a fine watch’s metal band, sapphire crystal and watch face, or even a building’s steel superstructure, glass façade and interior living spaces. All employ a quasi-hierarchical relationship between structure, substrate and content, however loosely applied.
Of course, these are sweeping generalities, but they allow useful comparisons for thinking about the problem at hand. For example, if you messed with the typical relationship between structure and substrate like below, how would that translate to the phone?
How else could that paradigm be extended and reworked? How can we get away from the monotony of polished aluminum and tinted glass? I’m focusing on material choice and composition here, but what other avenues could we consider in creating a unique aesthetic? I pointed to watches and cars as possible form inspiration, but what other categories might give useful input to gadget design? How beholden are we to the efficiency of rectangular content? I would love to break out of that overall shape, but I almost feel there’s something inherently unrealistic about that. I’m only beginning to discover the difficulty of designing a visually unique phone while maintaining functionality, so don’t expect this to be the last post on the topic.
Let’s see some comments, people.
February 5, 2009
Our phones do not exist in sterile black studios, operated by disembodied hands; they are surrounded by the wonders of the modern world, yet for some reason do surprisingly little with them. At any given moment, most of us are within 20 feet of a full-size QWERTY keyboard, yet we toil away on one that’s either minuscule or completely virtual. Why are we struggling to type out a restaurant review search when we could just take a snapshot of the sign outside and, with OCR and GPS, go straight to the Yelp review without typing a thing?
The point is this: the things we interact with in our physical world should also interact with our phones. Geode from Mozilla Labs is a fantastic example: by creating geolocational hotspots based on GPS or W3C Geolocation data, it can alter the user experience to fit the situation. For example, you could target all local theaters to send your phone into silent mode, or target roads plus a 20-foot buffer to put it into car/speaker mode. Why do all of the interactions have to happen between the user and the phone?
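To make the hotspot idea concrete, here’s a minimal sketch of the rule those examples imply (the coordinates, radii and profile names are all made up by me, and this isn’t how Geode itself is implemented): a position fix comes in, and the first hotspot whose radius contains it decides the phone’s profile.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical hotspots: (lat, lon, radius in meters, profile to switch to)
HOTSPOTS = [
    (37.7816, -122.4105, 50.0, "silent"),   # a local theater
    (37.7900, -122.4000, 6.0,  "speaker"),  # a road, with roughly a 20-foot buffer
]

def profile_for(lat, lon, default="normal"):
    """Return the profile of the first hotspot containing the fix."""
    for hlat, hlon, radius, profile in HOTSPOTS:
        if haversine_m(lat, lon, hlat, hlon) <= radius:
            return profile
    return default
```

The interesting design question is upstream of this loop: who authors the hotspots, and whether they come from the user, the venue, or the crowd.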
Beyond its physical environment, a phone remains inexorably tied to our vast network of ‘tubes. Case in point: Google leveraged its off-site computational power when it enabled voice search, using the phone only to record the voice and nothing more. What other resource-intensive activities could take place somewhere else, with the phone serving only as the access point? How about using your desktop to stream your 250 GB music collection while your precious 8 GB of local memory holds just email, pics and contacts?
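A toy sketch of that desktop-streams, phone-indexes split (every class and method name here is mine, not any real API): the phone mirrors only the tiny index of track names, and the desktop serves the actual bytes on demand.

```python
class DesktopLibrary:
    """Stands in for the home machine holding the full 250 GB collection."""

    def __init__(self, tracks):
        self._tracks = tracks  # track name -> audio bytes

    def names(self):
        # Tiny metadata, cheap to mirror onto the phone.
        return list(self._tracks)

    def stream(self, name, chunk_size=4096):
        """Yield the track in chunks, as a network transfer would."""
        data = self._tracks[name]
        for i in range(0, len(data), chunk_size):
            yield data[i:i + chunk_size]


class PhoneClient:
    """Holds only the index; pulls audio over the 'network' on demand."""

    def __init__(self, library):
        self.library = library
        self.index = library.names()  # all the phone actually stores

    def play(self, name):
        # Reassemble the streamed chunks; a real player would buffer instead.
        return b"".join(self.library.stream(name))
```

The phone-side storage cost is a list of names; the audio itself never has to live in those precious 8 GB.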
How can our common, expected environments inform and interact with our phones? Why do we treat ASCII characters and positional mousing as the only inputs? What if your home phone cried out “Bring umbrella” when the morning forecast was bad? What if the phone looked up places to get cocoa when the temperature dropped below 40º? (Yes, these are awful ideas; the key is to use the improv adage of “yes, and…”.) Let’s see some comments and suggestions after the jump.
Update: Just saw this pop up on slashdot from the ongoing TED conference. Amazingly relevant.
February 3, 2009
For all its failures, the BlackBerry Storm’s clickable screen was an innovation ahead of its time. Roundly criticized for slowing typing by requiring the user to depress the on-screen button after touching it, it dared to add a whole other layer of tactile communication to what had been a purely visual display. Reversing this fundamental concept and throwing in a dash of technological whimsy, I present the first MozPhone throw-away concept: the OLED BlackBerry.
But this is no great innovation; it’s an Optimus Keyboard plus a BlackBerry 7130. What I do think is significant is the goal it works toward and the channels of communication it opens up. If the greatest challenge for phones is their limited ability to receive and provide information (see thumboard, see 3″ display), then it follows that every square inch of the phone must eventually be utilized in as many different ways as possible. What if, instead of predictive text, a predictive keyboard displayed the expected next characters, letting the user preempt a mistake rather than fix it after the fact?
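As a rough sketch of how those key predictions might be computed (the toy dictionary and function name are my own assumptions): given the characters typed so far, light up exactly the keys that can still lead to a word.

```python
# Toy dictionary; a real keyboard would use a trie over a full word list.
WORDS = ["the", "then", "there", "this", "that", "those"]

def next_keys(prefix, words=WORDS):
    """Return the sorted set of characters that could legally follow
    the typed prefix, i.e. the keys the OLED keyboard should light up."""
    return sorted({w[len(prefix)] for w in words
                   if w.startswith(prefix) and len(w) > len(prefix)})
```

After typing “th”, only the keys a, e, i and o would glow, and a stab at any dark key is caught before it ever becomes a typo.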
Another interesting way of looking at this solution is how the content is separated from the interface. No screen real estate is sacrificed for “New Tab” or “Back” buttons, each channel of communication perfectly tailored for the purpose it serves best. It takes what was a maddeningly soft interface and makes it hard.
So where does this idea go? How can every square millimeter of a phone both receive input from and output information to the user? How can our buttons, screens, speakers and trackballs communicate to more of our senses? How about a volume wheel that gets harder to turn the louder it goes? Let’s see some comments below.
January 26, 2009
So, in designing a Mozilla-branded phone, half the equation is how Mozilla itself figures in. We must understand its significance to users, then communicate that across disciplines and into the phone’s physical embodiment. In approaching this problem I had very little to go on beyond my own four years of personal experience using Firefox and Thunderbird. Mozilla doesn’t buy airtime to make abstract expressions of its identity, and I doubt it has ever paid an agency to put together a mood board.
That leaves us with the combined insight of its users in defining what Mozilla and its subordinate products mean to the phone. What will make it a Mozilla Phone? How will the Mozilla Phone make you feel? What philosophies of user interaction can be ported over? Let’s see some comments after the jump, and I’ll give you my impression of the overall brand impact of Mozilla.
January 26, 2009
In what was the most relevant comment I received about my last project, Ron Brinkman linked me to his great discussion of non-Euclidean display modes. In the small, enclosed space of a mobile display, this technology deserves more attention than it’s getting. Up to now, the biggest innovation in information display has been zoom and pan, perfected by the iPhone’s multi-touch pinch-to-zoom and copied by the Palm Pre. But why should we restrict the geometry of how we manipulate information? (Having worked on the Nike Hindsight concept, I’m no stranger to the benefits of visual distortion.)
Using the Vimeo vid above as a starting point, where could you take this in the physical domain? Are there more sophisticated and appropriate means of mathematically distorting the information, e.g. spherical vs. hyperbolic? Could the screen extend around curved edges onto the back? What about a static point of enlargement into which you pan the webpage? Gimme some insights, people.
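For the “mathematically distorting” question, one classic starting point is the graphical fisheye view of Sarkar and Brown: a radial function that magnifies around a focus while pinning the screen edge in place. A minimal sketch, with distances normalized to [0, 1] from the focus and a distortion factor I picked arbitrarily:

```python
def fisheye(r, d=3.0):
    """Sarkar-Brown graphical fisheye transform.

    Maps a normalized distance r in [0, 1] from the focus point to a
    distorted distance: content near the focus is pushed outward
    (magnified), while r = 0 and r = 1 stay fixed, so the screen edge
    never moves. Larger d means stronger center magnification.
    """
    return (d + 1) * r / (d * r + 1)
```

On a phone you would apply this to every pixel’s distance from the touch point, giving a movable magnifying bubble instead of a uniform pinch-to-zoom, with the whole page always visible around the edges.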