Wednesday 2 April 2014

RIP Windows XP: The story behind ‘Bliss,’ the most iconic wallpaper of all time


Bliss, original photo by Chuck O'Rear


The photo you see above — the default wallpaper for Windows XP — is probably the most recognizable image in the world. What you probably didn’t know is that it’s a real photo, called Bliss, taken by Charles “Chuck” O’Rear in 1996, and then sold to Microsoft for an undisclosed sum (but it was apparently one of the highest prices ever paid for a single photo). It seems fitting that, on April 1, a week before the April 8 retirement of Windows XP, we should honor technology’s most iconic image.

The massive proliferation of Windows XP, due to rampant piracy (mostly in Asia) and the fact that it just won’t die, means that it’s the most popular operating system of all time. Chuck, speaking to CNET, estimates that his Bliss photo has been seen by “billions” of people worldwide. “Recently … an American photographer was allowed to go into North Korea. One of [the photographer's images] was in some power plant, there’s a big board where two men were sitting. What’s on the screen? Bliss.” Chuck also says he once saw a photo of the White House situation room — and again, there were 10 or 15 monitors all showing his photo.

Bliss was taken way back in 1996, as Chuck was driving through northern California’s wine growing territory. He was on his way to see his then-girlfriend Daphne (now his wife), when the clouds suddenly parted over the emerald green grass, and he decided to get out of his car and shoot a couple of frames on his Mamiya RZ67 medium format film camera. Later, he would upload the photo to the Corbis stock photo library (founded and owned by Bill Gates, incidentally), whereupon Microsoft would find the image in the lead-up to Windows XP’s release in 2001.


The Windows XP default wallpaper. Note the slightly more vibrant greens and blues.
Bliss was purchased for an undisclosed sum, and a non-disclosure agreement prevents Chuck from giving away any of the juicy details. Chuck does stress, however, that despite the photo’s fantastical, almost dream-like appearance, it came straight out of the camera. Microsoft did crop the image slightly and increased the vibrancy of the grass, but otherwise it’s untouched. Chuck even tells a funny story about how Microsoft’s own engineering team emailed him back in the mid-2000s, saying that most of them thought the image was Photoshopped. “Sorry guys, you’re all wrong,” he says. “It’s the real deal, it’s near where I live, and what you see is what you get. It has not been touched.”

The Bliss photo, side-by-side with the same location — but after the season has ended

As you may already know, Windows XP will finally be retired on April 8, 2014 — almost 13 years after it was first released. That doesn’t mean that Bliss will die out, though — XP has been installed on billions of PCs, and to this day it’s still installed on around 25-30% of all internet-connected PCs (and probably a lot more non-connected PCs in developing parts of the world). Judging by how slowly its market share is dropping, it’s fairly safe to assume that Windows XP is still being actively installed in some parts of the world, too. Microsoft, for its part, is offering Windows XP users a $100 discount if they buy a new PC — but of course, the trade-off is that you’ll then have to use Windows 8… and Windows 8 is one of the reasons people are still using Windows XP.

Nvidia GTC: The GPU has come of age for general-purpose computing


Titan supercomputer featuring GPU along with CPU

Traditionally, powerful graphics processors have been useful mostly to gamers looking for realistic experiences, along with engineers and creatives who need 3D modeling functionality. After spending a few days at this year’s Nvidia GPU Technology Conference (GTC), it is very clear that the uses for GPUs have exploded — they have become an essential element in dozens of computing domains. As one attendee suggested to me, GPUs could now be better described as application co-processors.

GPUs are a natural fit for compute-intensive applications because they can process hundreds or even thousands of pieces of data at the same time. Modern GPUs can have several thousand reduced-instruction-set cores that can operate in groups across large amounts of data in parallel. Nvidia’s release of its CUDA (Compute Unified Device Architecture) SDK in 2007 helped usher in an era of explosive growth for general-purpose programming on GPUs (often referred to as GPGPU).
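To make the data-parallel model concrete, here is a minimal sketch of a CUDA-style kernel (one lightweight thread per array element), written in Python with the Numba package’s CUDA bindings. It is an illustration of the general idea rather than anything shown at the conference, and it assumes an Nvidia GPU with Numba installed:

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(data, gain):
    # Each GPU thread handles exactly one element of the array.
    i = cuda.grid(1)                      # this thread's global index
    if i < data.shape[0]:                 # guard the padded final block
        data[i] *= gain

pixels = np.random.rand(1_000_000).astype(np.float32)
d_pixels = cuda.to_device(pixels)         # copy the data into GPU memory

threads_per_block = 256
blocks = (pixels.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](d_pixels, 1.2)   # ~1M elements scaled in parallel

result = d_pixels.copy_to_host()          # copy the results back to the CPU
```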

Imaging and vision have a lot in common with graphics

GPU accelerated applications leverage both the GPU and the CPU

Two of the most active markets for GPGPU computing are image processing and computer vision. Much like computer graphics, they both require running algorithms over potentially millions of elements in realtime — exactly what a GPU is designed to do well. One of the most amazing demonstrations of the power even a mobile GPU can bring to bear on computer vision is Google’s Project Tango. With only a tricked-out smartphone, Tango records over 250,000 3D measurements each second, using them to create a highly accurate map of the surrounding building — including rooms, furniture, and stairwells — as the user walks around. To do that, it uses not just a state-of-the-art Nvidia mobile GPU — which project lead Johnny Lee points out has more computing horsepower than the DARPA-challenge-winning autonomous vehicle from 2005 — but also two custom vision processors.

To give you a sense of how fast a GPU can accomplish tasks using parallel processing, Mythbusters did this amusing and instructive demonstration for Nvidia:


Big data is a lot like big graphics

Delphi showed off a prototype user-customizable car dashboard where the power of the GPU allowed drivers to theme their car's display

It didn’t take long for the big data craze to tap into GPU horsepower either. For example, startup Map-D has found a way to use GPUs and their memory to implement an ultra-high-speed SQL-compatible database, allowing it to analyze enormous amounts of data in near-realtime. One of its eye-opening demos is an interactive data browser of tweets worldwide: the system allows realtime analysis of one billion tweets using eight Nvidia Tesla boards on a server. Map-D won the Emerging Company Showcase at GTC with its cool demos, but it wasn’t the only startup showcasing the use of GPUs for big data hacking. Brytlyt is using Nvidia GPU cards to run, in just six minutes, queries that it says would take Google’s BigQuery 30 years. Brytlyt’s software will enable large retailers to do better interactive promotions and targeted marketing by allowing them to react quickly to customer location and actual purchases.

Global Valuation uses GPUs to tame big data in the back office of financial firms. It is tackling the esoteric job of risk management for firms holding massive, interconnected portfolios of derivative securities. Apparently — despite the lessons learned in the financial markets meltdown — current risk management tools (even running on 30,000 CPU cores) can only run a fraction of the scenarios needed to accurately evaluate risk. They’re also much too slow to run in realtime before trades are made — leaving companies exposed during the trading day. Running in GPU memory, Global Valuation says it can process 100,000 interconnected scenarios in under a second — fast enough to double-check a company’s portfolio risk before each new trade is made.


Samsung launches 28-inch 4K billion-color UD590 monitor for just $700


Samsung UD590 4K monitor, from the front


Time to get that second graphics card, guys: Samsung has announced that its 28-inch 4K UD590 monitor will soon be available in the US for the paltry sum of $700. The UD590 packs a 28-inch 3840×2160 TN panel (157 PPI), capable of displaying 10-bit color (1 billion colors) at 60Hz with a 1ms GTG response time. Priced at $700, this is probably your best bet if you want to try your hand at 4K gaming — or, if you’re a creative of some kind, the idea of 10-bit color on a 4K display should be very, very alluring.

From what we can tell, the UD590 has been available as a gray import from South Korea for a month or two — but now Samsung is preparing to launch it officially in the USA. Design-wise, the UD590 has a very minimal stand and bezel that’s finished in silver and black — in my opinion it’s much more attractive than the Dell P2815Q, the other $700 4K monitor that’s currently on the market. There are no DVI connectors; instead you get two HDMI connectors and a single DisplayPort connector (which is what you need to use if you want 3840×2160 @ 60Hz).
Samsung UD590 4K monitor, bezel detail

But enough about the minutiae — let’s get down to the nitty-gritty here. In particular, that 10-bit TN (twisted nematic) panel. As you may know, in today’s market there are two primary underlying technologies used in LCD displays: TN and IPS (including Samsung’s PLS). TN is cheap and fast, but IPS has wider viewing angles and generally better color fidelity. Personally, I have never heard of a 10-bit TN panel before; usually, if you want 10-bit color (10 bits per color channel, or 30 bits per pixel), you have to spend a lot of money ($1000+) on a professional-grade IPS monitor. To think that you can get a 10-bit 4K monitor for $700 makes me a little dizzy. (Read: No, TV makers, 4K and UHD are not the same thing.)
Samsung UD590 4K monitor, side profile

Unfortunately, even though the UD590 has been available as a gray import, none of the reviews online appear to mention the monitor’s color fidelity or image quality. The official Samsung specs don’t offer much in the way of guidance, except to say that it can display 1.07 billion colors, that it has a 1-millisecond gray-to-gray (GTG) response time, standard 300 cd/m² brightness and 1000:1 contrast (or 10,000,000:1 dynamic contrast, if you prefer), and that it has the usual poor viewing angles associated with TN panels.
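For context, the 1.07 billion figure is just bit-depth arithmetic: the panel allocates 10 bits to each of the red, green, and blue channels, and the number of displayable colors is that per-channel count cubed. A quick check in Python:

```python
# Colors available at a given bit depth per channel (R, G, B).
def colors(bits_per_channel):
    return (2 ** bits_per_channel) ** 3

print(f"{colors(6):,}")    # 262,144 -- budget 6-bit TN panels (before dithering)
print(f"{colors(8):,}")    # 16,777,216 -- typical 8-bit desktop monitors
print(f"{colors(10):,}")   # 1,073,741,824 -- the 1.07 billion Samsung quotes
```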

So, for $700 you can get your hands on a 28-inch 4K desktop monitor. 28 inches is a little too large for a normal home or office setup, especially if you have multiple monitors, but it’s workable. We’ll have to wait and see about the UD590′s image quality, but we pray that Samsung’s claim of 10-bit color isn’t some kind of horrendous half truth (“er, it’s 6-bit, with four extra bits that we can kinda sometimes use on leap years”). The question is, though, is it really the right time to buy a 4K monitor? Gaming at 4K at a decent frame rate is still a bit unreachable, even with a dual-Titan setup. 4K makes a lot of sense for professional designers and photographers — but for them, buying a TN panel with poor image quality is a fate worse than death.
I think the only solution is that I’m going to have to buy the UD590 and a few Nvidia Titan graphics cards, and report back. I owe it to you guys.

Apple wants to bypass net neutrality for its own streaming video service


Apple TV

Apple has been toying with the idea of modernizing the living room for the better part of a decade now, and it seems as if Cupertino is on the cusp of a breakthrough. Apple is reportedly in talks with Comcast to bring a modern video streaming service to its set-top boxes, and it doesn’t want to compete with the congestion of the public internet. If this deal progresses, we could even see Comcast and Apple sidestep net neutrality altogether.

According to a recent article in the Wall Street Journal, Apple and Comcast are currently working out the details that would enable a deeply intertwined business partnership. Supposedly, Apple wants to offer on-demand video and live broadcasts over IP without any of the buffering or dips in quality that services like Netflix and Amazon Prime are afflicted with. To execute on that vision, Cupertino wants to implement a dedicated service with America’s largest cable provider, and bypass the internet completely.

Comcast Apple TV

Fundamentally, this strategy treats the proposed streaming service more like a traditional cable set-up than a Netflix competitor. Sure, the data is being delivered in packets, but it just sounds like a modernized implementation of what already exists. The article makes it very clear that Apple “isn’t asking for its traffic to be prioritized over other Internet-based services,” so concerns over net neutrality become more complex. This isn’t a direct attack on net neutrality, but it does blur the lines between content company and utility company.

At what point will this private video distribution be seen as anti-competitive? What safeguards will be put in place to prevent Apple and Comcast from eroding available bandwidth from internet traffic to bolster their private services in the future? All of this is still up in the air, and these questions won’t be answered any time soon. The FCC is slowly working towards implementing net neutrality regulations, but companies like Verizon, AT&T, and Comcast are doing their damnedest to prevent progress.

I would love to see how an Apple-designed video streaming solution would work, but not at the expense of the internet at large. Private IP networks could potentially be useful tools for content providers, but only if the internet itself is protected. It’d be really nice if Mythbusters didn’t need to buffer, but that’s not worth eroding the public internet. Even if this Apple-Comcast deal never happens, we need to remain on alert. The big ISPs have a history of bad behavior, and that isn’t going to change without severe regulatory pressure.

Facebook details its plans to bring drone internet access to the masses – but will monopolistic telcos stand idly by?


Titan Aerospace


Earlier this month, Facebook announced that it was developing its own drone-based plan for global internet coverage, to compete against the likes of Google’s balloon-based Project Loon. On Friday, Zuckerberg unveiled a more detailed paper on that proposal, discussing why the company believes that drones are a better technology than balloons, what it hopes to accomplish, and where it believes the market will go in the future.

Like Project Loon, Dronebook (not an actual product name) is designed to solve the problem of limited internet access across the globe. The existing map of internet coverage looks like this:
Internet coverage
If you’re in the business of getting people online and into your own service network, this is something of a problem. Two-thirds of the world’s population remains off the grid, and the challenges of wiring these spaces are enormous. Thanks to low population densities, impoverished citizens, challenging terrain, or significant levels of sociopolitical unrest, there are many areas of the world without a realistic plan for deploying internet access in the near future.
Facebook wants to change that, and it’s betting that drones can do a better job. The key arguments from Zuckerberg’s whitepaper are:
  • Solar-powered drones can remain in the air for much longer periods than their balloon counterparts.
  • Unlike balloons, which drift on the wind with limited controls, drones can remain directly over a specific city or area.
  • Unlike balloons, drones can be easily serviced and returned to flight.
In most other respects, Project Loon and FB’s drone project are similar. They target the same atmospheric height and they try to solve the same problem — tossing cheap, regional slices of internet access down from the heavens rather than relying on vastly more expensive satellites to do the trick.

Of profits and censorship

There are two major flaws that neither Google nor Facebook have addressed to date and they’ve got nothing to do with the blue sky research either company is conducting. First, there’s the very real question of how the telecommunications industry is likely to react to the widespread deployment of either technology. Right now, the likes of AT&T, Time Warner, and Comcast don’t care much about satellite providers because satellite internet is a miserable experience that no one in their right mind would ever purchase. With a round-trip latency of 1000-1500ms and sharp restrictions on monthly bandwidth, satellite internet is the internet of last resort — and the cable companies and telcos know it. Proposed systems that would substantially reduce the massive latency of satellite internet access remain untested.

A Google balloon or Facebook drone capable of throwing WiFi signals across an entire city or town is exactly the kind of threat that these companies wouldn’t take kindly to — particularly if FB or Google provided the service for free or at a sharply reduced rate. Expect a serious fight on this front if Google or FB moves towards making these projects a reality; high altitude WiFi would undercut the entire business model cellular networks depend on, and these companies do not play fair when it comes to writing laws that favor their own solutions at the expense of everyone else.

The second significant challenge to the idea of aerial internet is that there are plenty of governments in the world with zero interest in allowing unrestricted access to the internet — including many of the areas that most need the kind of projects Google and FB are proposing. Even governments that, unlike North Korea, don’t explicitly keep citizens in the dark as part of a general policy of non-communication aren’t likely to be thrilled with Google and Facebook beaming uncensored internet links to their cities from the skies. Bringing this technology to remote parts of the world is going to mean playing by the rules of nations that aren’t necessarily friendly to the unrestricted flow of information.
Finally, as we’ve discussed before, these projects aren’t philanthropic endeavors — at least, not entirely. No matter how noble the aims of both Google and Facebook, a big part of this effort is aimed at getting people online and into their own service networks. If both companies push ahead with their respective plans, it could open up an entire new vista of televised network entertainment: Balloons versus Drones — Aerial Combat at 60,000 Feet.

ET deals: Dell Optiplex desktop and 24-inch monitor for $980


Optiplex


Professional computer setups can often get expensive, and it’s not every day we see a powerful desktop and monitor bundle dip under $1000. So we were pretty excited to see a well spec’d Optiplex desktop and 24-inch UltraSharp monitor hit $980 this week, making it a great value for some pro-level hardware.

This Optiplex 7010 comes equipped with a quad-core Core i7-3770 processor, Radeon HD 7470 dedicated graphics, and Windows 7 Pro, so you’ll have the power and tools to zip through even the more demanding tasks. The bundled UltraSharp U2412M features a 24-inch 1920×1200 16:10 IPS anti-glare display with wide 178-degree viewing angles, giving you a sharp image with plenty of screen real estate for your work. Together they make a sweet combo that should be a good fit for almost any need.

This mini tower desktop also includes 4GB RAM and a 500GB hard drive, which is enough for most uses and can easily be upgraded down the line. 802.11n WiFi is built-in, and you’ll find a total of 10 USB ports (four USB 3.0) to hook up your peripherals, as well as two DisplayPorts, VGA, a 19-in-1 card reader, and more.

The Dell UltraSharp U2412M 24-inch IPS monitor

The U2412M comes with a wide array of ports, packing DVI, DisplayPort, VGA, and a USB hub. It comes on a height-adjustable stand that is easy to tilt or swivel, and it’s also VESA mountable. No matter where or how you want to hook it up, Dell makes it easy here.

One of the best parts of opting for business class is the support that comes with it, and Dell includes a 3-year warranty on both of these items. That includes on site service for the desktop, and an exchange policy for the monitor if so much as a single bright pixel is found. Right now with some nice instant savings and two of our limited time coupons, you can snag this bundle for 41% off the typical price.

Tuesday 1 April 2014

DoD: To conquer nations and budgets, combat must go totally autonomous


The Pirate Bay UAV drone

In a turn of phrase that seems designed to provoke headlines, the US Department of Defense this week said one of its primary goals is to “take the ‘man’ out of unmanned” combat. This quote and much more comes from the latest in the Department’s ongoing series of Roadmap to the Future reports, which seek to lay out both the current realities and future plans of the US military and defense industry. This time, the topic was ripped straight from the headlines: remote combat systems.
While the American military has for a long time remained static in terms of overall manpower, one type of recruit it just can’t seem to get enough of is drone pilots. It’s not just the US, either; in the UK, they’re so desperate to meet their need for highly skilled cyber-warriors that they recently threw out the physical fitness requirements for those positions. However, as much potential as there is for an unmanned future, the most recent update (PDF) on unmanned systems policy shows that it’s autonomy that really interests the DoD.

The word “drone” does not necessarily have to denote an aerial system. Naval drones are also a major area of interest.

And why not? After all, while drone pilots are far less likely to require long-term medical care than a soldier in the field, paying and feeding troops (not to mention taking care of their pensions) is still one of the most expensive aspects of running a military. Additionally, the precision of computerized war brings the frailty of the human element into sharp relief; Britain recently threw up its hands in frustration when it lost 12 of the 26 British drones deployed in Afghanistan, many due to pilot error.
Additionally, exposés like the Collateral Murder video that brought Wikileaks to prominence have stirred up significant criticism for the program. A computer might not shoot at the wrong time, and if it does it will not need therapy afterward. From a purely utilitarian perspective, why not cut the pilots out altogether, if we can? To this question, the US Department of Defense has no answer.

This report looks up to 25 years into the future, beginning by pointing out that the only true autonomy in the US military today is designed to take over during an emergency like a lost connection to control. At most, an autopilot executes a very limited set of instructions under close supervision — say, to fly in a circle over a particular stretch of Pakistani desert and report any movement. Real autonomy, says DoD, would involve recording, playback, projection, and parsing of data, followed by delivery of “actionable” intelligence to the appropriate overseer. For an autonomous combat robot, direct mention of which is mostly avoided in this document, the requirements would be even stricter.
One of the only mentions of kill-bots is a reference to the DoD’s official kill-bot policy, DoD Directive 3000.09 (PDF). This lays out only a few concrete rules beyond basically requiring them to be rigorously tested, though it does make sure to point out that robots should not start indiscriminately killing civilians upon losing a connection to command. Interestingly, all legal language is phrased in relation to a hypothetical human overseer; it’s the humans who launch the robots that are bound by the treaties and the generally agreed upon rules of war, not the robots themselves. This is essentially a “guns don’t kill people” sort of idea, but if a gun is incapable of taking responsibility for an action then perhaps the gun should be restricted from taking that action at all.

Tiny robots have been helping clear explosives and blind corners for years — but they still need human drivers on the ground.

In the end, this comes down to budget constraints. Under the original rules of sequestration, DoD faced up to $500 billion in cuts over the next 10 years, and even with new reforms it could face cuts of as much as $50 billion in 2014 alone. Still, it’s not sequestration that seems to be driving this push for autonomy, but a more general implication that manpower is the bottleneck in, at this point, too many efficiency reports. This report readily admits that a set of algorithms with human-level versatility is but a pipe dream today, but takes it as a foregone conclusion that there is no way to both increase global dominance and decrease spending without significant cuts to (and replacement of) manpower.

“One of the largest cost drivers in the budget of DoD,” it says, “is manpower… Therefore, of utmost importance for DoD is increased system, sensor, and analytical automation…” Though automated drones will certainly cut away at the need for regular soldiers in the numbers seen today, the primary short-term target of these austerity measures is the drone programs themselves. If unmanned systems are about to become the order of the day, then DoD wants to shrink the teams necessary to direct them — preferably to as near zero as possible.

Hands-on: 3D printing in color with Photoshop CC and Shapeways


Photoshop CC showing a 3D-painted Buddha model

3D printing is one of the most powerful new tools in the arsenal of many creatives, so it was only a matter of time before Adobe added support for it to Photoshop. You can get a sense of the capability from our coverage of the initial announcement, but since then I’ve been able to go hands-on and create and print a small, in-color 3D statue using Photoshop CC and the Shapeways printing service. The process was a little trickier than I anticipated, but the statue came out quite nicely.

Creating your model

Initial 3D Buddha model from Thingiverse, imported into Solidworks

The process begins with a 3D model of the object you want to print. Photoshop has support for importing 3D objects, and creating textures on those objects, but it is not really a true 3D modeling tool. So you’re likely to start your project by getting a model from an online site, although you could use one or more of the simple object samples provided with Photoshop. You could also use a tool like the free Sketchup app from Google or the open-source Blender to create your model. Professionals may be willing to pay up for a high-end tool like Solidworks. In my case, I decided to use the same Thingiverse model of a Buddha that our sister site PC Magazine has used to test 3D printers. The model is monochrome, so this being a Photoshop project, the first thing I had to do was give it some color by painting it.

3D Painting

Photoshop’s 3D painting tools may be unfamiliar to many Photoshop users who have only used the program for images. They are much more sophisticated than Photoshop’s traditional image painting tools. Its 3D painting model not only incorporates the texture of the underlying material in how colors are applied, but also lets you set the way paint is applied. Typically you’ll be painting on what Photoshop calls the Diffuse surface, but you can also paint Specular highlights or change the Roughness of the image, for example. The paintbrush tools also have settings for how the paint falls off as the surface curves away from where you are painting. In essence, you can model many of the physical properties of a paintbrush on a 3D surface to create highly realistic objects.

Discovering lighting the hard way

You need to be careful of lighting effects when you go to print -- the print sub-system removes your lights changing the look of your piece

Since almost all 3D printing is done in single color materials, there typically hasn’t been any need to worry about either color or tonal values. As a result, 3D models tend to be lit in a way that makes them pleasing to view online — with the lighting completely ignored when the object is printed. However, since my goal was to print in color, the print driver had to decide how to handle the lighting in the model. I received a nasty surprise when my print preview image was almost black — the driver had literally turned off the lights. Even with Adobe’s help, there didn’t seem to be a way to change that behavior, so I needed to lower the intensity of the lights in Photoshop and completely repaint the image. I would have thought that a simple Curves layer would have accomplished the same thing, but it doesn’t work that way when you’re doing 3D printing.


The process of printing



The actual process of ordering a printed version of my statue was really simple. I had to set up an account on Shapeways, of course, but everything else was point-and-click. One nice feature is that you can rescale your object right from the print dialog. That way, if you are on a limited budget and need to make your item smaller, or if you just never bothered to accurately dimension it in the first place, you can rescale it as needed. To keep it within my $100 budget, I shrank the Buddha down until it was about three inches tall.

3D printed Buddha by David Cardinal
As long as you’re happy with the model once you’ve printed it, you’re all set. However, one drawback of using Photoshop to work on 3D models is that they can no longer be exported back to their native format. You can save them as PSD or TIFF files from Photoshop and keep all the 3D information, but you can’t save them back out in a model format like STL.

Using a 3D printer as a copying machine

3D Buddha scanned back in using 123D Catch

Once I had successfully printed my Buddha, I decided to push the envelope by seeing how well I could build a model of my printed Buddha that would allow me to copy it — essentially using my computer plus a 3D printing service as an object copying machine. Obviously, since my Buddha was a model I could just reprint it, but I wanted to see if I could use 123D Catch from Autodesk to create a 3D model of it as a demonstration of making copies of small — or perhaps scale models of large — objects. Catch allows you to take photographs of an object from all different angles and then merge them into a 3D model. The application is still pretty glitchy, but with perseverance I was able to upload images and generate a model.

My first attempt to simply walk around the Buddha clicking away yielded terrible results. The stitching was off and the model had parts in the wrong places. It’s pretty clear that you couldn’t use this technology to snap away at a statue in a museum, for example, and build your own scale model of it. When I repeated the process, but this time placing my camera in a fixed position and slowly rotating the statue — all against a solid color background — the results were much better. Everything except the top of its head looked great. However, when I tried to fill in the missing pieces with additional shots from above, Catch got very confused. Using a camera and 3D printer to copy objects is clearly still in its infancy, at least with consumer products.

The bottom line

If you’re already a Photoshop user, and partake in 3D printing, the new capabilities in Photoshop CC will make your life a lot easier. In particular, if you’re using a personal printer that requires you to add your own supports to your models before printing, that one feature alone will save you time and the expense of failed prints. If you’re lucky enough to have access to a color 3D printer, then Photoshop adds even more value, as you can use its extensive painting tools on your items. However, since you can’t actually do full 3D modeling in Photoshop, unless you only download models from the web, it won’t replace a 3D modeling tool like Blender, SketchUp or Solidworks in your workflow.


Mozilla embarks on noble mission to speed up the web by bringing JPEG into the 21st century


Firefox logo, intentionally low-quality JPEG


As you probably know, images — in particular JPEGs — make up the vast majority of a web page’s overall size. The other elements — text, stylesheets, scripts — usually account for just a few percent of the total page size. When you load a modern news website like ExtremeTech, it’s not unusual for a single page to consist of a few megabytes of data, most of which is images. If the file size of images could be reduced by just a few percent, huge speed gains and bandwidth savings could be realized — for home and office surfers, but more importantly for people on woefully constrained and metered mobile data connections. Mozilla’s latest effort, mozjpeg, aims to do just that, reducing the size of JPEGs by 10% or more.

The JPEG (Joint Photographic Experts Group) file format has been around since 1992. It wasn’t originally designed for the web (the World Wide Web as we know it didn’t exist until 1993, and didn’t become truly popular until the late ’90s), but it quickly became the de facto standard for web images due to its small file size and acceptably shitty quality. There has been some competition throughout the ages – GIFs, which date back to 1987, were popular in the early days of CompuServe and web, and PNGs, which were developed as an alternative to GIFs in 1996, have done okay — but really, JPEG’s popularity has never significantly waned.

The problem is, JPEG is old. The standard has remained virtually unchanged in over 20 years, despite the fact that the state of the art for compression algorithms is now much more advanced. If JPEG had been designed today, the file size would probably be half that of current JPEGs, while retaining the same image quality. Of course, various groups have tried to introduce new, more efficient file formats — such as Google’s WebP — but they’ve always been hamstrung by a lack of support. Say what you like about JPEG’s shortcomings, but the fact that just about every device and browser in the world can display JPEGs is a huge reason for its continuing reign. What good is a new file format if you can only view it in Chrome? Furthermore, why would a website developer ever use a file format that only 30% of his audience can view? Unseating an incumbent technology is hard.

What you can do, though, is tweak the JPEG compression algorithm slightly. By being clever, you make file sizes a bit smaller, while still retaining compatibility with those billions of JPEG-rendering devices and browsers. Enter mozjpeg. “We wondered if JPEG encoders have really reached their full compression potential after 20+ years,” Josh Aas says on the Mozilla Research blog. “We talked to a number of engineers, and concluded that the answer is ‘no,’ even within the constraints of strong compatibility requirements.”
Average total transfer size, for the top 100 websites

Total image data per web page, from the top 100 websites.

Average image transfer size, for the top 100 websites. As you can see, it’s around 60% of the website’s total transfer size.

Version 1.0 of mozjpeg is a fork of libjpeg-turbo (a popular open-source JPEG library), with Loren Merritt’s jpgcrush functionality built in. Without affecting compatibility, if you use mozjpeg to create your images, you should be able to reduce JPEG file size by a full 10%. If you consider that the average web page has around 1MB of images on it — and that figure is growing by 2-3% every month, thanks to faster internet connections and high-res displays — then a 10% reduction is huge. Over a month, if you primarily use your smartphone for surfing websites, a 10% reduction in JPEG size could equate to hundreds of megabytes saved.
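As a back-of-the-envelope check on that claim, here is the arithmetic using the figures above plus an assumed browsing habit (100 pages a day is purely illustrative):

```python
# Rough monthly savings from a 10% smaller JPEG encoder.
pages_per_day     = 100    # assumed browsing habit (illustrative)
image_mb_per_page = 1.0    # ~1MB of images per page, per the figure above
jpeg_savings      = 0.10   # mozjpeg's claimed size reduction

saved_mb = pages_per_day * 30 * image_mb_per_page * jpeg_savings
print(f"~{saved_mb:.0f} MB saved per month")   # ~300 MB
```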

To make this a reality, image editors — like Photoshop, GIMP, and Fireworks — need to implement this new mozjpeg library. That will take time, but it’s much more realistic than getting every browser to support WebP or another alternative image standard. Mozilla isn’t stopping at a 10% reduction in file size, either — using trellis quantization, and perhaps even more advanced methods, it should be possible to squeeze good ol’ humble JPEG by a few more percentage points.

Sony, with PlayStation Now and Project Morpheus, is our best bet for the future of video games


 PlayStation 9

Last year, Sony released the most powerful game console ever created at a price point lower than the competition. At the beginning of this year, the company announced that it would release a Netflix-style game streaming service sometime this summer. Last night, Sony announced its own virtual reality gaming headset, called Project Morpheus. Now, more than anyone, Sony is leading the way to the future of video games.

The gaming industry — and consumers — have always had two main goals for the medium of video games. The first and more easily achievable goal has always and forever been to make playing a game feel like you’re playing a movie. This goal was arguably achieved back on the PlayStation 3 when Naughty Dog released Uncharted 2: Among Thieves. Regardless of how you feel about the game itself — third-person cover-based shooting, ledge-climbing, hidden collectibles — nothing to date has felt more like fluidly playing through a summer blockbuster. A few years later, the developer showed that it could create a similar movie experience with other genres when The Last of Us felt like playing through an Oscar-nominated drama. Not every video game should aim for this goal, but the goal has always been hanging in the air.

The second and likely more desired goal of video games is just over the horizon: virtual reality. Though virtual reality headsets and peripherals have been around for ages — ask anyone who has been to a fancy arcade, such as Disney Quest in Orlando — they haven’t truly been integrated with consumer-grade products. When the Oculus Rift was funded on Kickstarter in September of 2012 and backers started receiving their development kits, people realized that, actually, these VR headsets aren’t particularly complicated and could now be integrated into consumer products. It’s been a year-and-a-half since the Rift was funded, and it’s still only a dev kit that’s a chore to get working with anything. This doesn’t mean the Rift is a poor device, it just means that a Kickstarter-backed indie developer doesn’t have the manpower or funding to quickly produce and finalize advanced hardware. Sony caught onto the Rift’s popularity and announced its own virtual reality gaming headset, Project Morpheus.
Project Morpheus
Sony’s Project Morpheus.

As our own Grant Brunner noted, Sony just made virtual reality mainstream. So far, we only know a little about the Morpheus. It has a head-mounted 1080p display with a 90-degree field of view, and works in conjunction with accelerometers, gyroscopes, and the PlayStation Camera to deliver a precise VR experience. Both the DualShock 4 and PS Move can be used as controllers; one of the biggest issues with the Rift is that you’re stuck facing the direction of your keyboard and mouse, so it’s tough to spin around and still have control of your game. The Morpheus also allows you to wear your glasses, unlike the current iteration of the Rift. We don’t yet know the price, time frame of release, or even how deep Sony plans to integrate it into the PS4. What we do know, though, is that Sony is the first major game company to head toward video game virtual reality, and that’s still not the only future the company is working toward.

If PlayStation Now is still on track for its summer release, Sony is set to change the way we access games. Netflix already changed the way we access our movies and television, and PlayStation Now is aiming to do the same. It’ll begin with on-demand streaming of PS3 games, which was surprising when announced, because many of us felt the beginning of game streaming would only be able to handle much older titles, not games that were brand new less than a year ago.

Of course, there are reservations. We can’t judge the quality of PlayStation Now until it publicly releases, because even an open beta won’t be dealing with the largest audience possible. It’s also currently impossible to make any official judgment regarding the Morpheus, considering all we can really assume is that it’ll be a Sony-style Oculus Rift that only works with the PS4. However, what we can say is that with its combination of modern game streaming, virtual reality, and the most powerful games machine on the market, Sony is at the forefront of advancing video games toward the goals gamers have dreamed about since the medium’s mainstream inception. We may not have a Matrix- or .hack-like level of immersion just yet, but virtual reality headsets and game streaming are the closest we’ve come, and the first stepping stone toward that level of immersion. Perhaps that’s why Sony went with the name “Morpheus,” the character who shepherded Neo through virtual reality.

What is mesh networking, and why Apple’s adoption in iOS 7 could change the world


The network topology of the internet has been likened to a jellyfish


With iOS 7, Apple snuck in a very interesting feature that has mostly gone unnoticed: Mesh networking for both WiFi and Bluetooth. It also seems that Google is working to add mesh networking to Android, too. When it comes to ubiquitous connectivity, mobile computing, and the growing interest in the internet of things, it is not hyperbolic to say that mesh networking could change the fabric of society. But, I hear you ask, what is mesh networking? I’m glad you asked.

What is mesh networking?


A star topology network. Imagine your home’s WiFi router in the middle, with all of your devices around the outside.

One of the most important factors when discussing networking is topology. In basic terms, the topology describes how the various members (nodes) of a network are connected together. Most small networks (your office, your home) use a star topology, with a central node (a switch/router) connected to a bunch of clients (your laptop, smartphone, Xbox, etc.) The star topology dictates that if one client wants to talk to another (say, you want to send a photo from your laptop to your Xbox), the data must go through the central point (the router).

The internet, in case you’re wondering, is hard to label as a single topology because it’s such a mess of different networks. One proposal says the internet has a jellyfish topology, with a very densely connected core (backbone links between data centers), and long tendrils that represent the sparsely connected ISPs and last-mile connections. The image at the top of the story shows a map of the internet that supports the jellyfish concept.
A mesh topology is where each node in the network is connected to every other node around it. So, if you take the home network star topology, but then allow the smartphone, laptop, and Xbox to talk directly to each other, you have a mesh topology.

A fully connected mesh topology (left) and a partially connected mesh topology (right). Even in the partially connected mesh, every device can still reach every other device.

Why should you be excited about mesh networking?

The key reason mesh networking is exciting is that it doesn’t require centralized infrastructure. If you turn off your WiFi router, chances are your entire home network will cease to work. If you had a mesh network instead, everything would continue to work just fine (assuming the devices are still within range of each other, anyway). If you’ve used Miracast/WiDi to stream video directly from your smartphone/laptop to your TV, then you’ve already dabbled in mesh networking.
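A toy way to see the difference is to model both home networks as adjacency lists and check what is still reachable when the hub disappears (hypothetical device names, plain Python):

```python
# Star: every client only knows the router. Mesh: the clients know each other.
star = {"router": {"laptop", "phone", "xbox"},
        "laptop": {"router"}, "phone": {"router"}, "xbox": {"router"}}
mesh = {"laptop": {"phone", "xbox"},
        "phone":  {"laptop", "xbox"},
        "xbox":   {"laptop", "phone"}}

def reachable(net, start):
    """Every node that can be reached from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(net.get(node, set()) - seen)
    return seen

print(reachable(star, "laptop"))    # all four nodes, via the router
# Knock out the router and the star's clients are stranded:
no_hub = {k: v - {"router"} for k, v in star.items() if k != "router"}
print(reachable(no_hub, "laptop"))  # just {'laptop'}
print(reachable(mesh, "laptop"))    # the mesh still reaches every device
```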
And so we finally get to iOS 7′s mesh networking capabilities, which Apple refers to as Multipeer Connectivity. Google hasn’t said a whole lot about its mesh networking efforts, though Sundar Pichai did mention it a couple of times at SXSW last week, in relation to its Android Wear and home automation efforts. (Read: Ford working on car-to-car wireless mesh network for real-time telemetry, government use.)

With Multipeer Connectivity, iOS 7 can communicate with other iOS 7 devices without a centralized hub (WiFi router, cellular base station). If you’ve used AirDrop, you’ve probably used Multipeer Connectivity. Other than AirDrop, though, this functionality has gone mostly unused — until an app called FireChat hit the App Store this week.
AirDrop 
FireChat is basically an app that lets you chat with other FireChat/iOS 7 users. The key difference, though, is that FireChat is fully decentralized and peer-to-peer — so, if you have two iPhones that are in Bluetooth or WiFi range of each other, they can communicate directly, without sending any data through a WiFi router or the internet. This is obviously rather useful, if you want to communicate privately, or want to transfer sensitive data.

Mesh networking is a game-changer

What’s interesting, though, is that iOS 7’s Multipeer Connectivity apparently allows for the chaining of peer-to-peer connections. So, for example, if Alice is connected to Bob, and Bob is connected to Carol, Alice and Carol can send messages to each other. Apparently, according to Cult of Mac, this chain can be indefinitely long — so, you might construct a chain of 10 or 25 or 50 devices. As long as each device stays within WiFi range of its neighbors in the chain, they can all communicate with each other. Furthermore, if one of those devices has an internet connection, every other member of the mesh can share that connection. You might imagine using this to extend internet access to rural or out-of-the-way (underground) locations — but I think installing a few WiFi repeaters is probably a more graceful solution than leaving an iPhone sitting on a chair somewhere.
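Here is a small sketch of what that chaining amounts to: finding a hop-by-hop path through whichever peers happen to be in radio range of one another. It models the idea only, not Apple’s Multipeer Connectivity API, and the peers (including “Dave” and his internet uplink) are hypothetical:

```python
from collections import deque

# Who is in direct WiFi/Bluetooth range of whom (hypothetical peers).
links = {"Alice": {"Bob"},
         "Bob":   {"Alice", "Carol"},
         "Carol": {"Bob", "Dave"},
         "Dave":  {"Carol"}}        # pretend only Dave has an internet uplink

def route(links, src, dst):
    """Breadth-first search for the shortest chain of peers from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in links[path[-1]] - seen:
            seen.add(peer)
            queue.append(path + [peer])
    return None                     # no chain of peers connects them

print(route(links, "Alice", "Carol"))  # ['Alice', 'Bob', 'Carol']
print(route(links, "Alice", "Dave"))   # Alice reaches the uplink in three hops
```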

Still, Apple’s inclusion of mesh networking in iOS 7 is an exciting indicator of things to come. For now, it’s just AirDrop and apps like FireChat — but tomorrow, it’s easy to see how your iPhone, Apple TV, MacBook, and the other internet-of-things devices around your home might use mesh networking to communicate with each other. Truly decentralized networking, especially if you throw in some cryptography, is one of the most disruptive technologies that you can imagine. If mesh networking takes off and the world’s billion smartphones suddenly start chattering to each other, I guarantee that you will see some mind-blowingly killer applications in the next few years.

Ubuntu 14.04 final beta download: A much-needed upgrade for LTS users


Ubuntu 14.04 desktop, with borderless windows


The next version of the world’s preeminent Linux distro, Ubuntu 14.04 LTS, is almost upon us. Late last night, the final beta of 14.04 Trusty Tahr (a tahr is a wild goat native to Asia) was released, with the final build due on April 17. Trusty Tahr is the first long-term support (LTS) build of Ubuntu in two years, and thus contains a lot of exciting features that thousands (millions?) of Ubuntu 12.04 users can’t wait to get their hands on.

Because Trusty is an LTS, most of the changes are fairly conservative in nature. Unity 7 is still there. Mir, the new graphics stack being developed by Canonical that is due to eventually replace the X Window System, is still a long way off. Despite Canonical’s Mark Shuttleworth saying that Ubuntu 14.04 would include the Touch/Mobile builds, it appears they won’t make it into the final release. (Canonical has revised its estimate for the first Ubuntu smartphones to the third quarter of 2014, so there’s still a little time to polish things up.) For the big changes, you’ll be waiting for Ubuntu 14.10 (or likely even later for Mir). (Read: Ubuntu: Wake up and smell the Unity against you.)
Borderless windows in Ubuntu 14.04
Look at those beautiful borderless windows and rounded corners!

Locally integrated menus in Ubuntu 14.04

So, what is new in Ubuntu 14.04? There is finally the option for locally integrated menus (LIM) in an app’s title bar, instead of forcing the app’s menu to appear at the top of the screen (enable it in the new Unity Control Center). There’s a new Unity lock screen. You now have the option of minimizing apps from the launcher (and launcher icons can be made much smaller, too). Windows are now completely borderless, rather than bounded by a one-pixel black line. The shift from Compiz to GTK3 means window corners are now antialiased — oh, and resizing windows in Ubuntu 14.04 now occurs in real time.

Moving down the list of importance: Ubuntu 14.04 also improves support for high-resolution displays, TRIM is enabled by default for Intel and Samsung SSDs, Nvidia Optimus support is improved, and you can pump the system volume up above 100%. All of the default applications have been updated to their latest stable versions (Firefox 28, LibreOffice 4.2.3, Nautilus 3.10.1, etc.), and it rocks Linux kernel 3.13.
This video from WebUpd8 shows most of Ubuntu 14.04's new features, but be sure to turn your sound down before pressing play.


Overall, Ubuntu 14.04 is a surprisingly pleasant operating system. It feels very polished, especially for a Linux distro. If you’ve been using 12.04 for the last couple of years, 14.04 will feel like a sizable step up. The question, though, is whether Canonical should even be putting much time into desktop builds of Ubuntu — the desktop PC is undoubtedly on its way out, and I’m not entirely sure what role Canonical can play on other form factors. It might be able to gain some traction on TVs, but I’m fairly certain that mobile has already been sewn up tight by Android (also a Linux distro) and iOS. (I’m looking at you too, Firefox OS.)

Download Ubuntu 14.04 Trusty Tahr. For the first time, every flavor of Ubuntu 14.04 (Desktop, Server, Edubuntu, Lubuntu, etc.) has been approved for LTS status, meaning they’ll all be supported for a minimum of three years, and some of them will be supported for five.


GE introduces awesome MEMS switch tech for faster LTE-Advanced


GE MEMS switch on a dime


This week, GE Global Research announced it had developed a brand new switch technology that can be used for enhancing the radios used in 4G phones to provide much faster speeds. Called a “MEMS switch,” it promises to drastically improve the efficiency and performance of radio signals.
To understand how this improves LTE performance, we need to understand how this MEMS (microelectromechanical system) switch fits into the equation. In the radio chain (comprising the antenna, filters/duplexers, power amplifiers, transceiver, and the baseband), RF switches are used everywhere. RF switches are used to support multiple bands within a frequency range. For example, an antenna for 1710-2170 MHz would have switches to support 2.1GHz (IMT, 3GPP Band 1), 1.9GHz (PCS, 3GPP Band 2/25), 1.8GHz (DCS, 3GPP Band 3), and 1.7+2.1 GHz (AWS, 3GPP Band 4/10).
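As a crude illustration of the switching job, the sketch below routes a requested band to the right filter/duplexer path for that one antenna. The mapping and function are hypothetical, purely to show why every band a phone adds means more switching:

```python
# Hypothetical switch routing for a single 1710-2170MHz antenna path.
BAND_TO_PATH = {
    1:  "2100MHz duplexer (IMT)",
    2:  "1900MHz duplexer (PCS)",
    25: "1900MHz duplexer (PCS)",
    3:  "1800MHz duplexer (DCS)",
    4:  "1700/2100MHz duplexer (AWS)",
    10: "1700/2100MHz duplexer (AWS)",
}

def select_path(band):
    """Return which filter path the antenna switch should connect for a 3GPP band."""
    if band not in BAND_TO_PATH:
        raise ValueError(f"Band {band} is not served by this antenna")
    return BAND_TO_PATH[band]

print(select_path(4))   # 1700/2100MHz duplexer (AWS)
```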


RF switches in radios typically leak some of the signal out, causing interference and signal quality degradation before it even leaves the phone (or in the case of received signal, before it even reaches the baseband for processing). With older technologies (like GSM, CDMA, and UMTS), this generally resulted in lower coverage (i.e. fewer bars). There was also a performance drop, but the coverage was the main issue.

With LTE, though, you suffer a lot more on performance. This is largely because there are a lot more complex radio techniques being used to improve capacity and latency, and these techniques require much more sensitive radio equipment. Consequently, that leakage that only hurt a little bit before will hurt a lot now.

Current RF switches are transistor-based, meaning that they are semiconductors. GE’s MEMS switches are the closest you can get to a wire-based switch: they are metal-to-metal contact switches actuated by electrostatic forces. As a consequence, these switches are about as lossless as switches get. Also, GE’s technology is designed to have better inherent isolation for each switch within an array of switches, so that when a switch is flipped, nothing leaks over from one side of the switch to the other (reducing or eliminating potential interference issues from triggering multiple conditions at once).
GE MEMS switch wafer
A wafer of GE MEMS switches

With a high degree of isolation, linearity (degree to which the component does not affect the signal being carried), and a very low degree of loss across a chain of switches (which is critical for signal transference across the radio chain in mobile devices), RF switches based on GE’s MEMS technology would be perfect for improving the base hardware that all cellular networks run on. It also helps that it’s quite small! (In the top image, the MEMS switch is shown on top of a US dime, which is 18mm across.) Low-throw-count switches (2-6 count) are comparable to the currently used technology in size (silicon on insulator, or SOI for short), but high-throw-count switches (12+) would be significantly smaller than any of its competitors. That enables smaller form factors and reduced costs on PCB construction.
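To see why per-switch loss matters so much, remember that insertion losses in decibels add along the chain, so even small per-switch losses compound. The dB figures below are invented for illustration and are not GE’s or SOI’s published specs:

```python
# Illustrative only: how per-switch insertion loss compounds along a chain.
def chain_loss_db(per_switch_db, n_switches):
    return per_switch_db * n_switches        # dB losses add in series

def power_delivered(loss_db):
    return 10 ** (-loss_db / 10)             # fraction of signal power left

for name, loss in [("lossier switch", 0.8), ("near-lossless switch", 0.1)]:
    total = chain_loss_db(loss, 5)           # assume five switches in the chain
    print(f"{name}: {total:.1f} dB total loss, "
          f"{power_delivered(total):.0%} of the power delivered")
```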

The result of this improved technology is that the signal goes in and out of the device much more cleanly. As GE mentions in its announcement, less distortion and leakage leads to a cleaner signal that can be processed and used with more advanced radio techniques like higher-order MIMO and non-contiguous carrier aggregation. It also allows for more sensitive radios to be used. This is critical for LTE-Advanced, as those technologies are the core of how it offers super-fast broadband connectivity.


Now, MEMS has been around for a long time. The benefits of MEMS switches are not new. However, what is new is GE’s production process involving a metal alloy that makes it possible to use it in low-power environments like smartphones as well as higher-power environments like cell towers. This “secret sauce,” as GE Ventures’ Chris Giovanniello called it, is what makes it usable for these environments. Giovanniello further noted that since GE does not participate in the wireless industry, it will be looking to license out its technology to companies that do (such as network gear and smartphone vendors) to enable wide adoption of “GE Metal MEMS” RF switches.
As of right now, no one has announced any partnerships to use the technology, but it wouldn’t be surprising if the next generation of LTE-Advanced network gear and devices used the technology to enable much better performance in the network.




How satellites tracked down flight MH370 – but why we still can’t find the plane (updated)


MH370 search and rescue, helicopter and ship


Updated @ 11:10 March 27: Thailand’s Thaichote satellite has spotted another 300 objects in the Indian Ocean, about 200 kilometers (120 miles) south of the objects spotted by the French satellite. This new imagery was captured on March 24, one day after the French data. Earlier today, the 11 search-and-rescue aircraft were called off after just a couple of hours due to bad weather and zero visibility. We still haven’t physically located any of the objects spotted by satellites — and due to bad weather and strong currents, it may be some time until we finally track down the debris of flight MH370.
300 new objects, spotted by the Thaichote satellite

The MH370 search area, on March 27 [Image credit: BBC]

Updated @ 10:45 March 26: 122 objects, possibly debris from flight MH370, have been identified in new satellite imagery captured by the French company Airbus Defence and Space. The objects are up to 23 meters (75 feet) in length, and are spread out over an area of 400 square kilometers. Australian search-and-rescue planes today checked the areas highlighted by the satellite imagery, but left without finding anything. There is still no sign of oil slicks or floating debris that would help pinpoint the wreckage of flight MH370. As you can see in the image below, we’re searching tens of thousands of square kilometers for signs of debris — using just seven military and five civilian planes, and a few ships (but they cover a very small area, very slowly).
Suffice it to say, I would not be surprised if we never find the remains of flight MH370.

The 122 new bits of possible MH370 debris
The original story, about how we tracked flight 370 to its crash landing in the Indian Ocean, continues below.

Yesterday morning, the Malaysian prime minister confirmed that Malaysia Airlines flight 370 crashed in the south Indian Ocean, killing all 239 people on board. Curiously, though, despite the PM’s confidence, this conclusion is based entirely on second-hand information provided by UK satellite company Inmarsat. There is still no sign of debris from MH370, and investigators still have absolutely no idea what happened after the final “All right, good night” message from the cockpit. If you’ve been following the news, you probably have two questions: How did Inmarsat narrow down MH370′s location from two very broad swaths across central Asia and the Indian Ocean, and furthermore, if we know where the plane crashed into the ocean, why haven’t we found it yet?

How Inmarsat tracked down flight MH370

After flight MH370′s communication systems were disabled (it’s still believed that they were disabled manually by the pilots, but we don’t know why), the only contact made by the plane was a series of pings to Inmarsat 4-F1, a communications satellite that orbits about 22,000 miles above the Indian Ocean.
The initial Inmarsat report, which placed MH370 along two possible arcs, was based on a fairly rudimentary analysis of ping latency. Inmarsat 4-F1 sits almost perfectly stationary above the equator, at 64 degrees east longitude. By calculating the latency of MH370′s hourly satellite pings, Inmarsat could work out how far away the plane was from the satellite — but it couldn’t say whether the plane went north or south.
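To get a feel for the arithmetic (this is a back-of-the-envelope sketch of my own, not Inmarsat’s actual processing, which also has to account for ground-station hops and equipment delays), each ping’s round-trip time pins the plane to a ring of constant distance from the satellite:

    # Rough sketch of the ping-latency idea (illustrative only, not Inmarsat's
    # actual processing chain).
    C_KM_PER_S = 299_792.458  # speed of light

    def slant_range_km(round_trip_s, equipment_delay_s=0.0):
        """One-way distance from satellite to aircraft, from a ping's round trip."""
        return (round_trip_s - equipment_delay_s) / 2 * C_KM_PER_S

    # A geostationary satellite sits roughly 35,800 km above the equator, so a
    # ping from a plane directly beneath it takes about 0.239 seconds up and back.
    print(slant_range_km(0.239))   # ~35,800 km: plane near the sub-satellite point
    print(slant_range_km(0.255))   # ~38,200 km: plane sits on a much wider ring

Every point on the Earth’s surface at the same slant range forms a circle, which is why this first pass could only produce two broad arcs rather than a single position.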
Inmarsat, flight MH370 satellite communications radius
A map showing the location of Inmarsat 4-F1, which received Satcom pings from MH370, and the plane’s radius from the satellite (calculated from the “ping” round-trip time).

Inmarsat's global coverage

Inmarsat’s global coverage. The satellite that tracked flight MH370 is shown in purple.
To work out which direction flight MH370 took, Inmarsat, working with the UK’s Air Accidents Investigation Branch (AAIB), says it used some clever analysis of the Doppler effect. The Doppler effect describes the change in frequency (the Doppler shift) of a sound, light, or radio source as it travels towards the listener, and then again as it moves away. The most common example is the change in pitch of a police or fire truck siren as it passes you. Radio waves, such as the pings transmitted by flight MH370, are subject to the Doppler effect too.

Basically, Inmarsat 4-F1′s longitude wobbles slightly during its orbit. This wobble, if you know what you’re looking for, creates enough variation in the Doppler shift that objects moving north and south have slightly different frequencies. (If it didn’t wobble, the Doppler shift would be identical for both routes.) Inmarsat says that it looked at the satellite pings of other flights that have taken similar paths, and confirmed that the Doppler shift measurements for MH370′s pings show an “extraordinary matching” for the southern projected arc over the Indian Ocean. “By yesterday [we] were able to definitively say that the plane had undoubtedly taken the southern route,” said Inmarsat’s Chris McLaughlin.
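To put a rough number on it, here is a minimal sketch of the basic Doppler relationship (my own illustration, assuming a roughly 1.6GHz L-band carrier; the real analysis models the satellite’s motion and the aircraft’s track in far more detail):

    # Basic Doppler-shift arithmetic (illustrative only). The carrier frequency
    # below is an assumed L-band value, not Inmarsat's exact channel.
    C_M_PER_S = 299_792_458
    CARRIER_HZ = 1_600_000_000  # assumed ~1.6 GHz L-band uplink

    def doppler_shift_hz(radial_velocity_m_s, carrier_hz=CARRIER_HZ):
        """Positive radial velocity = aircraft closing on the satellite."""
        return carrier_hz * radial_velocity_m_s / C_M_PER_S

    # A 777 cruising at ~230 m/s (about 450 knots):
    print(doppler_shift_hz(+230))   # about +1.2 kHz when flying towards the satellite
    print(doppler_shift_hz(-230))   # about -1.2 kHz when flying away

The satellite’s slight wobble adds or subtracts only a small extra shift on top of values like these, which is why Inmarsat had to calibrate its model against other flights on known routes before it could tell north from south.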

MH370, Australian satellite imagery of possible plane debris
A satellite spotted some possible debris off the coast of Australia — but by the time airplanes arrived to check out the scene, the debris had gone.

So, where is flight MH370?

At this point, if we assume that Inmarsat knows what it’s doing, we know with some certainty that flight MH370′s last satellite ping originated from around 2,500 kilometers (1,500 miles) off the west coast of Australia. Because we know how much fuel the Boeing 777 was carrying, we know that it probably ran out of fuel sometime after that last ping, crashing into the Indian Ocean. Assuming the plane was flying at around 450 knots (517 mph, 833 kph), the potential crash zone is huge.
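How huge? Here’s a rough, illustrative calculation of my own, assuming the plane could have flown for up to an hour past its final ping in an unknown direction:

    # Back-of-the-envelope crash-zone estimate (illustrative assumptions only).
    import math

    speed_kph = 833              # ~450 knots, as quoted above
    hours_after_last_ping = 1.0  # assumed upper bound before fuel exhaustion

    radius_km = speed_kph * hours_after_last_ping
    area_km2 = math.pi * radius_km ** 2
    print(f"{radius_km:.0f} km radius, ~{area_km2 / 1e6:.1f} million square km")
    # -> 833 km radius, ~2.2 million square km of ocean

Even narrowing that circle down to a band around the final ping arc still leaves an enormous stretch of open ocean for a dozen aircraft to cover.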
The southern Indian Ocean is one of the most inhospitable and remote places on Earth. Its distance from major air and navy bases makes it one of the worst possible places to carry out a search-and-rescue mission. Even if satellite imagery purports to show debris from flight 370, waves, weather, and ocean currents mean that the debris will be constantly moving. “We’re not searching for a needle in a haystack,” said Mark Binskin, vice chief of the Australian Defence Force. “We’re still trying to define where the haystack is.”

Multiple nations are sending search-and-rescue aircraft and ships to the region to look for flight 370, and the US is deploying its Towed Pinger Locator — a device that can locate black boxes at depths of up to 20,000 feet (6,100 meters). The flight data recorder (FDR) and cockpit voice recorder (CVR) generally only have enough battery power to ping for a month or so, so time is of the essence.

What happened to flight MH370?

An airplane black box (they’re not actually black, incidentally)


So, the million-dollar question remains: What series of events led to Malaysia Airlines flight 370 ending up in the Indian Ocean?

There appear to be two likely options. The most pertinent point still seems to be that the plane’s ACARS (automated reporting system) was manually disabled. This would indicate either that the plane was hijacked, or that the ACARS had to be disabled for some other reason (such as a fire). It’s possible that there was some kind of disaster on board that killed or disabled everyone, and the plane continued on autopilot until it ran out of fuel. It’s also possible that the plane was hijacked (perhaps by a passenger or one of the pilots), and whoever was in control flew the plane on some kind of suicide mission.
Neither of these explanations quite rings true, but really, given the dearth of information, they’re the best we can do. At this point, though, we should be terrified of another eventuality: Given where the plane crashed, we may never find the flight data recorder (FDR) or cockpit voice recorder (CVR) — theorizing about the fate of flight 370 might be all we can ever do.

HTC One M8 revealed: Snapdragon 801, Duo Camera, 5-inch screen, aluminum unibody


HTC One M8

The new HTC One M8 has leaked more times than any flagship phone in recent memory, but HTC still took the stage and announced the device like nothing was out of the ordinary. Even a cursory glance would tell you this is a successor to the original One with its unibody aluminum shell and humongous front-facing speakers. The HTC One certainly has the makings of a flagship phone, but does it have what it takes to compete with the Samsung Galaxy S5?

The M8 has a more rounded design than last year’s One (M7), which honestly might have been a little too angular to hold comfortably. It has a hairline brushed finish, with even more of the body composed of metal, something HTC is quite proud of. In fact, the more extensive unibody design means fewer pieces are used in the assembly process.

HTC has packed the new One with some killer hardware, including a quad-core Qualcomm Snapdragon 801 clocked at 2.3GHz, 2GB of RAM, and 16/32GB of storage with a microSD card slot. The LCD screen has been bumped up to 5 inches, but remains at 1080p resolution. That works out to 440 pixels per inch, down from 469 on last year’s One with its 4.7-inch LCD. That’s still more than enough to make the pixels all but invisible to even the most eagle-eyed user. The light from the LCD is also used to illuminate the holes in HTC’s new DotView case, which offers a way to see notifications without waking up the entire phone (just the screen, for a few seconds).
DotView
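For the curious, those pixel-density figures are just the screen’s diagonal pixel count divided by its diagonal size:

    # Pixel-density arithmetic behind the quoted figures.
    import math

    def pixels_per_inch(width_px, height_px, diagonal_inches):
        return math.hypot(width_px, height_px) / diagonal_inches

    print(pixels_per_inch(1920, 1080, 5.0))  # ~440 ppi (HTC One M8)
    print(pixels_per_inch(1920, 1080, 4.7))  # ~469 ppi (last year's One M7)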

Above and below the screen are HTC’s front-facing BoomSound speakers — unequivocally better than the tinny mono speakers you’ll find on every other smartphone. The bass and clarity of BoomSound on the M7 were excellent, and HTC says the M8 is even better. It’s 25% louder this time and has a multi-band amplifier for more accurate tunes.

Around back is likely the most notable addition to the HTC One M8 — two cameras. If you’re having flashbacks to the disastrous HTC Evo 3D, just take a breath and relax. The Duo Camera system isn’t there to take stereoscopic images, but rather to attach depth information to each pixel captured by the main camera (another 4MP Ultrapixel sensor) — a lot like a mini Kinect, actually. This allows you to (kind of) change the focus of an image after you take it — the phone knows which pixels are further away and which are closer, so the software can apply a blur filter accordingly. It remains to be seen if this is the same kind of after-the-fact focusing promised by Pelican and Lytro, but we suspect it isn’t.
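HTC hasn’t detailed its processing pipeline, but conceptually the trick is simple enough to sketch: blur each pixel in proportion to how far its recorded depth is from the depth you want in focus. The snippet below is a purely hypothetical illustration of that idea (using numpy and scipy for the blur), not HTC’s actual algorithm.

    # Hypothetical depth-based refocus sketch (not HTC's implementation).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def refocus(image, depth_map, focal_depth, max_sigma=6.0, layers=8):
        """image: HxWx3 float array; depth_map: HxW array normalized to 0..1.
        Pixels whose depth is far from focal_depth receive a stronger blur."""
        out = np.zeros_like(image)
        levels = np.linspace(0.0, 1.0, layers)
        half_step = 0.5 / (layers - 1)
        for level in levels:
            sigma = max_sigma * abs(level - focal_depth)
            # Blur the whole frame by this layer's strength (spatial axes only)...
            blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
            # ...then keep it only for pixels whose depth falls in this layer.
            mask = (np.abs(depth_map - level) <= half_step + 1e-9)[..., None]
            out = np.where(mask, blurred, out)
        return out

A real implementation would need edge-aware filtering and occlusion handling, but the basic idea (depth in, per-pixel blur strength out) is what lets the M8 “refocus” after the shot.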

HTC is launching its flagship device with Android 4.4.2 KitKat, the current version of the platform until Google rolls out an update (probably this summer). Along with KitKat, buyers will get the new Sense 6 skin that adds HTC’s custom apps and services. The UI has been flattened out even more and the colors are a bit more vibrant. HTC is making use of the transparent status and navigation bars to introduce some more colors in its apps — and yes, there are software buttons in the navigation bar now.
Sense 6

Sense 6 includes a new version of BlinkFeed, HTC’s home screen news feed. There are more options and services built in, and the company has added the BlinkFeed Launcher to Google Play so it can be updated over time. There are also a few other HTC services popping up in the Play Store today, which is very encouraging. It’s the same thing Motorola is doing with its newer devices, and it has allowed that company to make substantial improvements without pushing a full OTA update. The M8 has a 2600 mAh battery and HTC says the improved power saving mode in Sense 6 can extend battery life by up to 40%. That’s a pretty bold claim that needs some testing.

The HTC One M8 is the company’s last stand — if the new One can’t challenge the Samsung Galaxy onslaught, HTC might not have the chance to launch another flagship smartphone. HTC is giving it a good shot, though. The new One is going to launch in more than 100 countries in the first few weeks of April, and it can be purchased through several US carriers today.

Apple’s A7 Cyclone CPU detailed: A desktop class chip that has more in common with Haswell than Krait


Apple A7 SoC


Some six months after Apple shocked the world with its 64-bit A7 SoC, which appeared in the iPhone 5S and then the iPad Air, we finally have some hard details on the Cyclone CPU’s architecture. It seems almost every tech writer was wrong about the A7: The CPU is not just a gradual evolution of its Swift predecessor — it’s an entirely different beast that’s actually more akin to a “big core” Intel or AMD CPU than a conventional “small core” CPU.

These new details come from Apple’s recent source code commits to the LLVM project. For some reason, Apple waited six months before committing the changes (the Swift core was committed very close to its release). The files clearly outline the name of the CPU’s microarchitecture (Cyclone), and all of the key details that ultimately dictate the CPU’s performance, power consumption, optimal usage scenarios, and ability to scale to higher clock speeds.
Code snippet from LLVM, showing Apple’s Cyclone core microarchitecture

To begin with, Cyclone is very wide. It can decode, issue, and retire up to six instructions per clock cycle. By way of comparison, Swift and Krait (Qualcomm’s current mobile CPU core) can’t do more than three concurrent operations. There is also a massive 192-entry re-order buffer (ROB) — the same size as Haswell’s ROB, which makes sense given that both cores make heavy use of out-of-order execution (OoOE).
The branch mispredict penalty goes up slightly, but interestingly there’s a range of penalties, from 14 to 19 cycles — the same range as Intel’s newer CPU cores (Sandy Bridge and later).
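To see why width and a deep re-order buffer matter, here is a toy scheduling sketch of my own (idealized single-cycle ops and perfect out-of-order lookahead, not a model of any real core): a wide core finishes a batch of independent operations in far fewer cycles, but no amount of width helps a serial dependency chain.

    # Toy out-of-order scheduling illustration (idealized, single-cycle ops).
    from collections import defaultdict

    def cycles_needed(deps, width):
        """deps[i] lists the earlier instructions that instruction i depends on."""
        finish = {}
        issued_in_cycle = defaultdict(int)
        for i, d in enumerate(deps):
            cycle = max((finish[j] for j in d), default=0)
            while issued_in_cycle[cycle] >= width:   # wait for a free issue slot
                cycle += 1
            issued_in_cycle[cycle] += 1
            finish[i] = cycle + 1                    # one-cycle latency
        return max(finish.values())

    independent = [[] for _ in range(24)]                  # 24 independent ops
    chain = [[i - 1] if i > 0 else [] for i in range(24)]  # each op needs the previous one

    print(cycles_needed(independent, width=3))  # 8 cycles on a 3-wide core (Swift/Krait-like)
    print(cycles_needed(independent, width=6))  # 4 cycles on a 6-wide core (Cyclone-like)
    print(cycles_needed(chain, width=6))        # 24 cycles: width can't break a serial chain

The 192-entry ROB is what lets the hardware look far enough ahead in the instruction stream to actually find that independent work.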
Apple Cyclone CPU block diagram
Apple Cyclone CPU block diagram [Image credit: Anandtech]
 

On the actual number-crunching side of things, Cyclone is seriously beefy: It has four integer ALUs (up from Swift’s two), two load/store units (up from one), two branch units (up from one), and three FP/NEON units. Working together with the six decoder units and 192-entry ROB, Cyclone can sustain three FP/NEON adds in parallel per clock. To accommodate all of this beastliness, Cyclone doubles the instruction and data caches to 64KB each (per core).
In short, Cyclone is a serious CPU. In the words of Anandtech, “With six decoders and nine ports to execution units, Cyclone is big… bigger than anything else that goes in a phone.” When Apple announced the A7 SoC, one of the slides said it had a “64-bit desktop-class architecture” — and now we know that wasn’t just marketing hyperbole. Where Swift was very similar to Krait and other mobile ARM cores, Cyclone is a big departure from the usual thin-and-light approach of building mobile CPUs.

Apple A7 SoC slide, showing “desktop-class” architecture

The question, of course, is why. As with octa-core mobile chips, there simply aren’t many mobile applications that can take advantage of a big, hot CPU core. This will change eventually, as battery tech improves and mobile computing continues to grow in popularity, but it won’t happen in the short term.

So, perhaps a better question to ask is: What’s Apple’s long-term plan for its A-series SoCs? Presumably the A8, which should debut with the iPhone 6 in September, will be big, wide, powerful, and power-hungry as well. If the A8 makes the jump to 20nm at TSMC, which is likely, we can expect a clock speed bump and other refinements that will further improve performance. It’s worth noting that, despite being a big core, Cyclone doesn’t appear to consume any more power than Intel’s Silvermont or Qualcomm’s Krait — probably because it’s clocked slower, and because its beefy performance allows it to finish tasks more quickly and thus enter a low-power state sooner — aka “race to sleep.”

Still, though, why the sudden shift towards a big core, when everyone else is still focusing on smaller cores? The only sensible answer, in my opinion, is that Apple is thinking far ahead to the future. It’s clear that more and more of our computing time is being spent on smartphones and tablets, so it stands to reason that more complex, classically desktop-oriented tasks will slowly make the jump to mobile. Imagine if Adobe released some kind of iOS app that processed massive 20-megapixel Raw images from your DSLR — suddenly, Cyclone and its successors make a lot of sense.
Or, of course, maybe Apple is eventually planning to use its A-series chips in its MacBooks as well — a possibility that I discussed way back in 2011. Apple did describe the A7 as “desktop-class” after all. Watch out, Intel!