Wednesday 7 May 2014

Scientists turn skin cells into sperm cells, but raise provocative new questions


Sperm tubes


Scientists can now reprogram cells from the skin of an adult to turn into cells that would be at home in any organ of the body. An exception to this has been that making germ cells — the egg and sperm — still requires a fully developed human. New research, just published in the journal Cell Reports, suggests that even these special cells can be created in the lab. What this really means is that the limitations to further progress are no longer scientific or technological in nature. Instead, the limits have become the problem of a new kind of engineering — ethical engineering.

Sperm

In our age, getting pregnant can often be harder (and more expensive) than avoiding pregnancy. While men shoot blanks for a variety of reasons, inherited genetic defects and chemotherapy are common ones. The ability to convert the skin cells of these men into sperm — thus reopening the doors to biologic fatherhood — would be nothing short of miraculous. But to make mature sperm cells, you need to do more than just arrange things properly inside the cell or its nucleus. In addition, you need to coddle the cells in the proper environment, which prods them continuously with formative external influences.

Spermcell

Among the developmental tricks nature uses to rough out many organs and tissues, a particular favorite is for localities of cells to coalesce and essentially hack out for themselves an interior fluid space known as a lumen. The cells that line these hollow lumens, whether they are to become the ventricles of the brain, or perhaps future sensory organs or air passages, typically sport hair-like cilia that wave like sea creatures permanently affixed to the sea floor. With just a few changes to this basic plan, you get sperm cells lining the insides of the seminiferous tubules in the testes, with their tails projecting into the inner space.

To sidestep the procedural risk and ethical unknowns of injecting rebuilt sperm precursors into the testes of men, the researchers did something a little different. After harvesting the men’s skin cells and transforming them into all-powerful stem cells, they injected them into the testes of mice. There was evidence this plan might work because mice have previously been bred from other mice using skin cells that have been transformed into both eggs and sperm. Using the human cells, the researchers were able to grow cells that went on to become sperm cell protégés inside the mice. They’re not quite fully developed sperm with tails, but rather, are immature apprentices to sperm with all the molecular hallmarks of potential.

These hallmarks are basically protein or nucleic acid markers that the cell produces when dedicated to any particular fate. They are often visualized using what molecular biologists refer to as a "heat map." These maps are basically colorized matrices that indicate the relative abundance of different things in the cell. Mentioning a heat map to your genetic counselor will let them know you are not one to be trifled with. It's kind of like asking a potential mechanic if they can read U281 OBD2 codes for CAN VW Audi.
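To make that concrete, here is a minimal sketch of how such a heat map might be drawn with matplotlib; the gene names, cell states, and abundance values are invented purely for illustration and are not data from the study.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical marker panel and cell states -- illustrative only.
    genes = ["DAZL", "VASA", "PLZF", "NANOG"]
    samples = ["skin", "stem cell", "germ-like"]
    abundance = np.array([
        [0.1, 0.3, 2.9],   # rows: genes, columns: samples
        [0.0, 0.2, 3.5],
        [0.1, 0.4, 2.1],
        [2.2, 3.0, 0.5],
    ])

    fig, ax = plt.subplots()
    im = ax.imshow(abundance, cmap="viridis", aspect="auto")
    ax.set_xticks(range(len(samples)))
    ax.set_xticklabels(samples)
    ax.set_yticks(range(len(genes)))
    ax.set_yticklabels(genes)
    fig.colorbar(im, label="relative abundance")
    plt.show()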

Heat map

It doesn't require too much imagination to see potential concerns with this new capability. Bioethicists have pointed out that little would stand in the way of someone lifting a hair from, say, George Clooney, and running off to the lab to make Clooney clones. It is also conceivable that samples from the long dead could be used; a husband killed at war could still become a father. Talk of establishing new human rights, rights of consent, or of criminalizing such activities may be premature without any ethical infrastructure to support them. For example, "ownership" of DNA — particularly for something like the DNA in our mitochondria — would be tough to establish definitively. We inherit these sequences from our mothers nearly unchanged, and they inherited them from their mothers in turn. Many people, especially relatives, share very similar sequences.
An innovative group of medical ethicists, The Academy of Medical Ethics in Bio-Innovation, has been evaluating the clinical and ethical issues this breakthrough brings to the field of medicine. The President and Director of AMEBI, Ayden Jacob, stated that "we are witnessing how life can not only be altered via genetics and engineering, but how life can be created from the origins of matter itself. This breakthrough causes us all to reassess our definitions of the genesis of life, and has important implications for nearly every domain in medicine."

The idea that mere rules can govern all the ethical situations that arise seems futile. Like any engineering discipline, this one will need physical instruments, structures, and protocols designed, implemented, and operated with an as-yet-unimagined cohesion. Properly constructed interdependencies among new civil, biomedical, and social tools may both limit and enable, according to precedent and consensus laid down seemingly as fast as it can be advertised. A new ethical science, then, may include rules that do not deal directly with times and places, people or events. Instead, it will deal with what we might recognize as the core problem in ethics: predicting and dealing with contradictions in general. A science of contradictions, in which they are not just expected but sought, may be more efficient and practical than trying to microlegislate specific instances as they arise.

You can finally watch a live video feed of Earth from space, and it’s awesome

International Space Station over Libya, as seen by HDEV live stream

More than 13 years after the International Space Station was first continuously inhabited, it is finally possible to log into Ustream and watch the Earth spinning on its axis in glorious HD. This video feed (embedded below) comes from four high-definition cameras, delivered by last month's SpaceX CRS-3 resupply mission, that are attached to the outside of the station. You can open up the Ustream page at any time, and as long as it isn't night time aboard the ISS, you'll be treated to a beautiful view of the Earth from around 250 miles (400 km) up.
Updated @ 12:15 May 5: Unfortunately it seems the HDEV experiment has been "temporarily offline" for the last couple of days. There are some recorded clips on Ustream from last week if you want to see what the footage looks like. Just pretend that it's real-time.


This rather awesome real-time video stream (which also includes the ISS-to-mission control audio feed) comes by way of the High Definition Earth Viewing experiment. HDEV is notable because it consists of four commercial off-the-shelf (COTS) high-definition video cameras that are each enclosed in a pressurized box but are otherwise exposed to the rigors of space (most notably cosmic radiation). The purpose of HDEV, beyond providing us with a live stream of our own frickin' planet, is to see if commercial cameras are viable for future space missions, potentially saving a lot of money (space cameras have historically been expensive, custom-designed things).

HDEV, which consists of just a single enclosure, was delivered to the ISS a couple of weeks ago by SpaceX CRS-3. The box was connected up to the underside of the ISS via EVA/spacewalk, with one camera pointing forward (Hitachi), two cameras facing aft (Sony/Panasonic), and one pointing nadir (Toshiba, down towards Earth). If you watch the stream you will notice that it hops between the four cameras in sequence, with gray and black color slates in between each switch. If the feed is permanently gray then HDEV is switched off — or communications have been lost. Also note that the ISS has an orbital period of just 93 minutes — for a considerable part of that time the station is in the Earth’s shadow and can’t see much.
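As a rough back-of-the-envelope check on that last point, assuming a circular 400 km orbit, a cylindrical Earth shadow, and the Sun lying in the orbital plane (the worst case), the eclipsed fraction works out to roughly 40% of each orbit:

    import math

    R_EARTH = 6371.0    # km, mean Earth radius
    ALTITUDE = 400.0    # km, approximate ISS altitude
    ORBIT_MIN = 93.0    # minutes per orbit, as quoted above

    r = R_EARTH + ALTITUDE
    # The station is eclipsed while it sits behind the Earth, within
    # +/- asin(R/r) of the anti-Sun direction along its orbit.
    shadow_fraction = math.asin(R_EARTH / r) / math.pi
    print(f"~{shadow_fraction:.0%} of the orbit in shadow, "
          f"or about {shadow_fraction * ORBIT_MIN:.0f} of {ORBIT_MIN:.0f} minutes")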

Inside the forward-facing HDEV box
Inside the HDEV box. The Hitachi camera is in the top left, the Sony and Panasonic cameras are in the top right, and the Toshiba camera is along the bottom edge.

HDEV operations

HDEV operational diagram
The active video camera is connected to the ISS Columbus module via an Ethernet link, and then beamed down to the ground. From there, it looks like the video feed is combined with the current ISS-to-mission control audio feed, and then simply uploaded to Ustream. It’s an impressively simple (and cheap) setup.
It’s also worth mentioning that parts of HDEV were designed by American high school students through NASA’s HUNCH program. It’s good to see NASA fostering the next generation of astronauts and scientists!

ISS HDEV, Mediterranean
A very cloudy Spain, as seen from the International Space Station

ISS HDEV, Mediterranean
Just off the east coast of Spain, the Mediterranean. With the north coast of Africa in the distance I think.

International Space Station, HDEV, night time

In this photo, the International Space Station is moving into night time (pre-dawn) above Sudan in Africa

The photos in this story are screenshots from the video feed. I think they're mostly of Spain and north Africa (the top photo is of Libya). It seems to be pretty cloudy on Earth today, though — I watched the feed for a couple of hours and never really got a clear shot of the ground.

Bill Gates hints that Microsoft could sell off its Xbox division


The giant Xbox One in Vancouver


Bill Gates, Microsoft’s newly appointed “technical advisor,” has reignited the perennial debate about whether Microsoft should sell off its Xbox and Bing businesses. For years, analysts and investors have leaned on Microsoft to sell off the loss-making Xbox division. With new CEO Satya Nadella saying that the company should focus on its core markets, rumors that Stephen Elop (now Microsoft’s hardware chief) wanted to sell off the Xbox and Bing businesses, and now Gates’ comments that he would “absolutely” support the CEO if he chose to sell off Xbox, is a sell-off or spin-off imminent?
In an interview with Fox Business, Gates answered some probing questions about the possibility of selling off the Xbox and Bing divisions — and while he didn't come out as being for or against a potential spin-off, his responses strongly hinted that the company is continuing to ponder the possibility. Gates said he's certain that "Satya and the team would look at that [selling the Xbox division] and it's up to them." If the new CEO chose to do so, he would "absolutely" support him. Following Gates' comments, Microsoft's communications chief Frank Shaw was quick to tweet: "Bill's comments re Xbox reflected support of Satya as CEO" — not support for a possible sale of the Xbox division.

The PS4 Rhombox, with controller
It probably doesn’t help matters that the PS4 is massively outselling the Xbox One

It’s also worth pointing out that Bill Gates said in the same interview: “We’re taking PC gaming, Windows gaming and Xbox gaming, and bringing those a lot closer together.” Here he is probably referring to the fact that Microsoft now uses a common Windows 8-based kernel across PC, smartphone, tablet, and game console — and if these platforms are intrinsically linked, he says a sell-off is “not as obvious as you might think.”

In short, then, while Gates might’ve spoken out of turn, we should try to keep his comments in context. We would also be wise to remember that, in a world where tweets and blog posts can be shared around the world while the truth is still putting its shoes on, one slightly misspoken word can lead to a whole lot of exciting, but ultimately superfluous debate.

Microsoft’s three CEOs on stage: Bill Gates, Satya Nadella, Steve Ballmer

Still, when all's said and done, it wouldn't be all that surprising if Microsoft does actually sell off the Xbox and Bing divisions. Stephen Elop, when he was being considered for the Microsoft CEO position, was rumored to be considering a sell-off. Now that he's head of Microsoft's hardware efforts, which include Surface, Xbox, and the newly acquired Nokia handset business, he's in an ideal position to champion such a spin-off. As we've already covered, I doubt Microsoft cherishes the idea of inheriting a barely profitable 250-million-handsets-a-year business. It will be years (if ever) before the handset business is integrated with the rest of Microsoft's efforts.

Outwardly, as far as we can tell, Microsoft still fancies itself as a products-and-services company. In a perfect world, I’m sure Microsoft sees itself selling smartphones, tablets, and game consoles that are perfectly married to Windows, Office, Bing, and Azure. It isn’t going to happen quickly, though, and I’m sure there’ll be a lot of pressure to sell off its non-core (i.e. non-software) businesses over the next few years.

Xbox One to be the first game console sold in China in 14 years, but the PS4 still rules the West

Xbox One available in China (China unbans game consoles)

In an exciting move that could turn the console war back in its favor, Microsoft's Xbox One will soon become the first major game console to be sold in China since they were banned in 2000 due to concerns that they would melt the brains of children. The ban was finally lifted at the end of 2013, but only for consoles produced in Shanghai's new free trade zone. It would seem that Microsoft is the first big console maker to set up a production line there, with a target domestic release date of September 2014. In case you weren't aware, China is now the biggest market in the world for consumer electronics — if the Xbox One is a success there, it could easily pull ahead of Sony's PlayStation 4.


Way back in 2000, China banned game consoles that weren't made by Chinese companies (i.e. all of the major ones). This was ostensibly due to a parental outcry over consoles affecting the productivity and mental well-being of their kids. In reality, it was probably just to keep foreign countries (and culture) out, which China is rather fond of doing. In September 2013, Microsoft announced that it had invested $240 million in a joint venture with BesTV, a domestic technology company — and almost simultaneously China dropped the console ban. To be fair, China might still be concerned about the effects of game consoles on children, but it would seem those fears, in a pinch, can be assuaged by oodles of money.

China is the most populous country in the world, the largest market for consumer electronics in the world (as of 2013/2014), and according to Microsoft’s Yusuf Mehdi there’s “over a half a billion gamers.” BesTV, explaining why it chose to partner with Microsoft, says that the Xbox One is the “most amazing family friendly entertainment product in the world.” All of these factors, in short, could result in millions and millions of additional Xbox One sales — and thus the closing of the widening gap between it and the PS4.

Despite being illegal, you can still buy gray market consoles in China [Image credit: Kotaku]

I think the reality of the situation is a little more complex, though, especially for Western gamers (i.e. the Xbox One’s core market). International consoles, despite being banned for the last 14 years, have always been available through the gray market. In fact, despite the ban, there are plenty of stores — in bright, public places — that sell consoles and games that have been imported from Hong Kong and Taiwan, where the ban doesn’t exist (or isn’t enforced). The domestic production of Xbox Ones will probably increase sales by some margin (gray market imports are expensive), but I don’t know if we’re talking about millions of additional units here.

The thornier issue is the matter of local, domestic games. With the Xbox One now being legally recognized in China, Microsoft and BesTV will be working hard to get local developers to produce games that are specifically targeted at the Chinese audience. This is obviously great for Chinese gamers, but it's either neutral or negative for Western gamers depending on your point of view. The reason the PS4's sales advantage is so significant is that it will govern which console is the primary development target for next-gen games: If the PS4 has 50% more Western users than the Xbox One, then the PS4 will get all of the best Western games. Even if China puts the Xbox One way ahead of the PS4 in terms of total sales, it won't suddenly mean that Xbox One owners in the West get special treatment.

Microsoft’s backwards policies are hurting the Xbox One


Broken Xbox One

Since the Xbox One was revealed last May, Microsoft has made some major course corrections. The draconian always-online requirements were scrapped, and the execution of used game dealers was stayed, but a number of serious issues still plague the Xbox One. Worst of all, Microsoft is demanding launch parity for indie ID@Xbox titles. Because of this short-sighted policy, many notable indie devs are steering clear of the Xbox One completely.

In an interview with our sister site IGN, Curve Studios’ Rob Clarke explains exactly why his company is developing titles for the Wii U and PS4, but has nothing currently slated for the Xbox One. In the ID@Xbox agreement, there sits an insidious requirement dubbed “the parity clause.” This clause requires that these indie games must not have been released previously for other consoles. For small indie developers already working with Sony and Nintendo, this clause effectively means that the Xbox One is persona non grata.


Meanwhile, Sony is knocking it out of the park in terms of developer relations. Fan favorites like Nidhogg, Spelunky, and Escape Goat 2 were among a list of a dozen indie games scheduled for release on the PS4. If Microsoft wants to cater to the indie scene like Sony is, the parity clause has got to go. In the end, it does little more than shorten the list of titles available for the Xbox One.
Unfortunately, the demand for launch parity isn’t the only problem facing the Xbox One. The $500 price tag is an albatross around the Xbox One’s neck, and the cost of the Kinect is largely to blame. To close the PS4 sales gap, Microsoft needs to drop the MSRP, and make the Kinect an optional purchase. $400 is the right price for the Xbox One right now, and anything more than that means giving up even more ground to Sony.

Project Morpheus

Sony’s Project Morpheus virtual reality headset could arrive as soon as next year
If Microsoft is serious about fixing the Xbox One and its flagging sales, there are also a few smaller tweaks that would help its case. Sony has been quite aggressive with its plans for virtual reality and game streaming, but Redmond has remained relatively quiet on the subject. Getting out in front of these major trends would help mend Microsoft’s damaged image, and portray the Xbox One as a platform with a bright future.

While we’re on the subject, Microsoft also needs to remove the paywall from video apps on its consoles, and release official PC drivers for the Xbox One controller. It’s surprising to see how Microsoft is still dragging its feet on even the most minor of issues, and it’s slowly squeezing the life out of its new console. The time for action is now, but all we’re hearing is deafening silence.


Virtually perfect: The past, present, and future of VR


Virtual Reality

Virtual reality is currently the hot new thing in the world of gaming. Facebook’s Oculus Rift and Sony’s Project Morpheus headsets consistently make headlines, and it’s well-known that Valve is actively developing its own VR implementation. If these technology giants actually execute on the promise of “VR that doesn’t suck,” virtual reality has the potential to invigorate the industry like nothing else before. However, the Rift and Morpheus have decades of VR failure to fight against. Even if the finished products are solid, can they convince the average consumer that virtual reality is worth the investment?

A truly compelling VR solution has been a long time coming, and we’ve seen countless attempts in the past. The first real efforts to produce a virtual reality were made in the middle of the 20th century, and there have been far too many prototypes and failed products to cover in a single article. Instead, I’ll be focusing on the more recent history of virtual reality, and how it transitioned from a crazy sci-fi concept to a tangible household item that can take us anywhere we want to go.

Virtual Boy

Nintendo Virtual Boy

When I first saw the Oculus Rift Kickstarter project in August of 2012, I immediately thought back to the mid-90s when Nintendo released the Virtual Boy. Designed by Gunpei Yokoi (creator of the Game Boy), this red monstrosity wasn’t quite a home console, but it wasn’t portable either. This oddball device only displayed monochromatic red images, supported a mere 14 titles in North America, and was abandoned less than a year after its initial launch.

The high asking price and strange hardware configuration surely contributed to the Virtual Boy's failure, but ultimately it came down to the disappointing tech. Sure, it could produce 3D images, but the result looked markedly worse than existing SNES titles. The Virtual Boy also had to be used while sitting awkwardly still at a table, so extended play was completely impractical. No head-tracking here! The mass-production of the Virtual Boy was certainly a large step forward in consumer VR, but the lackluster implementation left a bad taste in everyone's mouth, and led to the ousting of Yokoi from Nintendo.

VFX-1
Image credit: Trypode

Head-mounted displays of the ’90s and ’00s

Throughout the 1990s and 2000s, a number of different head-mounted displays (HMDs) were released, to little fanfare from consumers. I-O Display Systems had its obnoxiously-named "i-Glasses," VictorMaxx had a number of HMDs with names like "Stuntmaster" and "CyberMaxx," and Forte Technologies had its "VFX" line. By and large, these HMDs were complete failures. The underlying tech just wasn't good enough yet, and the large price tags prevented even the slightest bit of traction from forming.
Among these headsets, the resolution and overall performance were usually underwhelming, but the worst part was how half-hearted so many of the implementations were. Some models didn't feature motion tracking, and others lacked a stereoscopic 3D display. Given the technological constraints of the era, it's easy to see why these VR headsets weren't quite right. Even so, these companies were still selling helmets for hundreds of dollars a pop, so let's not let them off the hook completely. Anyone snookered into buying one of these devices was most certainly disappointed in the modest hardware and severe lack of content.


TrackIR

Unlike the HMDs, TrackIR does one thing, and it does it well. This little sensor sits on top of your monitor, tracks the movement of your head, and translates that motion data into camera movements in first-person games. In very complex games like flight simulators, it’s important that the camera quickly moves exactly where the player needs to look, so the TrackIR’s head-tracking is a perfect fit.
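NaturalPoint's actual processing is proprietary, but the core idea, amplifying small head rotations into much larger in-game camera rotations while ignoring tiny movements, can be sketched in a few lines; the dead zone and gain values below are illustrative, not TrackIR's real defaults.

    def head_to_camera(head_yaw_deg: float,
                       dead_zone_deg: float = 1.0,
                       gain: float = 6.0,
                       max_camera_deg: float = 180.0) -> float:
        """Map a small physical head rotation to a larger in-game camera rotation.

        The dead zone keeps the view steady while you read instruments, and the
        gain lets a comfortable ~30-degree head turn sweep the camera most of the
        way around the cockpit.
        """
        magnitude = abs(head_yaw_deg)
        if magnitude < dead_zone_deg:
            return 0.0
        camera = min(gain * (magnitude - dead_zone_deg), max_camera_deg)
        return camera if head_yaw_deg >= 0 else -camera

    # Turning your head 20 degrees to the right looks ~114 degrees right in-game.
    print(head_to_camera(20.0))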

TrackIR
TrackIR has been around for well over a decade now, and it has seen six major revisions in that time. While it’s been criticized for its high asking price in the past, the most recent model is available for only $150. It’s certainly not as impressive or complete as the Rift or Morpheus, but it’s a must-have for serious simulation fans.

Motion controls

When the Wii's motion controls were announced, many pundits scoffed at the idea. As it turns out, the quirky use of accelerometers and IR triangulation was a hit. The Wii became one of the best-selling consoles in history, and it inspired other companies to develop their own motion controls. The PlayStation Move and Kinect were both responses to the success of the Wii, and continue to inform the latest generation of game consoles.

Playstation Move

While motion controls certainly don’t offer a complete VR experience, they do offer a lot of the same benefits when implemented properly. Maneuvering our bodies and limbs in a 3D space is something we’ve evolved to be quite good at, so harnessing intuitive motions helps add to the immersion. The less conscious thinking you have to do about controlling a game, the easier it is to become immersed completely in the virtual world. If you want the complete VR package, motion controls are a must.



Oculus Rift

For much of the past two years, the Oculus Rift has been trumpeted as the savior of virtual reality. The first development kit was a hit among game developers and VR enthusiasts, and the second iteration is even better. It supports full 1080p (960×1080 per eye), has a 100-degree field of view, and refreshes at 75Hz. Better yet, it combines a built-in gyroscope, accelerometer, and magnetometer inside the helmet with a position-tracking camera to capture as much motion data as possible. All of that works together to make a very compelling product, and anyone can order one for $350.
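Oculus hasn't published its sensor fusion code here, but the standard trick for combining a fast-but-drifting gyroscope with a noisy-but-absolute accelerometer is a complementary filter; the sketch below shows a single-axis version with illustrative constants (the real headset also folds in the magnetometer and the tracking camera).

    def complementary_filter(pitch_deg: float,
                             gyro_rate_dps: float,
                             accel_pitch_deg: float,
                             dt: float,
                             alpha: float = 0.98) -> float:
        """One update of a single-axis complementary filter for head pitch.

        The gyroscope is integrated for smooth, low-latency tracking, while the
        accelerometer's gravity-based estimate slowly pulls the result back and
        cancels the gyro drift.
        """
        gyro_estimate = pitch_deg + gyro_rate_dps * dt
        return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch_deg

    # Example: three updates at 75Hz (matching the DK2 refresh rate quoted above).
    pitch, dt = 0.0, 1.0 / 75.0
    for gyro_rate, accel_pitch in [(30.0, 0.5), (28.0, 1.0), (25.0, 1.4)]:
        pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt)
    print(round(pitch, 2))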

Oculus Rift

With the recent acquisition by Facebook, the future of the Rift is up in the air. While Valve and other developers seemed very positive about the Oculus Rift when it was a fun indie project, ownership by a cut-throat megacorp like Facebook could certainly sour developer relations. If I had to guess, the Facebook acquisition will spur even more competition in the market. Nobody wants to live in a dystopia where Facebook rules the realm of VR gaming, so expect even more Rift competitors to pop up in the next year.

Project Morpheus

Speaking of competition, let’s take a look at Sony’s Project Morpheus. This HMD prototype was publicly unveiled at the Game Developers Conference earlier this year, and the reaction was very positive. This prototype is roughly on par with the second Rift development kit — offering a 1080p display, a 90-degree field of view, accelerometers, gyroscopes, and positional tracking with the PlayStation Camera.

Project Morpheus

There’s no telling when this device will hit store shelves, nor can we guess the final MSRP. All we know at this point is that Sony believes virtual reality is important to the future of the PlayStation, and it has the technical chops to deliver an experience comparable to the Oculus Rift. Let’s just hope that this public preview of Project Morpheus puts pressure on Microsoft and Valve to innovate in this space as well.

Military

Military and industrial VR

Of course, video games aren’t the be-all and end-all for virtual reality. The uses are nigh-on endless, really. The military can use VR to safely train soldiers, pilots can use VR to learn to fly, and surgeons can use VR to practice precise incisions. In fact, all of those professions already use VR technology for training purposes, so your life may have already been saved thanks to VR training.
As the tech becomes better and cheaper, even more industries will benefit from VR training. Imagine letting your 16-year-old kid log hundreds of hours behind the virtual wheel before she gets on the road. Think about learning a trade from the comfort of your own home. Virtual reality has the potential to be an amazing tool in every industry — even sex work.

The long march to perfection

We’re still a long way away from the virtual reality from science fiction, but these last few years have given me hope that something along the lines of The Matrix is within our grasp. When you look at motion controls like the PlayStation Move, head mounted displays like the Rift, and the advances we’re making in neuroscience, real virtual reality actually seems plausible.
Sure, it'd be incredibly fun to play Doom 4 with an HMD and motion controls, but that's small potatoes. Imagine revisiting historical events in the first person. Consider what it would be like to skydive into the Grand Canyon without any risk of injury. The possibilities are endless, and products like the Rift are just the beginning.





AMD launches new Beema, Mullins SoCs: Higher performance at almost-low-enough TDPs


 Mullins CPU die shot crop

AMD’s Beema and Mullins, both of which officially launch today, are an iterative improvement to the low-power Kabini (notebook) and Temash (tablet) SoCs it shipped nearly a year ago. Normally, follow-up launches like these implement minor frequency boosts or offer slightly better power consumption. With Beema and Mullins, AMD is promising a great deal more than that. These two new cores (Mullins is the tablet chip, Beema is for laptops) offer vastly improved performance per watt.

Neither SoC is a major departure from the previous architecture. Cache structure, branch prediction, and instruction set compatibility are the same between Kabini/Temash and Beema/Mullins, and the new chips do not support HSA. What AMD has done instead is innovate around the edges. Turns out that if you do that right, you can still turn in some impressive gains, year-on-year.

Mullins CPU die shot, with labeled blocks

New features

Turbo Core: AMD’s Kabini and Temash could reduce their own clock speeds to save power but didn’t have a Turbo Mode for additional performance in single-threaded workloads. Beema and Mullins both add this capability to certain chips — Beema, the notebook processor, can burst up to 2.4GHz while Mullins, the tablet SoC, can ramp as high as 2.2GHz.

ARM TrustZone: Mullins and Beema are the first AMD processors to integrate a Cortex-A5 on-die for additional system security and management. TrustZone is analogous to Intel's Trusted Computing technology — a comparison ARM's own website makes. This is essentially a corporate- or government-oriented feature; there doesn't seem to be much consumer software that actually uses the TrustZone system.

Reduced leakage: AMD claims that it has reduced leakage current by 19% in Mullins as compared to Kabini. This isn't the same thing as reducing total power consumption, but it should still have a measurable impact. The on-board GPU has improved even more: Beema and Mullins have 38% lower leakage compared to Kabini/Temash.
A number of additional improvements were made to reduce power consumption in other areas — the display controller now draws less power when using DisplayPort and low-power DDR optimizations allowed AMD to reduce memory controller power consumption by 600mV compared to standard DDR modules.

New power management: This ties into the Turbo Core feature but is distinct enough to deserve its own mention. According to AMD, Beema and Mullins will include the ability to directly measure the skin temperature of the laptop or tablet and will adjust frequency based on how warm the chassis is — not just according to the silicon's own Tmax. Because heat dissipates out to the chassis rather slowly, AMD can therefore run its cores at a higher frequency for a longer period of time. According to AMD, Tskin will be a user-definable variable (it's not clear how this capability will be exposed in software).
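AMD hasn't detailed the algorithm, but the concept is easy to sketch: boost while the measured skin temperature sits comfortably below the Tskin limit, and taper back toward the base clock as the chassis warms up. The temperatures and clocks below are invented for illustration.

    def pick_frequency_mhz(skin_temp_c: float,
                           t_skin_limit_c: float = 45.0,  # user-definable Tskin
                           base_mhz: int = 2000,
                           boost_mhz: int = 2400) -> int:
        """Toy version of skin-temperature-aware Turbo Core.

        Real firmware would also respect the silicon Tmax and the TDP budget;
        this only captures the chassis-temperature part of the idea.
        """
        headroom_c = t_skin_limit_c - skin_temp_c
        if headroom_c > 5.0:
            return boost_mhz                      # chassis is cool: full boost
        if headroom_c > 0.0:
            # Taper the boost linearly as headroom shrinks from 5 C to 0 C.
            return int(base_mhz + (boost_mhz - base_mhz) * headroom_c / 5.0)
        return base_mhz                           # at or over the limit: base clock

    for temp_c in (35.0, 42.0, 44.0, 46.0):
        print(f"{temp_c:.0f} C skin -> {pick_frequency_mhz(temp_c)} MHz")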


Performance and TDP

AMD wasn’t able to provide us with a notebook for testing, so our insight into performance is somewhat constrained. The company did provide a robust set of data for three benchmarks — 3DMark 11 (graphics), PCMark 8 Home (traditional workloads) and Basemark CL (GPGPU compute).
AMD’s published results show modest gains from Kabini to Beema and Temash to Mullins. Graphics workloads gain the most — Beema is up to 10% faster than Kabini in 3DMark 11 and the new A4-6210 is 2% faster than the A4-5000 in PCMark 8. The gains are strongest at the lower end; the E1 Micro-6200T is 24% faster in 3DMark 11 and 9% faster in PCMark 8 than the 3.9W Temash it replaces.

Beema vs. Kabini 

Temash vs. Mullins 

The real improvements Beema and Mullins offer are on the power consumption side. The A6-6310 APU offers roughly the same performance as the A6-5200, but in a 15W TDP envelope, not 25W. The A4-6210 replaces the A4-5000, keeps that chip’s 15W TDP, but offers a 1.8GHz clock speed and 600MHz GPU (up from 1.5GHz and 500MHz respectively). The tablet improvements are even more impressive; the old Temash family had one dual-core chip at 3.9W but the other entrants were all 8W chips. The new Mullins chips, in contrast, have a maximum TDP of 4-4.5W and substantially more CPU and GPU headroom.
The full range of Beema and Mullins processors are listed below:

Mainstream Beema SoCs

AMD's "Max" column for Beema is a bit misleading; it implies that every Beema SoC has a defined Turbo Core. That's not actually the case — only the A6-6310 has a Turbo Core of 2.4GHz with a baseline clock of 2GHz. The A4-6210, E2-6110, and E1-6010 all operate at steady clocks of 1.8GHz, 1.5GHz, and 1.35GHz respectively. It's not clear why AMD didn't enable the feature for these chips, but the 1.8GHz A4-6210 should still offer a significant improvement over the older A4-5000 thanks to higher clocks on both the CPU and GPU.

AMD Mullins SoCs 

All of the new Mullins cores have a Turbo Mode with base frequencies in the 1-1.4GHz range and maximum frequencies as shown.


Will this refresh stave off Intel’s massive tablet push?

Beema and Mullins still draw too much power to fight Intel’s Atom across the entire range of its market, but AMD has clearly focused on pushing the chip into that 4.5W space in order to enable fanless systems and tablet designs. A quick check of Newegg shows that Kabini did fairly well over the past 12 months; there are multiple systems packing various iterations of the processor. The company has virtually no presence in tablets; there’s a lonely MSI tablet with an A4-1200 in it as compared to 229 separate Atom SKUs.

With Intel openly advertising the fact that it's shipping tablets at contra revenue (meaning below the total cost of manufacture) in order to build sales volume and market share, AMD runs the risk of being steamrollered by its larger, richer rival. The charts and graph below show AMD's historic improvement in power management and give us an intriguing look at what the company is going to implement in future processors.

Future products 

The purple bits are the most interesting, though none of them have timelines attached. AMD is apparently working on integrating its voltage regulator on-die, as Intel did with Haswell, and adapting inter-frame power gating to better manage power consumption in specific workloads. The idea of inter-frame gating is to use software to estimate the minimum power and frequency targets needed to deliver a workload within an acceptable amount of time, and then set those levels before processing begins. It’s another component of the “race to idle” concept.
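As a toy illustration of that idea (the cycle counts and frequency steps are invented), the logic amounts to picking the lowest clock that still finishes the estimated work for the next frame before its deadline, then committing to it before processing starts:

    def pick_dvfs_level_mhz(work_cycles: float,
                            frame_deadline_s: float,
                            available_mhz=(500, 800, 1000, 1400, 2000)) -> int:
        """Choose the lowest clock that completes the frame's work on time."""
        for mhz in sorted(available_mhz):
            if work_cycles / (mhz * 1e6) <= frame_deadline_s:
                return mhz
        return max(available_mhz)  # can't make the deadline: run flat out

    # ~10 million cycles of work in a 60fps frame only needs the 800MHz step.
    print(pick_dvfs_level_mhz(work_cycles=10e6, frame_deadline_s=1 / 60))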

Hopefully these improvements will help AMD gain traction in the tablet space, but it’s going to be facing an uphill fight on that front. The company has yet to announce a follow-up SoC for 2015 in this space — presumably we’ll see HSA integrated at that point, but it’s still not clear.




Sprint looks to acquire T-Mobile, but FCC blocks with new anti-hoarding ‘spectrum screen’


US spectrum allocation, cropped

On Monday, the Wall Street Journal reported that the FCC has decided on how it will adjust the so-called spectrum screen, a measure that prevents the anti-competitive gobbling up of spectrum. On Wednesday, Bloomberg reported that Sprint is working on financing for a summer bid on T-Mobile. However, the changes will make it extremely difficult, if not impossible, for Sprint to acquire any more spectrum outside of the two upcoming auctions.

Nearly two years ago, we noted that the FCC was reviewing the current valuation of spectrum. As part of this reevaluation, the FCC began considering how to update the spectrum screen. The spectrum screen is used by the FCC as a benchmark — when an acquisition of spectrum takes a single entity over a certain threshold, the spectrum screen necessitates further scrutiny of that transaction. Depending on how much it exceeds the screen, the FCC could block a deal based on the premise that undue concentration of spectrum hampers competition, which is against the best interests of the American people. This is often referred to as “the public interest.”
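Mechanically, the screen works something like the toy check below: add the spectrum a deal would transfer to what the buyer already holds in a market, and flag the transaction for closer review if the total crosses the benchmark. The one-third threshold and the MHz figures are illustrative stand-ins; the FCC's actual screen is computed market by market.

    def triggers_spectrum_screen(current_mhz: float,
                                 acquired_mhz: float,
                                 screened_total_mhz: float,
                                 threshold: float = 1 / 3) -> bool:
        """Return True if a deal would push one carrier over the screen.

        screened_total_mhz is the spectrum the FCC counts in that market; the
        one-third threshold here is only an illustrative placeholder.
        """
        post_deal_share = (current_mhz + acquired_mhz) / screened_total_mhz
        return post_deal_share > threshold

    # Hypothetical market: a carrier holding 120MHz of 400MHz screened spectrum
    # buys another 40MHz, ending up with a 40% share and triggering review.
    print(triggers_spectrum_screen(120, 40, 400))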

A tall cell tower

The landscape has changed dramatically since then. Verizon Wireless acquired AWS spectrum from the cable companies, which is allowing Verizon to boost the capacity of its network all over the country this year. AT&T made WCS usable and bought most of the WCS licenses (except Sprint's in the Southeast and Texas), and AT&T will finally be able to use the spectrum starting late this year. Sprint acquired Clearwire and all of its 2.5GHz licenses and leases, shut down the Nextel iDEN network operating on ESMR 800MHz, and began deploying LTE on both bands. T-Mobile acquired MetroPCS and began deploying enormous LTE channels on the combined company's AWS licenses. Dish gained AWS-4 and won all the PCS H block licenses at auction, enabling it to make a huge splash as a new LTE-Advanced cellular network operator, should it choose to do so. AT&T acquired Leap Wireless International, ending Cricket's existence as an independent wireless carrier. T-Mobile acquired Verizon's Lower 700MHz A block spectrum, enabling it to finally start expanding into rural areas in a much more cost-efficient manner.

All of these transactions have changed the balance of spectrum, and key to this is that aside from T-Mobile's purchase of 700MHz spectrum earlier this year, virtually all of the spectrum involved in these transactions is high-band and super-high-band spectrum. This has obviously factored strongly into the FCC's decision, since it decided not to implement any weighting based on whether the spectrum is low, high, or super high band. FierceWireless reported that the FCC has removed the non-auctioned Upper 700MHz D block spectrum (20MHz) and the SMR spectrum (12.5MHz). In their place, most of the 2.5GHz band (around 100MHz), the AWS-4 band (40MHz), and PCS H (10MHz) will be counted in the screen.
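Taking those reported figures at face value, a quick tally shows the screened pool getting substantially larger overall:

    # Reported changes to the pool of screened spectrum, in MHz.
    removed = {"Upper 700MHz D block": 20.0, "SMR": 12.5}
    added = {"2.5GHz band": 100.0, "AWS-4": 40.0, "PCS H block": 10.0}

    net_change = sum(added.values()) - sum(removed.values())
    print(f"Removed {sum(removed.values())} MHz, added {sum(added.values())} MHz, "
          f"net change of +{net_change} MHz of screened spectrum")
    # Removed 32.5 MHz, added 150.0 MHz, net change of +117.5 MHz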

FCC panel

The proposed changes to the spectrum screen would assuredly make Sprint's attempt to put together a deal to acquire T-Mobile US, which is currently valued at around $24 billion, very difficult. Bloomberg reported on Wednesday that Sprint has lined up financing with six banks and is currently in discussions on the structure of a deal. However, the FCC is unlikely to look very favorably on such a deal, which may be why the updated screen is structured specifically to discourage further consolidation in the industry. Interestingly, the FCC will be ignoring the updated screen for the upcoming AWS-3 and 600MHz band auctions. This should incentivize Sprint, AT&T, and Verizon to go after spectrum in these upcoming auctions.

It's not surprising that the FCC is pushing so hard to "encourage" auction participation, but since it is for the benefit of improving our public safety networks and improving competition in the commercial mobile industry, this is a good thing. Hopefully, the FCC will meet its marks to fund FirstNet, the organization responsible for deploying a new national LTE-based PPDR (public protection and disaster relief) network. But the FCC is playing with fire here. Sprint has mentioned its disinterest in the auctions in the past. However, if Sprint can't acquire T-Mobile US, it will most likely go after these auctions. Which suits the FCC just fine.

Google’s Android Silver program could take on the iPhone and end the Nexus brand


Silver

Android has succeeded in taking over most of the mobile market, but that's largely thanks to the marketing efforts by Samsung and others that got the Google name and the associated Android brand out there. Most users have never used one of the stock Nexus devices that run Google's preferred version of Android. That may all be about to change with the roll-out of a new, secret Android Silver program, which would see Google essentially paying OEMs to ship devices with stock or near-stock versions of Android. This could be the beginning of a new era of more usable Android phones, but it may also be the end of the Nexus program as we know it.

Pundits have been calling on Google to focus on the customer experience for years now in order to better compete with the iPhone. According to the leaked information, that’s very much what Android Silver is going to be. Google will reportedly pay OEMs to limit the bloatware and modifications made to a small number of devices, as many as five at any given time, according to The Information. In return, Google will provide better software support for users and help promote these devices online and in carrier stores as a single brand — a brand that may replace the fabled, enthusiast-friendly Nexus line.

Google Play

A Silver device could be something produced exclusively for the program, or a new version of an existing device. This second scenario would be more like the current Google Play Edition line, which is likely to be merged into Silver as well. All the Silver devices will have very nearly stock software, perhaps with one or two OEM customizations Google deems fit — think Active Display from the Moto X, not arbitrary reskinning a la Samsung. This will ensure they get prompt updates each time a new version of the platform comes out, essentially proliferating the Nexus program beyond its original single device framework.

Something that is still an unknown in all this talk about Android Silver — what about development devices? Google uses the Nexus platform to build and test Android, so can it simply toss the program out? There are times when you simply need an unlockable bootloader and full system images. Based on the information out there, it sounds like there may be room for Google to commission Android Silver devices like it does the Nexus right now. If that's the case, there could still be a Google-preferred device that it sells online alongside all the others with an open bootloader and full system images. The variety of hardware configurations in Silver could also still leave room for $300-400 unlocked phones.

Nexus

Whatever form Android Silver eventually appears in, it will no doubt be sold to people as a more consistent “Google” experience for high-end Android devices. That’s certainly a good thing as most Android users have never owned a “stock” device, which tends to be a better experience. Behind the scenes, though, this move could be just as much about leveling the playing field to keep Samsung from gobbling up everyone else’s market share.

By agreeing to make an Android Silver phone, an OEM gets free software support and advertising from Google. If the consumer-facing results are as good as Mountain View is hoping, it will mean more sales for less of an investment. This is a way for Google to help OEMs without just handing over sacks of cash, which would raise the ire of market-leading Samsung. At the same time, it ties Android together in our collective consumer consciousness. The platform ceases to be a cacophony of TouchWiz, Sense, and Optimus — it settles down into a gentle tapestry of Android Silver, with a splash of TouchWiz here and a hint of Sense there. Android Silver could come to the US, Germany, and Japan as soon as early 2015.

Fastest Mobile Networks 2014 needs your help


Fastest Mobile Networks 2014

Everyone thinks they know which mobile network is the fastest, which is the most reliable, and which has the best coverage in their area. Chances are that you have some pretty good anecdotal evidence about the correct answer for each question, but the simple fact is that someone should do some real testing to figure out which of Cleveland’s mobile networks is the fastest and if Verizon actually is more reliable than Sprint in Philadelphia. That’s exactly what PCMag’s Fastest Mobile Networks does each year.

Our colleagues at PCMag organized Fastest Mobile Networks (FMN) in order to quantitatively test which of the nation's major mobile networks is the fastest in each segment of the country, which is the most reliable, and which has the best coverage. It turns out that this sort of testing is a huge undertaking, but by getting six cars, filling each with eight LG G2 smartphones, and then sending them on predetermined courses around the country, it's possible to get a good idea about the status of the nation's mobile coverage. These routes cover 30 major cities and all the areas between them, with a huge amount of data collected along the way.

FMN 2014
The FMN car fleet is very good at what it does, but it can’t offer complete coverage of the country. That’s why the other component of Fastest Mobile Networks data is crowdsourced. Anyone in the US with a mobile device can load up Sensorly’s free app (available on both Android and iOS), take a speed test, and contribute their anonymized data to the test. There are other reasons to keep the app after you do the test, such as for coverage maps and data usage analysis, so you’ll be doing more than helping out the FMN team when you download it.
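Sensorly's measurement pipeline isn't public, but the heart of any crowdsourced speed test is simple: time the download of a known file and report the observed throughput (a real app would also attach anonymized location, carrier, and radio-type data). A bare-bones sketch, with a placeholder URL:

    import time
    import urllib.request

    def measure_download_mbps(url: str, chunk_size: int = 64 * 1024) -> float:
        """Download a test file and return the observed throughput in Mbit/s."""
        start = time.monotonic()
        total_bytes = 0
        with urllib.request.urlopen(url) as response:
            while True:
                chunk = response.read(chunk_size)
                if not chunk:
                    break
                total_bytes += len(chunk)
        elapsed = time.monotonic() - start
        return (total_bytes * 8) / (elapsed * 1_000_000)

    # The URL below is a placeholder, not a real test server.
    print(f"{measure_download_mbps('https://example.com/10MB.bin'):.1f} Mbit/s")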

Fastest Mobile Networks 2014 runs from May 1 until the end of the month, during which time people will contribute their own data and the cars will be making their way across the country. You can follow the team’s progress at @pcmphones and get live updates from each driver. At the end of the testing period PCMag’s team will pool the data, choose each area’s best carrier, and then decide which of America’s mobile networks reigns supreme.

The final results will be freely available on PCMag.com — not hidden away in a white paper or bundled into an expensive report — so you’ll be able to see the outcome you contributed to and get a better idea of which network is the best in your part of the country.

Crysis 3 hacked to run at 8K, gives us a beautiful glimpse of gaming in the future


Crysis 3 at 8K

You know how your computer struggles to run Crysis 3 at 1920×1080 at very high detail? And you know how beautiful it looks, even at that humdrum resolution? Well, now an enthusiast called K-putt has used a hacked Crysis 3 executable to run the game at 8K — and I’m not being hyperbolic when I say that the resultant screenshots are probably the most beautiful examples of real-time computer-generated graphics that I’ve ever seen. Crysis 3 at 8K looks so good that you could be mistaken for thinking that these screenshots are hand-drawn concept art — but in fact they’re straight out of the engine. Don’t get your hopes up for 8K gaming any time soon, though: K-putt has a pretty beefy gaming rig, and yet it could only muster 2 fps when running at 8K resolution.
As you’re probably well aware, ever since Crysis was first released in 2007, it has been the benchmark for graphics card performance and beautiful real-time visuals. To this day, “does it run Crysis?” is still a pretty common meme in computer hardware circles. Crysis 3 isn’t quite as crippling as the first game — it runs at reasonable frame rates on most mid-to-high-end graphics cards — but it’s still one of the best ways to stress a graphics card, and it’s still one of most beautiful games on the market.

Crysis 3 at 8K, cityscape

Crysis 3 at 8K, cityscape. For full-size images, see the Flickr link below.
To get Crysis 3 up to 8K, K-putt had to use a program called OnTheFly that produces a hacked version of the main Crysis executable file. This new EXE enables lots of new functionality through the in-game console — cvars, in game engine speak — allowing him to pump the resolution up, change the draw distance, and make other changes on the fly. He also used SweetFX, a program that allows you to customize a game engine’s shader code, to tweak the output’s colors and contrast. The end result still looks a lot like Crysis 3, just a little more contrasty. It’s also worth pointing out that K-putt didn’t actually get up to 8K (7680×4320), instead settling for an ultra-wide-screen 8000×3333 and portrait 3750×5000 shots. (Read: Triple monitor madness: GTX Titan, GTX 680, and Radeon 7970 go head-to-head at 5760×1080.)

At 8000×3333, each frame consists of 26.6 million pixels. By comparison, a 1920×1080 frame is a measly 2 million pixels. The full-res screenshots uploaded by K-putt clock in at 24 megabytes. In short, rendering Crysis 3 at 8000×3333 at a playable frame rate requires around 13 times the processing power of your high-end gaming rig. We're not talking about a brace of GTX Titans here — we're probably talking about 10 GTX Titans, all working in perfect synchrony, to deliver a decent frame rate. (In case you're wondering, true 8K is 33 million pixels per frame, or 16 times 1080p.)
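The pixel math is easy to verify:

    resolutions = {
        "1080p": (1920, 1080),
        "K-putt's ultra-wide": (8000, 3333),
        "true 8K UHD": (7680, 4320),
    }

    base_pixels = 1920 * 1080
    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels / 1e6:.1f} megapixels "
              f"({pixels / base_pixels:.1f}x 1080p)")
    # 1080p: 2.1 MP (1.0x); 8000x3333: 26.7 MP (12.9x); 7680x4320: 33.2 MP (16.0x)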

Crysis at 8K, external landscape

Crysis at 8K, external landscape. See the Flickr link below for the full-res download.
This is why, on K-putt’s computer — a quad-core Haswell Xeon 1230v3 with an overclocked Radeon 7950 — he only gets around 2 frames per second. Speaking on Reddit, he says that he played at a normal resolution to navigate to the perfect screenshot location — and only then did he change to 8K resolution before taking the shot. There is a gallery of full-resolution screenshots on K-putt’s Flickr – and incidentally, he’s also done some similar work with games like Dark Souls 2 and Tomb Raider. (Read: 8K UHDTV: How do you send a 48Gbps TV signal over terrestrial airwaves?)

Because each step up in display resolution multiplies the pixel count, and because each new generation of GPU delivers diminishing gains, it will probably be a decade or longer before 8K gaming becomes a mainstream reality. Even 4K (3840×2160), which clocks in at around 8 million pixels per frame (four times 1080p), is still a few years away from single-GPU reality. But hey, considering 4K monitors and TVs are just starting to hit sub-$1,000 price points, and 8K displays are still one-of-a-kind prototypes, I don't think there's any real rush. Oh, and let's not forget that consoles won't support 4K until at least the next generation (probably 5+ years away), and PC games aren't likely to adopt greater-than-1080p resolutions much before consoles.

ZeniMax threatens to sue Oculus VR for IP theft: Is Carmack a liability?


Oculus Headset

Now that Oculus has Mark Zuckerberg as a sugar daddy, it seems that ZeniMax wants in on some of that sweet Facebook cash. John Carmack's former employer is claiming that Oculus VR is unfairly using its intellectual property, and it's threatening to take action. John Carmack and the Oculus team quickly and publicly rebutted ZeniMax's claims, but the legal situation is bound to get even stickier from here.
It seems that Oculus VR can’t go very long without drama these days. Carmack abandoned ZeniMax (and Id Software) in favor of Oculus VR late last year, and Facebook bought the fledgling virtual reality company for two billion dollars just over a month ago. The internet immediately began frothing at the idea of the social media empire encroaching into the world of VR, but all the ZeniMax legal team sees is an opportunity to get a piece of the pie.

Oculus Money
Image credit: metavariable

Over at The Wall Street Journal, it recently came to light that ZeniMax is targeting Oculus and Facebook because of some unnamed intellectual property that John Carmack supposedly "improperly shared" with the Oculus team. Blood is in the water, and ZeniMax smells it.
Of course, Carmack was quick to reply over Twitter. He specifically claims that nothing he’s ever worked on has been patented, and ZeniMax only owns the code he wrote under its employ. In a follow-up tweet, Carmack goes on to clarify that “Oculus uses zero lines of code that I wrote while under contract to ZeniMax.” He clearly thinks that ZeniMax is in the wrong, but that’s not much of a surprise considering his current position as CTO of Oculus VR.

Yesterday, an Oculus representative made an official statement to the press: "We are disappointed but not surprised by ZeniMax's actions," he said, "and we will prove that all of its claims are false." This statement reiterates that none of ZeniMax's code is used in the Oculus product, and Carmack didn't use any of its intellectual property. Most damningly, Oculus points out that the complete source code of the Rift is available online, but ZeniMax has never once identified a single piece of pilfered technology.

It remains to be seen if ZeniMax’s claims have any legitimacy, but it leaves me wondering if Zuckerberg is suffering from buyer’s remorse right about now. Sure, John Carmack and the Oculus team are incredibly smart and talented, but the baggage that comes along with them might be a bit more than Facebook bargained for.

U-2 Cold War spy plane causes air traffic control to crash, grounds hundreds of planes


U-2 Dragon Lady, in flight


Last week, a 1950s Cold War spy plane — the Lockheed U-2 Dragon Lady — caused hundreds of planes across the US to be grounded for an hour, and delayed hundreds more that were already airborne. The U-2, which was just minding its own business at an altitude of 60,000 feet above southern California, triggered a software bug that caused the FAA's air traffic control system to "overload" and shut down. The backup system also failed, presumably for similar reasons. The beautiful irony is that both the plane and the air traffic control software were created by Lockheed. How did an ancient plane that has been trawling the skies for almost 60 years cause such a catastrophic failure?

The U-2 Dragon Lady, so called after the plane’s original CIA code name, was originally conceived in the early ’50s as an ultra-high altitude reconnaissance aircraft that could evade the Soviet Union’s air defenses. It was believed back then that Soviet radar maxed out at an altitude of 65,000 feet (12 miles, 20 kilometers) — and so the U-2 was designed to have a service altitude of 70,000 feet or higher. Much to the US and UK’s dismay, it would eventually come to light that the Soviets had improved their radar considerably, and could reliably detect overflights made by the U-2. By 1957, after just a couple of years of operation, the single-seat, single-engine U-2 had enjoyed unprecedented success as a spy plane, photographing around 15% of the Soviet Union. It wouldn’t be until 1960 that the Soviets finally managed to shoot a U-2 out of the sky using an SA-2 surface-to-air missile. (The pilot, Francis Powers, survived — but so did much of the plane, rewarding the Soviets with lots of tasty intel.)

An old-school U-2 with its various loadouts
As far as armament goes, the original U-2 was mostly outfitted with a massive camera — and I really mean massive: a 180-inch (4,500mm) f/13.85 lens with 13-inch (33cm) square format photographic film. From an altitude of 65,000 feet, the camera/film could resolve details as small as 2.5 feet (76cm). The U-2's image quality and resolution have only really been surpassed by orbiting satellites in the last couple of decades. The modern-day U-2S can be outfitted with a range of digital sensors, allowing it to be used in both war (it saw active duty in Afghanistan and Iraq) and civilian/domestic settings. While satellites are now the reconnaissance tool of choice, airplanes like the U-2 are very useful if you need a spotter in the sky quickly and can't wait for a satellite to maneuver. Eventually the U-2 will probably be retired completely in favor of unmanned drones like the RQ-4 Global Hawk. (Read: DARPA shows off 1.8-gigapixel surveillance drone, can spot a terrorist from 20,000 feet.)
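As a rough sanity check of those numbers, using simple pinhole-camera geometry and ignoring atmospheric effects and film grain:

    FOCAL_LENGTH_M = 4.5            # the 180-inch (4,500mm) lens
    ALTITUDE_M = 65_000 * 0.3048    # 65,000 feet in metres
    GROUND_DETAIL_M = 0.762         # the quoted 2.5-foot resolvable detail

    # For a distant subject, the image scale is roughly altitude / focal length.
    scale = ALTITUDE_M / FOCAL_LENGTH_M
    detail_on_film_mm = GROUND_DETAIL_M / scale * 1000

    print(f"Image scale of roughly 1:{scale:,.0f}")
    print(f"A 2.5ft ground feature spans ~{detail_on_film_mm:.2f}mm on the film plane")
    # Roughly 1:4,400 and ~0.17mm, a detail size that large-format aerial film
    # of the era could comfortably record.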

The view from 70,000 feet out the window of a U-2 is quite spectacular
On April 30, it would seem a U-2 was just casually going about its (military) business high above southern California. Usually aircraft above 60,000 feet (FL600) are not required to communicate with air traffic control, instead operating under "visual flight rules" (they have to look out the window and make sure they don't hit anyone else). For some reason, the air traffic control software at Los Angeles' Area Control Center bugged out and thought the plane was actually flying at 10,000 feet. The software, Lockheed's ERAM (En Route Automation Modernization), then overloaded while trying to plot a safe route through the hundreds of planes currently in the skies around California.
It isn't known why ERAM decided to crash this time, but not the dozens of other times that the U-2 has been over US skies. It's possible that the U-2's transponder bugged out and responded with an erroneous altitude. Or maybe ERAM, just like every other piece of software in the world, has some bugs that need to be worked out. With this being a military operation, it's entirely possible that something slightly more interesting or nefarious was afoot — but I doubt the USAF would tell us if the U-2 was trying out a new air traffic control jamming device…
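None of this is ERAM's actual logic, but a toy example shows why a misread altitude matters so much: an aircraft believed to be at 10,000 feet has to be checked against every other aircraft near that level, whereas one at 60,000-plus feet has that slice of sky essentially to itself. The traffic below is randomly generated for illustration.

    import random

    random.seed(42)
    # Simulated traffic over a busy region: 500 aircraft between 1,000 and 40,000 feet.
    traffic_altitudes_ft = [random.uniform(1_000, 40_000) for _ in range(500)]

    def conflict_candidates(target_alt_ft: float, altitudes,
                            vert_sep_ft: float = 1_000) -> int:
        """Count aircraft within the vertical band that would need a full conflict check."""
        return sum(1 for alt in altitudes if abs(alt - target_alt_ft) <= vert_sep_ft)

    print("Believed at 10,000 ft:",
          conflict_candidates(10_000, traffic_altitudes_ft), "candidates")
    print("Actually at 60,000 ft:",
          conflict_candidates(60_000, traffic_altitudes_ft), "candidates")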