Thursday, May 29, 2008

Europe is not preparing to impose sales limits on Intel

BetaNews has confirmed this morning that a widely disseminated story first published by Financial Times Deutschland, which said the European Commission was preparing to issue sales restrictions on Intel, is false.

The FTD story, which appears here in a poor translation from German, stated that the EC was preparing to issue a decision that would force Intel to refrain from giving preferred customers in Europe any kind of discounts. Allegations that Intel plays favorites with certain customers -- particularly rewarding them for choosing Intel over AMD -- have been central to AMD's worldwide antitrust battle with the market share leader.

Using adjectives he isn't normally known for, EC spokesperson Jonathan Todd called the latest FTD report "irresponsible journalism" in a statement to the press this afternoon, Brussels time.

In response to BetaNews' inquiry this morning, Todd reiterated, "There is no preliminary or internal decision on the case, and we have an active, ongoing antitrust investigation."

Intel corporate spokesperson Chuck Mulloy showed BetaNews this morning his initial statement on the matter, which began, "To the best of our knowledge, no decision has been made." Mulloy indicated that Intel's assessment now appears to have been vindicated, and added that any and all reports of such a decision should be treated as speculation.

To that end, he declined to comment on whether Intel expects its market share to remain steady were the EC to impose such measures.

TiVo partners with CinemaNow, gets Disney on demand

TiVo has announced that all broadband-connected TiVo Series2 and Series3 users will gain Disney on-demand rentals later this year, the result of a new partnership with online content distributor CinemaNow.

This partnership adds to TiVo's existing content deals with Amazon's Unbox, Music Choice, and Jaman, as well as upcoming support for YouTube and other popular Web video services. It looks to be part of a broader move to convert TiVo from a simple DVR into more of a media hub, offering an increased amount of on-demand content along with access to users' personal media.

Though downloadable content through Amazon Unbox has been available for a while, it has thus far offered no content with HD picture quality or Dolby 5.1 audio. Today's Disney and CinemaNow announcement notes that though most of the releases later this year will be in standard definition, select rentals will be in HD. Like Amazon Unbox rentals, however, the content will only be available for 24 hours.

When weighed against the increasing number of dedicated on-demand set-top boxes like those provided by Netflix, Apple, and Vudu, TiVo's claimed volume of content puts it far ahead of its competitors.

Will Verizon's FiOS TV in NY dent US cable monopolies?

Although fuller implementation of FiOS won't be easy, Verizon might soon be giving Time Warner and Cablevision some real cable TV competition in New York City, now that a committee has given its okay to a sweeping franchise plan.

By June 30, 2014, all residents of all five boroughs of New York City will have access to FiOS cable TV, as a result of a vote taken yesterday by the city's Franchise and Concession Review Committee.

Right now, Verizon is only reaching about 20 percent of New York City households with FiOS, and most of those households are located in Manhattan or Staten Island. Furthermore, Verizon's current FiOS coverage in New York City includes Internet access only, not cable TV.

Still, to make FiOS TV actually begin to happen in the Big Apple, Verizon must first jump through the remaining hoops in the regulatory approval process. That means obtaining final approval from both the Mayor's Office of the City of New York and the New York State Public Service Commission.

Verizon contends that the deal will help to fight a monopolistic approach to cable TV pervading not just New York City but other parts of the US.

"After many years, real choice for TV is closer to reality for New York City residents," said Monica Azare, Verizon's senior VP for New York and Connecticut, in a statement. "When our proposal is fully approved, New York will be the first major city in the nation to break the cable TV monopoly and bring the network of the future to its residents today."

The extensive proposal also "includes, but is not limited to, provisions regarding consumer protection, public, educational, and government channels, and a schedule for deployment and service availability," according to a document posted on the committee's Web site.

Today, most New York City households have only one cable provider, even though two major cable companies operate in the city.

Time Warner covers Manhattan, Staten Island, Queens, and the western part of Brooklyn. Cablevision, on the other hand, services the Bronx and eastern Brooklyn.

If the proposed 12-year franchise deal goes through, however, Verizon will start offering FiOS TV in sections of all five boroughs by the close of 2008, Azare said. Still, under the terms of the agreement, only Staten Island will be almost entirely FiOS TV-ready by year's end, at 98% coverage.

Within this calendar year, coverage will be extended to 57% of Manhattan, but only to 15% of Queens, 13% of the Bronx, and 12% of Brooklyn.

Also, even if Verizon clears all of the approval hurdles, implementing the plan could be costly. If Verizon fails to meet its annual coverage goals, it will be charged fines of millions of dollars per year.

Unless 29% of all five boroughs is wired for FiOS by the end of this year, for example, Verizon will be fined $35 million. A total of 79% of New York City must be FiOS-ready by 2012, or the company will have to cough up $10 million.

Fines will keep being levied through 2014, when Verizon will be charged $1 million unless 100 percent of all five boroughs are covered by that time.

In any case, Verizon will also be required to pay the city $4 million for wiring municipal facilities, along with an annual franchise fee equivalent to 5% of Verizon New York's gross annual cable revenues.

Blu-ray recorders doing well in Japan, players struggle

While Blu-ray recorders now outsell their standard DVD counterparts in Japan, research firm NPD reports that outside of the PS3, Blu-ray players are not selling well in the US.

Japanese research firm BCN said that revenues from sales of Blu-ray recorders in that country have more than tripled since January, when high-definition models comprised only 12.4% of all sales -- and that figure included HD DVD hardware.

At least in Japan, the data shows that consumers did indeed hold back on purchases of high-definition equipment while the two formats duked it out for supremacy.

BCN says it expects sales of Blu-ray recorders to continue to rise as the Beijing Olympics near and consumers look to record the events in high definition.

While supporters of the format may be quick to point to the news as evidence it is moving forward, NPD says not so fast. In the period from January to February of this year, sales in the US decreased by some 40 percent, and only managed to crawl back up by two percent from February through March, according to a report from NPD analysts including director Ross Rubin.

These numbers reflect unit sales (as opposed to revenues) for stand-alone players (as opposed to game consoles or PC drives) in the US (as opposed to Japan or worldwide). Sony's saving grace, though, may be the Blu-ray-enabled PS3, which continues to see increasing demand thanks to better marketing and lower prices.

Even so, most analysts say it will be at least a year, if not more, before Blu-ray catches on with the average consumer. That's not stopping companies like Amazon from attempting to draw them in. Recently, the online retailer put about 116 titles on sale at savings of up to 50% off the list price. With the discounts, disc prices are roughly the same as those of the standard DVD versions.

Windows Live services on Nokia S60 coming to the US

It was 2006 when Nokia and Microsoft first announced a partnership that would bring Microsoft's software to cell phones, and 2007 when it was announced a second time. At last, the results of their pairing will soon be appreciated in America.

A Microsoft spokesperson told BetaNews this morning that the company's Windows Live services for Nokia's Symbian S60-based cell phones, already available in 25 countries, will expand to 33 countries by the end of the day today, with the United States among the new additions.

A BetaNews check of Nokia's Web site for Windows Live services at 1:00 pm EDT this afternoon revealed the seven new countries -- which also include Hungary, Iceland, India, Israel, Poland, and Romania -- had not yet been added to Nokia's list. Nokia phone owners are being asked to download Windows Live services from Nokia's region-specific Downloads list.

An English-language instructions page sent by Nokia to its customers asks them to manually refresh their list of available downloads, and then download the main service from the updated list; it should appear as WinLive. Once that has been downloaded and activated, customers can then download each Windows Live service individually, including Windows Live Messenger (not Windows Messenger), Windows Live Hotmail (not Windows Live Mail), Live Contacts, and the service's social network, Windows Live Spaces.

A page from Windows Live Contacts, one of Microsoft's services now being offered to Nokia S60 phone users in the US. (Courtesy Microsoft)

While services are initially free, a notice buried deep on Nokia's Web site today indicated that Windows Live Messenger users may, at some point, be notified that they have to pay a fee. That is likely because customers will be interested in using the service to bypass SMS text messaging, which continues to be a profit center for many carriers.

A complete list of compatible Nokia S60 handsets appears here. The company warns that N73 users must first upgrade to the latest version of the system firmware.

The effort to endow Nokia phones with Microsoft's software began in September 2006, though both companies officially re-announced their partnership a full 11 months later. At that time, the first Nokia S60 phones with Windows Live services began cropping up throughout Europe.

Sprint joins an initiative to promote 4G 100 Gbps networks

Sprint Nextel and NetLogic Microsystems today announced they will offer their guidance to a new nonprofit organization to aid the development and adoption of new networking platforms from 40 Gbps products up to 100 Gbps and beyond.

The Road to 100G Alliance, first introduced at NXTcomm 2007, continues its aim of developing a set of standards for interoperability in the still chaotic world of high capacity data networks.

The Alliance was founded by Bay Microsystems, Lattice Semiconductor, Enigma Semiconductor, IDT, and IP Infusion.

Even though Sprint has been losing customers in its mobile phone service division, the company has most recently been promoting its Xohm WiMAX network, which is scheduled to beat Verizon Wireless and AT&T to 4G. The company still has a long way to go to reclaim a leadership position among customers in the wireless industry, and hopes its efforts in the Alliance will help it regain momentum.

Sprint's participation in the Alliance is expected to involve upgrading its current networks to at least 40 Gbps. Verizon has already upgraded some of its 10 Gbps networks to 40 Gbps, including the network operating between New York City and Washington, DC.

In 2006, there were more than 229 million broadband subscribers online, 60 million of whom were added during the 12-month period in which the statistics were collected. Research published on the Alliance's Web site indicates there will be at least 350 million broadband subscribers in 2009, with almost 300 million of those using fiber-optic connections by 2010.

To help companies deal with such explosive growth, the Alliance hopes to foster an ecosystem able to more quickly adopt and deploy new systems for managing increased traffic. Contributing companies also help provide education and application support to Network OEMs and service providers who are helping roll out these networks to businesses and home users.

There are several other groups pushing for future standards, though disagreements and poor organization have caused some of them to falter temporarily. For example, the IEEE Higher Speed Study Group last year proposed 100 Gbps as a benchmark for the future, but several members of that group disagreed, saying 40 Gbps would be ideal for the immediate future. The disagreement caused a temporary stall that later led some industry experts to conclude that a 40 Gbps network scalable up to 100 Gbps may be preferable.

To help draw attention to the organization, Alliance members will host a panel discussion during the NXTcomm 2008 conference in mid-June, along with Interop New York in September.

Windows 7 multi-touch SDK being readied for PDC in October

As details continue to emerge about Microsoft's evidently well-laid plans for its next operating system, we learn that full documentation of how multi-touch capabilities will work in Windows will be ready for demonstration by this fall.

At Microsoft's next Professional Developers' Conference, currently scheduled for late October in Los Angeles, the company plans to demonstrate the use of a software development kit for producing multi-touch applications for Windows 7. Such applications would follow the model unveiled yesterday by executives Bill Gates and Steve Ballmer at a Wall Street Journal technology conference in Carlsbad, California.

For the session tentatively entitled "Windows 7: Touch Computing," the PDC Web site -- which went live just this morning -- offers this description: "In Windows 7, innovative touch and gesture support will enable more direct and natural interaction in your applications. This session will highlight the new multi-touch gesture APIs and explain how you can leverage them in your applications."

We were surprised to find the PDC site reads better when viewed in Internet Explorer.

The early suggestion from Microsoft's developers -- some of whom have been openly hinting since last December that multi-touch was coming to Windows 7 -- is that the next version of Windows will be endowed with technology that emerged from the company's Surface project, its first to implement such controls. Surface is actually an extension of the Windows Vista platform -- specifically, it's the Windows Presentation Foundation extended so that it sees a surface display device as essentially just another container control, with an expanded list of supported graphic devices.

What is not known at this stage is how much today's Windows Vista will have to be extended to enable multi-touch in Windows 7, especially for the sake of backward compatibility with existing and earlier applications.

Prior to the advent of Windows XP, when applications were largely compiled using Microsoft Foundation Classes (MFC), application windows were very generic containers with standardized window gadgets and menu bars. When a developer used the standard MFC library, he could be assured that scroll bars could respond to mouse events and that contents that spilled off the edge of the visible area would not, as a result, descend into some invisible twilight zone.

Holding that MFC fabric together was the concept that graphic elements responded to individual events, often called "mouse events." And the basic premise of a mouse event was that it had to do with a single element positioned at one spot, or one set of coordinates, on the screen. A keyboard event could alternatively trigger the same response (pressing Enter while the highlight was over "OK," for example), but the developer would only have to write one event handler for managing what happened after clicking on OK.

The first touch sensitivity in Windows came by way of Tablet PC, which was a platform extension to Windows coupled with a series of drivers. Adding a stylus as a new input device could indeed change the way applications themselves worked; they could add all kinds of new gadgets that would have been pointless under mouse control alone.

In addition, Microsoft opened up a wide array of so-called semantic gestures: a library of simple things one could do with a stylus that could potentially mean something within an application. For example, scratching on top of a word could be taken to mean, "Delete this word." Drawing a long arrow beside a graphic object could mean, "Please move this object over here." It all depended on how the application developer wanted the user to see things; and while there were certainly some good suggestions, there was nothing like the level of standardization prescribed by IBM's Common User Access model (PDF available here) of the early 1990s.

However, outside of the application's native context, whatever a stylus can do in the Windows workspace is relegated to substituting for a mouse event. In other words, the Windows desktop was not supposed to know or care whether the user was operating a mouse, a keyboard, or a stylus, just as long as the same events were triggered.

For instance, a tap of the stylus on the surface will send an event whose constant code in Visual Studio is WM_LBUTTONDOWN, followed immediately by WM_LBUTTONUP, as though the user had pressed and released the left mouse button (the "L" in these constant codes). By comparison, holding down the pen on the surface will trigger the WM_RBUTTONDOWN event just after the time the pen touches the surface, followed by WM_RBUTTONUP when the user lifts it from the surface. However Windows would normally respond to a left or right button click, respectively, is how the Tablet PC developer would expect Windows to respond to a stylus tap or a press-and-hold.
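To make that device-independence concrete, here is a minimal Win32 sketch of our own -- an illustration of the messaging model described above, not code from Microsoft or from the Tablet PC SDK -- in which the window procedure handles those same message constants without knowing or caring whether a mouse or a stylus generated them:

    #include <windows.h>
    #include <windowsx.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg)
        {
        case WM_LBUTTONDOWN:  // left mouse press -- or a stylus tapping the screen
        {
            // One event, one set of coordinates: the single-point model
            // described above. Nothing here reveals which device fired it.
            int x = GET_X_LPARAM(lp);
            int y = GET_Y_LPARAM(lp);
            (void)x; (void)y;  // a real application would act on the point here
            return 0;
        }
        case WM_LBUTTONUP:    // left release -- or the stylus lifting after a tap
            return 0;
        case WM_RBUTTONDOWN:  // right press -- or a stylus press-and-hold
            return 0;
        case WM_RBUTTONUP:    // right release -- or the stylus lifting afterward
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.lpszClassName = TEXT("TapOrClickDemo");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(TEXT("TapOrClickDemo"), TEXT("Tap or click"),
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                 400, 300, NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, nShow);

        MSG m;
        while (GetMessage(&m, NULL, 0, 0))
        {
            TranslateMessage(&m);
            DispatchMessage(&m);
        }
        return 0;
    }

Run on a Tablet PC, a stylus tap and a left mouse click arrive at this procedure as the same WM_LBUTTONDOWN/WM_LBUTTONUP pair. That substitution is exactly what keeps legacy applications working -- and exactly what a single-point message cannot express about multi-touch.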

Here, because standard Windows functions must be capable of working reasonably within a Tablet PC environment, the interface between the general functions and the outside world is standardized.

Since that time, we've seen the advent of Windows Presentation Foundation, a little piece of which is distributed with every copy of Silverlight. An application built to support WPF operates under a new set of rules.

As we saw last year with the first demonstrations of Surface development, a gadget that can be used in a Surface application can essentially be the same gadget used in everyday Windows, just wrapped within a new and more versatile container. That container can then be assigned to the Surface container, which is an alternate space that doesn't have to abide by all the rules of the Windows desktop. There, most importantly, a gadget can be sensitive to more than one thing happening at a time; it can register something that takes place on multiple sets of screen coordinates (generally two) as a single event -- something which MFC could never do.

In the Surface world, as Microsoft's first demos showed, a gadget can be stretched and shrunk using two-handed or two-fingered gestures. It can be tossed around and spun, and depending on the level of physics in play at the time, gadgets can pretend to adhere to laws of gravity. This way a Surface display hanging on a wall, for instance, can contain gadgets which, when pinned, descend toward the floor rather than float as if in space.

These are the types of extensions made possible by WPF, and many of these same types of extensions were seen in the videos released yesterday by Microsoft, including windows that spin around -- something typical application windows in Windows have never done before.

But as the Surface demo showed, the world inside Surface works essentially by registering itself within the underlying Windows kernel as a world within a world. It is an application, as far as Windows knows; and like a Tablet PC app that enables semantic gestures where the rest of Windows won't, a Surface demo is a world of enhanced physics, the likes of which have never been attempted on a Windows desktop.

So the question becomes this: What type of world is Windows 7? Will it adopt a Tablet PC-like model, where the real gist of the enhancements is available only to applications that are "multi-touch-aware?" Or can it open existing Windows applications to the realm of touch sensitivity? Put another way: Could today's Office 2007, running in Windows 7, allow its main application window to be stretched by two hands? Or will the types of functions we saw yesterday only be feasible for developers using the new Windows 7 multi-touch SDK, the existence of which was first confirmed this morning?

We may not know the answer next month, when Microsoft throws its TechEd conference in Orlando. But we know that we will know it by October; and we can infer from this news that at least low-level Windows 7 development kits will be distributable to developers this fall.

Sony partners with cable providers on digital cable ready TVs

The electronics maker said Tuesday that it will work with six major cable operators to include digital cable technology in its next-generation television sets.

With the new sets, consumers will no longer be required to use a set-top box in order to receive advanced services. Sony has penned an agreement with Comcast, Time Warner, Cox, Charter, Cablevision, and Bright House Networks, which collectively provide service to about 82 percent of cable-receiving households. The agreement is essentially a memorandum of understanding on how channel guide and digital program delivery technology will be rolled out to the consumer.

"We are very pleased with this announcement," CableLabs spokesperson Mike Schwartz told BetaNews.

The technology used is called tru2way (formerly known as OpenCable or OCAP), which first rolled out at CES in January. At the time, Sony was not listed among the partners, although Panasonic and LG showed off television models based on the technology.

Peripherally, the technology is related to CableCARD; CableLabs is the organization behind both. The difference here is that CableCARD is a removable unit for retail devices, whereas tru2way is the middleware that controls the functionality itself.

The technology eliminates the need for an additional converter or receiver box, simplifying the cable installation process and allowing for the use of advanced features such as on-demand, DVR functions, and program guides.

Sony's move could very well solidify tru2way as the standard for non-set-top-box deployments. While CableCARD was intended to do the same thing, it did not garner enough support from the industry to make it viable.

The potential losers from these deals are Motorola and Scientific Atlanta, both of which earn a significant portion of their revenues from the sale of set-top boxes.

But more importantly, the deal puts Sony in direct competition with companies such as Motorola, TiVo, and Macrovision, all of which have a stake not only in pushing switching technology to the nation's households, but in controlling the slate of programming piped through to their digital TVs.

HP's newest power-conserving ProLiant crams two servers in each blade

The newly crowned server king hopes to continue its success over IBM and Dell with a new two-in-one server aimed for the cloud.

After seeing the growing demand for cloud computing, Hewlett-Packard has thrown its hat into the ring with the announcement of the company's first two-in-one blade server, touting power reduction by pairing two servers in each blade.

HP's new ProLiant BL2x220c G5 features higher server densities the company hopes will make it ideal for cloud computing, Web 2.0, and high-performance computing workloads. The new HP line can scale up to 128 servers, 1,024 CPU cores, and 2 TB of RAM in a rack of four 10U blade enclosures.
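The density claim is easy to sanity-check. In the sketch below, the 16-blades-per-enclosure and 16 GB-per-server figures are our assumptions about the configuration, not numbers from HP's announcement:

    #include <cstdio>

    int main()
    {
        const int enclosures      = 4;   // four 10U blade enclosures in a rack
        const int bladesPerEncl   = 16;  // assumed half-height blade slots per enclosure
        const int serversPerBlade = 2;   // the BL2x220c's two-in-one design
        const int coresPerServer  = 8;   // two quad-core Xeon 5400-series chips
        const int ramPerServerGB  = 16;  // assumed maximum memory per server

        const int servers = enclosures * bladesPerEncl * serversPerBlade;
        std::printf("servers: %d, cores: %d, RAM: %d GB\n",
                    servers, servers * coresPerServer, servers * ramPerServerGB);
        // Prints: servers: 128, cores: 1024, RAM: 2048 GB -- matching the
        // 128-server, 1,024-core, 2 TB figures quoted above.
        return 0;
    }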

Each server in the blade can use two Intel Xeon 5200 series dual-core chips, or two Intel Xeon 5400 series quad-core chips, depending on a customer's budget. The starting price is set at $6,349, but configurations can scale well past the $20,000 point.

HP's two-server-per-blade ProLiant BL2x220c G5

HP's new line will directly compete with the IBM iDataPlex line of servers, which launched last month specifically for the Web 2.0 and cloud markets. Using Intel Xeon processors, the iDataPlex line features 1U or 2U servers with up to 84 servers per rack.

With more data anticipated to be stored in "the cloud" -- that ubiquitous storage space on the Internet being parceled out to customers a gigabyte at a time -- someone has to be the cloud. The demand for higher speed plus greater storage on the server side is the catalyst behind HP's new "Scalable Computing and Infrastructure" business division, with scale-out hardware its main focus. Even though smaller companies have utilized blade servers in the past, the Scalable Computing and Infrastructure division will focus mainly on companies that have hundreds or thousands of servers in their farms.

HP is widely perceived as holding its lead over Dell as the global leader in PC sales in Q1 2008, and is now focused on keeping its top spot as server king after recently beating out IBM. Gartner industry numbers indicate HP now has 29.6% of the server market while IBM has a very close 28.9%, and Dell may be on the comeback trail with 12.1%.

The ProLiant BL2x220c G5 will be the first of multiple new servers designed specifically for Web 2.0 and cloud computing, although HP has not yet announced launch plans for the others.

Google opens access to its App Engine, plans more Web tools

At its I/O developer gathering in San Francisco this week, Google offered more details about Android and Google Web Toolkit, while also opening up access to "everyone" for its new Google App Engine hosted development environment.

Not at all surprisingly, Google is delivering at least two conference sessions specific to Android, a controversial open source platform aimed at helping developers to create mobile applications that will interoperate across handheld devices from multiple vendors.

But other conference fare includes, for example, "Rapid Development with Python, Django and Google App Engine," "Getting Started with Google App Engine -- On Your Mac," and "Extend the Reach of Your Google Apps Environment with Google APIs."

Rolled out in mid-April, Google App Engine is a cloud-based hosted development environment, designed to let developers build applications on the same infrastructure that fuels Google's own applications.

"The goal is to make it easy to get started with a new web app, and then [to] make it easy to scale when that app reaches the point where it's receiving significant traffic and has millions of users," maintained Paul McDonald, a Google product manager.

More than 150,000 developers have signed up for the App Engine over the past six weeks, with most of them landing on a waiting list. But Google announced yesterday that, starting today, a free preview edition of the engine will be available to everyone, with no wait required.

As of about 3:15 pm PDT today, the App Engine download site still told visitors that it was only available to the first 10,000 developers who signed up for the software.

"[But] sign-ups [were] OPEN to all as of this morning (PT)," a Google spokesperson told BetaNews. "The language you saw on the site is old and will be updated shortly. You should be able to sign up [already] despite the language."

The preview period is expected to end later this year, according to Google officials. Until then, developers are required to stick to a "free quota" consisting of 500 MB of storage, and enough bandwidth to support around 5 million pageviews per month.

Google has also announced pricing to take effect after the free preview period ends; it will be based on CPU core-hours, outgoing bandwidth, incoming bandwidth, and monthly storage capacity.

But as McDonald wrote in his blog in mid-April, the preview release "is by no means feature-complete." Google is still seeking feedback from developers on what to include in the final release, BetaNews was told.

In the weeks ahead, Google plans to release new image-manipulation and memory cache APIs for Google App Engine.

In addition, Google is now targeting later this week for the availability of Release Candidate 1.5 of the Google Web Toolkit, a Java-based environment that lets developers write applications in Java that are compiled into JavaScript and deployed as AJAX applications.

Google Health is one recently introduced application built with Web Toolkit. As previously reported in BetaNews, Google CEO Eric Schmidt told medical practitioners at a health care conference in March that third-party developers are also creating specialized applications for Google Health, a Web-based "cloud" initiative envisioned as storing consumers' health information online.

New features in Toolkit 1.5 will include Java 5 language support, a faster compiler, and more software libraries to help with building AJAX apps.

Special conference sessions are also on the agenda for YouTube and Google Maps development.

Google's OpenSocial API sandbox, just rolled out last week, is also sure to be a topic of conversation at Google I/O this week. The sandbox is intended to guide developers through the process of building and distributing interactive gadgets.

Facebook, Google's competitor in the social networking space, yesterday countered Google's OpenSocial efforts by announcing that it will convert its own social networking code base into an open source platform.

"The major social networks are competing for the attention of the developer community, as the moves by Facebook and Google clearly show," said Ri Pierce-Grove, an analyst at DataMonitor.

Google Earth's 3D landscapes now available through browser plug-in

With a simple tweak to the JavaScript code that embeds a Google Maps control in a Web page, your site can now have a fully operational Google Earth control.

The zooming, scaling 3D satellite views of Planet Earth have already become a fixture on TV and Internet news sites, giving viewers the most photo-realistic views of the world's hot spots like Iraq, Afghanistan, and China. Now, Google's 3D maps are finding a new home along with most of Google's other popular tools: in the Web browser.

BetaNews' initial tests this afternoon involved Firefox 3.0 Release Candidate 1 -- which, arguably, may not be the most stable platform for such a test. Right away, we noticed one small problem: After you've downloaded and launched the separate binary file (presently available only from Google's test page), and then taken the option to restart the browser, Firefox 3.0 RC1 loses its list of currently open Web pages. Being able to reload suspended Web pages is a key feature of RC1.

Once we reloaded the Earth test page in Internet Explorer 7, we saw better results. Google Earth essentially works in the browser exactly as in the stand-alone application, with the left mouse button letting you grab the globe and shift it left or right, and the right mouse button letting you zoom in and out and rotate.

The writer's home state appears strangely glazed in the latest scan from Google Earth.

Since Google's map data comes from multiple sources (whose names are automatically revealed along the bottom edge whenever their images are visible), one of the strange side-effects you may notice is that the Earth really can look -- inadvertently, of course -- like a political map. It's almost as if someone came in from outer space with a gargantuan pastry brush and painted an egg glaze over Indiana.

Google Earth does enable true 3D simulations of some important landmarks, except for this one.

The new plug-in could make it easier for some to pass the time scouting the world for famous landmarks. Not all landmarks are rendered in 3D yet (and with very important examples like this, one may well wonder why not), though topography is rendered in three dimensions, which -- coupled with the angle of the sun over hills and mountains, tilting ever so slowly and smoothly -- can create an astonishingly realistic effect.

As lead developer Paul Rademacher wrote for Google's "Lat Long" blog this morning, "If you already are one of the 150,000 Maps API sites, and now want to 3D-enable it, we've made that possible with just a single line of JavaScript: Just add the new G_SATELLITE_3D_MAP map type to your MapsAPI initialization code, and (for most common usages of Maps API) your site will 'automagically' support Google Earth via a button in the maps view, with all your existing 2D map code now functioning in 3D as well."

Wednesday, May 28, 2008

Leaked Screen Shots of Windows 7 Hit CrunchGear’s Inbox

If you’ve been waiting to see what Windows 7 will look like, you may want to head over to CrunchGear to check out a bevy of screen shots that hit our inbox earlier today. Of course, the release is a couple of years out, but we’ve confirmed that this is what the current build of Windows 7 looks like. Coincidentally, Microsoft’s Steven Sinofsky was interviewed by CNET about Windows 7, but gave few, if any, details on the subject. As the saying goes, though, a picture is worth a thousand words.

iPhone's reach expands into Nordic states

Swedish mobile firm TeliaSonera has struck a deal to bring the iPhone to seven countries in the region later this year.

In addition to Sweden, the company has operations in Norway, Denmark, Finland, Lithuania, Latvia, and Estonia. No specific launch date has been set, but the deal expands the device's reach to much of Europe.

The Cupertino company also apparently hopes to bring the device to the Netherlands: Reuters reported Tuesday that carrier Royal KPN NV is in talks with Apple to offer the device there.

Apple likely has hopes that the expanded presence on the continent will help drive sales. To date, Europe has lagged the US by far in iPhone sales. Analysts believe the reason has much to do with price.

In Europe, phone subsidies are quite common. Because the iPhone is sold unsubsidized, many cellular customers there are balking at its high price compared to similarly featured -- and subsidized -- smartphones.

Elsewhere, there are still major markets left untapped by Apple, most notably Russia and China. While no carriers have shown interest in Russia, China seems to be an altogether different story. Reports indicate that China Mobile appears most interested, but the sticking points seem to be the revenue-sharing requirements Apple places in its contracts.

With Apple apparently easing those requirements, as well as exclusivity clauses, China may see the iPhone in the not too distant future.

Dell found guilty in New York of misleading, harassing customers

Dell on Tuesday lost a major judgment in New York, in a case that centered on its financing practices, in which it was accused of defrauding and even harassing some of its customers.

The case was brought by the state's attorney general, Andrew Cuomo, one year ago. It alleged that Dell failed to provide "zero-percent financing" to as many as 85% of the customers to whom that rate was promised, or who were otherwise entitled to it. Dell then failed, the suit alleged, to provide customers with the support to which they were clearly entitled.

"Respondent Dell has engaged in repeated misleading, deceptive and unlawful business conduct, including false and deceptive advertising of financing promotions and the terms of warranties, fraudulent, misleading, and deceptive practices in credit financing and failure to provide warranty service and rebates," states this afternoon's ruling from Justice Joseph Teresi (PDF available here).

Justice Teresi went on to say certain petitioners were entitled to restitution, though the amount has yet to be determined.

According to a statement from A-G Cuomo's office this afternoon, Justice Teresi apparently agreed with evidence showing that Dell customers were not informed they could qualify for lower interest rates or better terms, and were instead charged as much as 20% interest on their purchases. Those who complained, Teresi found, were subjected to illegal harassment and false billing.

To help make the state's case, petitioners submitted several Dell advertisements. "The ads offer such promotions as free flat panel monitors, additional memory, significant rebates and instant discounts in very large point print in contrasting color," Justice Teresi wrote.

"They also include offers of very attractive financing, such as no interest and no payments for a specified period of time in prominent positions and similar large fonts and colors. While there is fine print below the financing offers limiting them to 'well qualified' customers, and after certain litigation, 'best qualified' customers, nothing in the ads indicate what standards are used to determine whether a customer is well qualified.

"There is also no indication of how many customers are likely actually to qualify," the ruling continues. "Petitioner's submissions indicate that as few as 7% of New York applicants qualified for some promotions. Petitioner has submitted several affidavits from consumers alleging that they saw these ads and were persuaded to call or access Dell's Internet site to shop for a computer because of the financing promotions. However, most applicants, if approved for credit, were offered very high interest rate revolving credit accounts ranging from approximately 16% up to almost 30% interest without the prominently advertised promotional interest deferral."

In his statement this afternoon, Cuomo said, "For too long at Dell the promise of customer service was a bait and switch that left thousands of people paying for essentially no service at all. We have won an important victory that will force Dell to live up to its responsibilities and pay back its customers for profits that were pocketed but not deserved. This decision sends an important message that all corporations will be held accountable for the promises they make to consumers."

This evening, Dell spokesperson Jess Blackburn issued this statement to BetaNews on behalf of the company: "We don't agree with this decision and will be defending our position vigorously. Our goal has been, and continues to be, to provide the best customer experience possible. We are confident that when the proceedings are finally completed the court will determine that only a relatively small number of customers have been affected."

Facebook confirms plans for open source platform

A Facebook spokesperson this afternoon confirmed rumors circulating all day long that the social networking site will turn its year-old developers' platform into an open source project.

Following on the heels of last week's announcement that Facebook is opening a new developers' sandbox, rumors of the impending open source initiative were first published late last night by blogger Michael Arrington on TechCrunch.

In response to an inquiry from BetaNews, a Facebook spokesperson sent a written response by e-mail late this afternoon.

"We're working on an open source initiative that is meant to help
application developers better understand Facebook Platform and more
easily build applications, whether it's by running their own test
servers, building tools, or optimizing their applications," according to the Facebook spokesperson.

"As Facebook Platform continues to mature, open-sourcing the infrastructure behind it is a natural step so developers can build richer social applications and share what they've learned with the ecosystem. Additional details will
be released soon."

Ri Pierce-Grove, an analyst at DataMonitor, told BetaNews this afternoon that Facebook's newly revealed plans for an open source initiative "represent Facebook's realization that its strength is in its development community as much as in its code."

The Facebook spokesperson didn't comment, however, on another question posed by BetaNews: the relationship between Facebook's open source initiative and the sandbox announced last week, which is meant to familiarize developers with a new user profile design for the Facebook Platform -- a redesign in progress for months and already delayed once last month.

Meanwhile, Google last week unveiled a new OpenSocial API sandbox for guiding developers through the process of building and distributing interactive gadgets.

"The major social networks are competing for the attention of the developer community, as the moves by Facebook and Google clearly show," Pierce-Grove told BetaNews today.

Although Facebook has kept publicly silent about the open source initiative until today, the social network has been much more forthcoming about its plans for a new user profile for the Facebook Platform.

In a blog post on the Facebook site, Facebook's Pete Bratach has been telling developers that the new profiles are now scheduled to launch in June.

In video provided to journalists today from a small "outdoor press event" last week, Facebook's Mark Slee said that the new user profiles are aimed at providing new "integration points" for third-party application developers, while also giving users better profile control and ease of use. Developers will be able to create "custom tabs" for the tab-based UI, for example, he said.

But in the video provided from last week's event, the Facebook developers made no specific mention of open sourcing.

Samsung to sample 256 GB solid state drive in late Q3

The Korean electronics maker showed off its biggest and fastest SSD in the 2.5" category, bringing its solid state drives ever closer to its hard disk drives in capacity.

Samples of the drive will go out to Samsung's clients in September, with release targeted for the end of the year. Also in development is a 1.8-inch version of the same drive, which is slated for fourth quarter 2008 availability.

Drive manufacturers have been investing heavily in SSDs recently. SanDisk was one of the first to ship drives commercially, in 32 and 64 GB capacities, but small capacities and high prices have limited adoption.

The technology is alluring, as it promises many advantages: First, power consumption is much lower, allowing for increased battery life in laptop deployments. Second, the lack of moving parts means a decreased risk of mechanical failure and an improved resistance to shock.

Possibly most attractive is the prospect of solid state memory being faster than spinning platters -- a sore point for Samsung's and Toshiba's competitors in the conventional HDD space, including Seagate. Samsung and several other companies have already introduced half-terabyte 2.5" HDDs this year, but solid state drives are gaining momentum.

Apple's MacBook Air uses a solid-state drive from Samsung, but it is only 64 GB in capacity and carries a price tag ($3,098 USD) nearly twice that of its HDD-based twin. Other laptops have shipped with SSDs, but they are similarly high-priced.

Samsung's 256 GB drive measures in at 2.5 inches and, according to the company, will read at 200 MB/s and write at 160 MB/s. Whether those figures refer to sustained transfer rates is an open question; conventional HDD manufacturers have been known to tout their interface speeds (such as SATA II's 3 Gb/s) as transfer rates, though experienced system builders know better. Power consumption comes in at just under one watt when in use.
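
As a rough reality check, assuming (generously) that the quoted figures are sustained rates, a quick back-of-the-envelope calculation shows how they stack up against the drive's capacity and the interface's actual payload ceiling:

```python
# Back-of-the-envelope check on Samsung's quoted figures, assuming they
# are sustained rates. SATA II's 3 Gb/s line rate uses 8b/10b encoding,
# so its usable payload works out to roughly 300 MB/s.

CAPACITY_GB = 256
READ_MBPS, WRITE_MBPS = 200, 160
SATA2_PAYLOAD_MBPS = 3_000 / 10  # 3 Gb/s line rate / 10 line bits per data byte

full_write_minutes = CAPACITY_GB * 1_000 / WRITE_MBPS / 60
print(f"Filling the drive at {WRITE_MBPS} MB/s: ~{full_write_minutes:.0f} minutes")
print(f"Quoted read speed is {READ_MBPS / SATA2_PAYLOAD_MBPS:.0%} "
      f"of SATA II's ~{SATA2_PAYLOAD_MBPS:.0f} MB/s payload ceiling")
```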

At 9.5 mm (0.37 inches) thick, Samsung's drive could conceivably fit inside the MacBook Air, although the company has not yet mentioned any specific applications for the drive.

Samsung's projected ramp-up comes at a good time for the SSD industry in general: from now until 2012, the market is expected to see as much as 124% annual growth.

International payment bug leads to PayPal horror show

A simple bug in a drop-down menu on PayPal has been preventing international transactions for over twelve days, and users are understandably upset.

"One Time Purchases" between countries remains functional, however, when on the "Subscription Checkout Page," PayPal customers cannot choose their country when entering credit card information during checkout.

Whenever the user changes the entry in the drop-down box, the page refreshes without updating the country information, making it impossible for buyers to complete a subscription transaction. Though it took a week for the site to acknowledge the bug, at 11:00 am today PayPal finally said it had "scheduled fixes to be rolled out to the live site."
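
Based on the behavior described, the failure looks like a server-side handler that re-renders the form from stored defaults and discards the country the user just submitted. A minimal sketch of that failure mode, with hypothetical field names (PayPal's actual implementation is not public):

```python
# Minimal sketch of the failure mode described: on refresh, the handler
# rebuilds the form from stored defaults and ignores the submitted
# country. Field names are hypothetical, not PayPal's actual code.

DEFAULTS = {"country": "US"}

def rerender_checkout_buggy(submitted_form):
    # BUG: reads the country from the defaults, not from the submission,
    # so the page "refreshes" with the selection unchanged.
    return {"country": DEFAULTS["country"]}

def rerender_checkout_fixed(submitted_form):
    # Fix: carry the submitted value through the refresh.
    return {"country": submitted_form.get("country", DEFAULTS["country"])}

print(rerender_checkout_buggy({"country": "LI"}))   # {'country': 'US'}
print(rerender_checkout_fixed({"country": "LI"}))   # {'country': 'LI'}
```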

There is no forecast for how long the fix will take, and merchants who rely on PayPal, such as Clicky Web analytics, are out of luck.

From Clicky's blog: "As a Web developer, I know how easy this problem is to fix. It's a drop down box. This problem could be fixed in a matter of minutes, so why has it taken them 10+ days? How many millions of dollars has this bug cost all of PayPal's customers? I would be so ashamed of myself if I was the one responsible for this bug, that I would quit immediately and go apply for a job flipping burgers, because that would be more my skill level."

Perhaps this is the risk of using an Internet company that is not a bank to handle your payments.

BetaNews has firsthand reports of dealing with PayPal regarding blocked payments from Liechtenstein -- one of the most business-dense countries in the industrialized world -- and support staff had no idea the country existed. One member of the support team suggested it was part of another country.

Sprint says 5 GB per month should be enough for most

After confirming last week that it plans to implement a 5 GB per month overall usage cap on its mobile broadband service, Sprint has seen a flurry of negative comments, and over the weekend it attempted a clarification.

"The vast majority of our current users (about 99.5%) shouldn't be affected" by the usage cap, reads a statement to BetaNews from Sprint public relations manager Roni Singleton over the weekend. "Whether it's the 300 MB roaming limit or the 5 GB limit on total data usage, that's enough data to meet the regular monthly usage habits of almost all of our customers."

The company will check customers' broadband usage once every three months, and "customers would have to exceed the limit in two out of three consecutive months to face termination," Singleton told us. Starting June 8, customers will be able to monitor their data usage online, so that they are fully aware of the amount of data they've used.
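
As Singleton describes it, the termination rule amounts to a simple two-out-of-three check over consecutive months. A minimal sketch, with illustrative usage figures that are not from Sprint:

```python
# Sketch of the termination rule as described: usage is reviewed over
# three consecutive months, and a customer faces termination only if
# the 5 GB cap is exceeded in at least two of them. Sample figures are
# illustrative, not Sprint data.

CAP_GB = 5.0

def faces_termination(three_months_usage_gb):
    """three_months_usage_gb: usage for three consecutive months, in GB."""
    months_over = sum(1 for gb in three_months_usage_gb if gb > CAP_GB)
    return months_over >= 2

print(faces_termination([6.2, 3.1, 4.8]))  # False: over the cap only once
print(faces_termination([6.2, 5.5, 4.8]))  # True: over the cap twice
```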

Usage caps will only be placed on consumer and individual accounts, she continued, and not business contract, corporate, government, or public sector accounts. "We're working on additional processes and pricing to appropriately address the needs of heavy roaming and data users among the corporate liable customer group," she added.

A Sprint statement issued last week reads, "The use of voice and data roaming by a small minority of customers is generating a disproportionately large level of operating expense for the company. We are enforcing the existing terms and conditions for phone plans."

Existing Sprint customers should now be receiving notices attached to their phone bills describing the pending data usage caps, which go into effect 30 days after receipt of the bill. Starting next month, Sprint employees will begin calling customers to confirm they are aware of the changes to the mobile broadband plan.

Several blog posts prior to the Sprint announcement inaccurately claimed the cap was designed to force users to get ready for Sprint's WiMAX 4G network launch, currently slated for later this year.
