News, considerations and discussions on enterprise systems (mainframe and beyond), in Italian
Here we propose some translated posts from the blog, or anything interesting you ask us to translate…
Microsoft moves and castles with Dell!
Here in Rome we say “nun te tieni ‘n cecio ‘n bocca!” Literally it can be translated as “you can’t keep a chickpea in your mouth!” It means that someone can’t keep even a small secret and has to talk about it immediately. Even though this is no secret, I want to talk right away about the news concerning Microsoft and Dell. Repubblica.it reported it with a sensational headline, Microsoft takes over Dell for 24.2 billion with the founder and Silver Lake, while the more measured Wall Street Journal asked itself Why is Microsoft Investing in Dell?, and Microsoft, speaking officially, communicated its Statement on Loan to Support Dell Privatization. I start with the Redmond colossus’s announcement: “Microsoft has provided a $2 billion loan to the group that has proposed to take Dell private.” Here “private” means that the shares are no longer traded on the stock market but privately owned. “Microsoft is committed to the long term success of the entire PC ecosystem and invests heavily in a variety of ways to build that ecosystem for the future. We’re in an industry that is constantly evolving. As always, we will continue to look for opportunities to support partners who are committed to innovating and driving business for their devices and services built on the Microsoft platform.”
According to the WSJ there are two reasons for this move: “The first reason is clear: To keep the Windows ecosystem, under attack from all sides, both vibrant and large. But there’s another less-obvious reason, too: To help wedge Microsoft deeper into corporate computing systems.” The WSJ then summarizes the “intriguing history of these investments.” The first case is Nokia, in which Microsoft invested back in 2011: “The goal then was to get the also-ran Windows Phone software onto Nokia devices. And along the way, Nokia Maps began powering Microsoft’s Bing search engine.”
The article ends by saying that Dell’s servers and its infrastructure-services offering will be the “Trojan Horse” into on-premise enterprise IT departments, increasing Microsoft’s “presence in the enterprise, which is rapidly shifting to adopt cloud services like Salesforce.com”.
Seen this way, Dell seems useful only to Microsoft, now that the Wintel duopoly is under attack (you can read the old Intel and Dell posts in the Almanacco 2012). Undoubtedly, since the ’90s the Redmond company has used its cash to make investments aimed at entering strategic areas. My opinion is that those moves were not supported by a clear vision or strategy. Looking back at the flow of Microsoft’s investments we find: in ’97, $1B in Comcast Corp, a cable provider, sold off in 2009; also in ’97, $150M in Apple, to develop Mac software; in ’99, $5B in AT&T, with the aim of getting its Windows software into set-top boxes, which never happened; a mere $10M in Bungie Studios, the creator of the mega-hit shooter Halo. Microsoft’s investments continued with a Korean telecom in 2001; the Great Plains and Navision acquisitions, contributing business software; $6.3B in the online advertising company aQuantive, admitted to be worthless in 2012; and in 2008, $500M for the acquisition of Danger, which should have allowed Microsoft to enter the phone market but produced, only in 2010, the Kin phone, not a great success (only three years ago, do you remember it?). And again, investments in Yahoo, Nokia, Skype and Nook.
Cables, telecom, videogames, online advertising, school books and now Dell. If a strategy is present, I do not see it.
In the enterprise market I see only another software giant acquiring a hardware producer in poor health, in order to propose bundles that bind customers to its platform; not innovative products that, by offering platform flexibility, allow freedom of choice in defining the software stack.
UPDATE: another interesting article on this subject: Dell’s Privatization—A Move to Build a ‘Mini’ IBM? Doesn’t it remind you of a mainframeitalia post from last year about Dell?
With 66% of sales from PCs, 19% from servers and the tablet competition, Intel will soon have to choose
In a previous post I pointed out that chip manufacturers focus on markets, whether high-volume or specialized, that ensure margins adequate both for profit and for financing research costs. I ended with the conjecture that, without a radical change, Intel would be in trouble. The trouble will come not from direct competitors like AMD, but from two factors: the first is the possible dissolution of the PC market, undermined by smartphones and tablets; the second, in my opinion, is the splitting of the chip market between small, power-frugal, multifunctional devices on one side and datacenter devices with large computing capacity on the other.
In recent days there have been other articles (by people far more authoritative than myself) that have strengthened my view. Forbes, in the late-November post Otellini Nicely Milked Intel’s PC-Chip Business, But Is Justified Pay With Mobile Strategy to Flop?, shows that in 2011 Intel spent 19% of each sales dollar on R&D and that “the company derived 66% and 19% of its revenues, respectively, from the PC/laptop markets and servers in data centers”, while sales for tablets and mobile were only “an anemic 9%. And the division posted a year-to-date operating loss of $882 million”.
So if the PC/laptop market fades in favor of tablets, then in the absence of new decisions my guess of dark days for Intel is not entirely unfounded. Another bad omen for Intel is the end of the Wintel duopoly, after Microsoft gave substance to the development of Windows 8 for ARM.
EETimes, in the article Why Intel should make chips for Apple, Cisco, makes an assumption: the company could become the chip supplier for Apple devices, or for Cisco’s high-end routers. The post’s author, Rick Merritt, sees in these two companies excellent potential for Intel: “both Apple and Cisco have the potential to be much larger customers of Intel than they are today”.
The question I still have, seeing these numbers and considering these alternatives, is whether Intel will be able to sustain the same level of research and development, a level also suited to the market of our datacenters. Will the pivot ability of Andy Grove still be in the DNA of the new Intel CEO?
Is batch a dinosaur in danger of extinction or a shark that swims in the clouds?
Reading a presentation about scheduling software, I felt like an endangered specimen. I am among the few people who remember, having lived it, the night shift in the CED (the data center) as it once was: after the shutdown of IMS and of all the applications used for online activity, it was time for the shift change.
I worked in a square computer room with 60-meter sides (yes, you read that right: it was a single “room” of 3,600 square meters). At the operators’ table, where there were about 15 terminals, the afternoon-shift people left and the schedulers’ staff arrived. Each took a seat behind a console; the two shift managers, who had already launched the JOB that printed the night plans, collected the output, arranged it into many groups of sheets, and began to distribute them. Those sheets were the batch work plans and showed diagrams made of boxes and diamonds, each with a JOB name inside. Each scheduler started from the first sheet of his plan and ran the first JOB or procedure indicated in the first box of the diagram.
On the other side of the computer room there were more operators and the tape units. Depending on the running JOB, they saw on their consoles MOUNT messages requesting that a tape be mounted on a unit with a specific address. The operators then took the tape bearing the required label from prepared carts and mounted it on the tape unit indicated in the message. When the tape operations were finished, an UNMOUNT message told the operators to remove the tape, which could then be placed back in the cart.
Meanwhile, back on the schedulers’ side of the room, they kept an eye on the console and verified the status of the JOBs they had initiated. If a JOB ended with a zero Condition Code, they marked its box on the sheet with a pencil and followed the branch of the diamond that said OK; if the JOB ended with a CC allowed by the diagram, they followed the operations indicated on the sheet; and if the JOB terminated with any other condition, they followed the path starting from the NO branch of the diamond.
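The decision rule each scheduler applied by hand, sheet in one hand and pencil in the other, can be sketched in a few lines. This is only an illustration: the JOB name and the set of “allowed” condition codes below are invented, not taken from any real plan.

```python
# Sketch of the decision rule a scheduler applied for every JOB on the sheet.
# The JOB name and the set of allowed condition codes are invented examples.

def next_step(job_name, condition_code, allowed_codes):
    """Return which branch of the diagram to follow after a JOB ends."""
    if condition_code == 0:
        return "OK"        # mark the box with a pencil, follow the OK branch
    if condition_code in allowed_codes:
        return "ALLOWED"   # follow the operations noted on the sheet
    return "NO"            # any other code: follow the NO branch of the diamond

print(next_step("PAYROLL1", 0, {4}))    # → OK
print(next_step("PAYROLL1", 4, {4}))    # → ALLOWED
print(next_step("PAYROLL1", 12, {4}))   # → NO
```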
Sometimes problems required the intervention of the systems engineers: with their experience and knowledge, they found a way to bypass the problem and continue the work. The schedulers, after filling a sheet with pencil marks, gave it back to the shift managers and took another one to continue their work.
Between shift managers, schedulers, operators and systems engineers, 50 to 60 people spent the night performing all the operations needed to complete the work done by the branch offices during the day, plus everything that was a prerequisite to restarting the online applications the following morning.
In the mid-’80s, scheduling automation software arrived. Operations Planning and Control (OPC), and then the Advanced version OPC/A, reduced the night shifts to only 20–30 people. The software technicians (in Italy they were called proceduristi), who before had simply written the JCL for the night JOBs during the day, became responsible also for building the plans to be executed at night; many operators stopped working shifts and were “promoted” to software technicians. But if at the beginning this was just a matter of convenience, very soon software schedulers became a necessary step to handle the continuous growth in the number of batch JOBs, as the batch window, which could only run from 20:00 to 7:00, went from comfortable to narrower and narrower.
If this scheduling software had limited itself over time to providing a clock and a calendar with which to run only the JOBs defined by JCL, today it would probably resemble a dinosaur close to extinction.
The presentation I read describes the latest release of Tivoli Workload Scheduler (TWS), the current evolution of OPC. It makes clear that this software has expanded its capabilities, adapting to the changing needs of batch environments.
From a single-platform scheduler, it has become a reference point for planning the whole datacenter, thanks to growth along two dimensions. The first, horizontal, has allowed it to include among its managed environments virtually all the platforms present in a datacenter. The second, vertical, extends the conditions used for planning and expands the types of tasks that can be scheduled.
These architectural innovations have introduced what, for simplicity, I will call new batch patterns. In the era described above, I would say there were only two batch patterns:
1) An activity started at a certain time which, once concluded, makes its output available
2) An activity started after the completion of another task which, once concluded, makes its output available
Both patterns share a feature typical of batch: the recipient is not waiting during processing. In online processing, if the user is a person, he is waiting behind the terminal; if it is an application, it is halted waiting for the result.
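The two classic patterns can be sketched as a minimal check a scheduler performs before releasing a JOB. This is an illustration only, not TWS syntax: the job names, the trigger encoding and the `runnable` helper are all invented for the example.

```python
import datetime

# Minimal sketch of the two classic batch patterns; not real TWS syntax.
# Pattern 1: a job triggered at a fixed time.
# Pattern 2: a job triggered by the completion of another job.
jobs = {
    "NIGHTLY_EXTRACT": {"trigger": ("at", datetime.time(20, 0))},
    "LOAD_WAREHOUSE":  {"trigger": ("after", "NIGHTLY_EXTRACT")},
}

def runnable(job, now, completed):
    """Decide whether a job may start, given the time and the finished jobs."""
    kind, value = jobs[job]["trigger"]
    if kind == "at":        # pattern 1: time-based start
        return now >= value
    if kind == "after":     # pattern 2: dependency-based start
        return value in completed
    return False

completed = set()
now = datetime.time(20, 30)
print(runnable("NIGHTLY_EXTRACT", now, completed))  # True: it is past 20:00
print(runnable("LOAD_WAREHOUSE", now, completed))   # False: predecessor not done
completed.add("NIGHTLY_EXTRACT")
print(runnable("LOAD_WAREHOUSE", now, completed))   # True: predecessor completed
```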
The latest TWS can also initiate activities on an event basis and execute them on the best environment available at that moment or, by interfacing with provisioning tools, request the deployment of new environments to perform the required tasks and meet the needed service levels. I could summarize it this way: thanks to these interfaces, TWS has become a metronome able to coordinate the activities of cloud environments, both serving their batch needs and using their flexibility to satisfy the required service levels. To go deeper into this topic, you can download the original presentation from the link provided above and see all the details.
It is clear that what might seem a dinosaur is instead like the shark which, although present for more than 400 million years, still swims in our waters thanks to constant evolution.
This blog’s primary objective is to address professionals working in mainframe environments and large data centers. It is a job I intend to keep doing in Italian, not because of any inability of those who read or write (although on the latter many would find fault, with good reason), but for speed of communication and because I am best suited to the role of facilitator rather than that of expert.
But I cannot ignore the number of people who follow mainframeitalia (or its other name, enterprisesystemsitalia) from abroad. Just take a look at the map on the lower left. There are also many followers, especially on Twitter, who are English speakers. So I jumped at the opportunity offered by a technology, new to me, that lets me periodically group selected thematic content based on mainframeitalia’s activities on this site and on other social networks.
So today begins the publication of what is called The mainframeitalia.com Weekly, mostly in English. Let me have your comments, and in the coming weeks I will decide whether or not to continue.