Category Archives: Computers

Major Website Security Flaw Makes One-Third of Websites Vulnerable

Experts have reported a major flaw in the security software used by millions of Web sites, including those of banks, credit card companies, e-mail and social media services. The flaw, dubbed “Heartbleed,” makes possible the exposure of users’ names and passwords, the content of their communications, and their data to anyone who knows how to exploit the weakness.

It’s as if your front door were unlocked: until the flaw is fixed, anyone who tries the door could get in. We should be clear that this does not mean that anyone has already gained entry or that any of your information has necessarily been stolen. What it does mean is that your information and sites are vulnerable to access, theft and disruption until such time as a fix is applied.

What can you do about it?

The problem is related to software installed on servers. Fixes are available and being implemented by most web service providers. We here at DataPlex are providing advisory services, helping other companies to secure their websites as quickly as possible. Let us know if we can help you.

Once a website has been fixed, it may still be necessary to replace security certificates used for secured communications and change user passwords. We will recommend the extent to which these steps are necessary after the fix has been applied and we have been able to examine the servers in question.

We hope every server on the Internet gets patched, but, alas, that is not typical. Some servers will remain vulnerable, and the only way to tell is to run a test, such as one of the publicly available Heartbleed test sites. Go to such a site and type in the URL of the website you intend to visit, e.g. google.com (Google is safe, this is just an example). You should be concerned only about websites using https, also known as SSL and TLS, or that are simply known as “secured websites.”
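One rough sanity check you can run yourself is to look at when a site’s security certificate was issued: a certificate dated after the site applied its Heartbleed fix is one sign that the site re-keyed after patching. Below is a minimal sketch using Python’s standard ssl module; the hostname and cutoff date in the usage comment are placeholders, and a recent certificate alone does not prove a site is safe.

```python
import socket
import ssl
from datetime import datetime

def parse_not_before(not_before):
    """Parse a certificate's 'notBefore' field, e.g. 'Apr 10 00:00:00 2014 GMT'."""
    return datetime.strptime(not_before, "%b %d %H:%M:%S %Y %Z")

def cert_issued_after(hostname, cutoff, port=443):
    """Fetch a site's TLS certificate and report whether it was issued
    after `cutoff` -- one rough sign the site re-keyed after patching."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return parse_not_before(cert["notBefore"]) > cutoff

# Hypothetical usage: was a site's certificate issued after the public
# Heartbleed disclosure on April 7, 2014?
# cert_issued_after("google.com", datetime(2014, 4, 7))
```

Certificate replacement is exactly the follow-up step described above, so a pre-disclosure issue date on a patched site is worth asking its operator about.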

Our best, and we wish you a safe and productive web presence.

Personal Computing Comes Full Circle

Revisiting the Lessons of Timesharing

by Warren Juran

“Those who cannot remember the past are condemned to repeat it.”  George Santayana might have been talking about how today’s Internet-based computing echoes computer timesharing from the 1960s and 1970s.  How can the lessons learned from that early era of computing help us to design better systems and applications today?  The tale of personal computing’s evolutionary path shows what experienced engineers and developers can bring to the design table.

Once upon a time, computers were so large and expensive that hardly anyone could afford to have one.  Computer users punched their program instructions and data into cards and brought the decks of cards to a computer center.  Operators at the computer center fed the cards into the computer’s input device.  The printed results were available after a considerable wait.  This inconvenient system seemed to take forever to debug software because of the delays after each cycle of correction and re-submission.  There wasn’t much ready-to-use software and the benefits of computing were available to very few.

Computer timesharing let more people enjoy the advantages of “personal computing.”  Hundreds of users could use their own keyboard/printer terminals to share the resources of one central computer.  In the 1960s and 1970s, computer timesharing companies spread around the world, operating large data centers and communications networks to provide dial-up services for their users.  Timesharing companies opened branch offices in large cities and employed armies of salesmen to locate and cultivate new customers.  The new customers could do their own interactive programming, or use extensive libraries of ready-to-use software for their computing needs.

The timesharing companies provided the central computers, communications networks, software libraries, customer support and education, printing services, remote job entry for traditional “batch” computing jobs, client data storage, and a one-stop-shop for word processing, accounting, messaging, engineering analysis and other customer requirements.

Developers of early timesharing systems dealt with issues like utilizing limited bandwidth, providing rapid access to large amounts of data, and ensuring that individual users didn’t monopolize computer resources. Tools like “linked lists,” “sparse matrices,” “hashing,” and priority queuing helped improve timesharing systems. Today’s computers and communication networks are vastly more powerful than their early counterparts, but disciplines like Information Theory, Queuing Theory, Distributed Computing and Peer-to-Peer computing can still enhance system performance and reduce costs.
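To give a flavor of the priority-queuing idea, here is a minimal Python sketch of a job scheduler (a modern illustration of the concept, not code from any actual timesharing system; the job names are invented): lower priority numbers run first, and a counter breaks ties so equal-priority jobs run in the order submitted.

```python
import heapq

class PriorityScheduler:
    """Toy priority-queue scheduler in the spirit of timesharing-era
    schedulers that kept one user from monopolizing the machine."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: equal priorities run in submission order
    def submit(self, priority, job):
        heapq.heappush(self._heap, (priority, self._counter, job))
        self._counter += 1
    def next_job(self):
        # Pop the highest-priority (lowest-numbered) job, or None if idle.
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.submit(2, "overnight-batch-report")  # hypothetical job names
sched.submit(1, "interactive-edit")
sched.submit(2, "print-queue")
# The interactive user is served first despite being submitted second.
```

The same ranking trick (a heap keyed by priority plus an arrival counter) still underlies many modern task queues.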

Cloud Computing Issues

This is a sidebar to our article “Exploring Cloud Computing”.

Here is a rundown on most of the current issues concerning cloud computing:

Security – While a leading-edge cloud services provider will employ data storage and transmission encryption, user authentication, and authorization (data access) practices, many people worry about the vulnerability of remote data to criminals such as hackers, thieves, and disgruntled employees. Cloud providers are enormously sensitive to this issue and apply substantial resources to mitigating these concerns.

Reliability – Some people also worry about whether a cloud service provider is financially stable and whether its data storage system is trustworthy. Most cloud providers attempt to mollify this concern by using redundant storage techniques, but it is still possible that a service could crash or go out of business, leaving users with limited or no access to their data. Diversifying across providers can help alleviate this concern, albeit at a higher cost.

Ownership – Once data has been relegated to the cloud, some people worry that they could lose some or all of their rights or be unable to protect the rights of their customers. Many cloud providers are addressing this issue with well-crafted agreements that favor the user. That said, users would be wise to seek advice from their favorite legal representative. Never use a provider whose terms of service lay any kind of ownership claim to your data.

Data Backup – Cloud providers employ redundant servers and routine data backup processes, but some people worry about being able to control their own backups. Many providers are now offering data dumps onto media or allowing users to back up data through regular downloads.

Data Portability and Conversion – Some people are concerned that, should they wish to switch providers, they may have difficulty transferring data. Porting and converting data depend heavily on the nature of the cloud provider’s data retrieval format, particularly in cases where the format cannot be easily discovered. As service competition grows and open standards become established, the data portability issue will ease, and conversion processes will become available for the more popular cloud providers. In the worst case, a cloud subscriber will have to pay for some custom data conversion.
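As a toy illustration of the kind of conversion involved, the sketch below turns a CSV export (a common lowest-common-denominator retrieval format) into JSON. The column names are invented; a real provider’s export format would dictate the details.

```python
import csv
import io
import json

def csv_export_to_json(csv_text):
    """Convert a CSV export (first row = column headers) into a JSON
    array of objects -- the sort of simple conversion a subscriber
    might run when moving data between cloud providers."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

# Hypothetical usage with a two-row export:
# csv_export_to_json("name,plan\nacme,basic\nglobex,pro\n")
```

Real migrations add wrinkles (nested records, attachments, encodings), which is why custom conversion work can end up on the subscriber’s bill.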

Multiplatform Support – More an issue for IT departments using managed services is how a cloud-based service integrates across different platforms and operating systems, e.g. OS X, Windows, Linux and thin clients. Usually, some customized adaptation of the service takes care of any problem. Multiplatform support requirements will ease as more user interfaces become web-based.

Intellectual Property – A company invents something new and it uses cloud services as part of the invention. Is the invention still patentable? Does the cloud provider have any claim on the invention? Can they provide similar services to competitors? All good questions and answerable on a case-by-case basis.

Once someone understands that cloud computing can suffer much the same fate as proprietary systems, the question becomes “do the advantages of using the cloud outweigh my concerns?” For low-risk operations and for non-sensitive information, the answer can easily be “yes.” Realize that cloud-based services can be backed up, verified, double-checked, and made more secure by combining them with traditional non-cloud IT processes.

The Different Types of Cloud Computing

This is a sidebar to our article “Exploring Cloud Computing”.

Here is a list of the five most common types of cloud computing.

Software as a Service (SaaS) – a single application, a library of applications, an API of web services, or an infrastructure or development platform that users, who are not necessarily aware of one another, interact with through their browsers; Google Apps and Zoho Apps are a few examples. Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) are closely related derivatives of SaaS.

Utility Computing – specialized apps coupled with dynamically reconfigurable resources, often with a significant reliance on virtualization for ease of maintenance, portability and scalability.

Managed Services – piecemeal software extensions for existing IT departments such as virus scanners for email or remote desktop managers.

Service Commerce Platforms – a hybrid of SaaS and Managed Services presenting an automated service bureau. Think ADP.

Internet Integration – a combination of any or all of the above, from the same or different providers over a common “service bus,” today in its infancy. The “bus” is a standardized data transfer subsystem which allows different providers’ service elements to be plugged in and swapped out, allowing data to be shared across different providers and giving competitive choice to the user.

These services are provided by “cloud service providers,” also called “cloud vendors” or “cloud providers” for short. A “public cloud” provider is one who purveys services to pretty much anyone on the Internet. The largest public cloud provider in the world is Amazon Web Services. A “private cloud” is a proprietary network or a data center that supplies hosted services to a limited number of organizations or people. When public cloud resources are used to create a private cloud, the result is called a “virtual private cloud.” Private or public, cloud computing provides easy, scalable access to computing resources and IT services.

Exploring Cloud Computing

Volume 4, Number 1

Could You Be Using It Someday?

We have entered an era of Things Cloud: “cloud storage,” “cloud computing,” or, just simply “the cloud,” referring to how IT personnel often represent the Internet in their diagrams. Are there opportunities to save money or get improved processes by moving to the cloud? In our analysis, we find that the answer is a qualified “probably so.”

The Cloud

Most businesses have already encountered the first embodiment of the cloud, “cloud storage,” also widely known as “online storage,” where data is kept not on your local computer but “somewhere” on the Internet, often accessed through a web portal that serves as a user interface for storage and retrieval. Flickr, Gmail, Facebook, and Remote Backup are examples of large implementations of cloud storage. While cloud storage has been around for a while, the cloud-based concept is evolving into providing not just data storage but operations on that data as well. We’ve entered the age of “cloud computing.”

Android-based Smartphones – Google’s Nexus One and Motorola’s Droid

Will Motorola’s Droid or Google’s Nexus One trump Apple’s iPhone?

The latest entries into the mobile computing market are Motorola’s Droid and Google’s Nexus One, both based on Google’s powerful new Android 2.0 operating system. Some reviewers have called these smartphones “iPhone killers.” Are they really? What does Android represent to mobile computing?

The Droid and Nexus One are both very capable devices, and they outperform the iPhone in several ways. Some if not most of their specifications indeed surpass those of the iPhone 3GS, Apple’s most recent offering, which, by the way, isn’t terribly surprising for two-year newer smartphone designs.

The Android devices tout a larger screen size, the ability to replace batteries, better voice control, application multitasking, turn-by-turn navigation like a standalone GPS device, and a less restrictive app marketplace. The iPhone has much more and better managed memory, seamless integration with its iTunes and app stores, a more protective app marketplace, a more fluid gesture-based interface, and a greater variety of more polished apps.

At DataPlex, we think of Android 2.0 devices as different animals, less as direct competition for the iPhone and more as a gap-filler, particularly for Verizon, the cellphone carrier that desperately needed a smartphone facelift. Many people will select Verizon smartphones because of Verizon’s high-quality 3G network, which is arguably better than that of AT&T, the iPhone’s exclusive carrier. Others will cite the scarcity of add-on apps for the Android devices as compared to the enormous quantity and variety of apps available for the iPhone.

We see the Droid and Nexus One dropping into the space between the uber-business-focused Blackberry and the sleek, arty iPhone, and some new apps will just make more sense on the Android platform than they will on other smartphones. Android will help make business and enterprise applications more accessible.

Don’t feel bad for Apple. Apple has never sought purely to dominate a market. Rather, it looks to make its offerings attractive and easy to use, particularly with the overall intent of integrating them seamlessly with the rest of its product line. By contrast, the Droid and Nexus One come across as capable, feature-rich devices, but ones with some rough edges and some complexity, in the vein of the longstanding PC vs. Mac debate. Apple has its followers and the attraction of its more polished market. Rumor has it that Apple will be releasing its next iPhone version in mid-2010, that is, after it releases its also-rumored tablet. Don’t be surprised if it incorporates some of Android’s new features.

To learn more about the differences between the Droid, Nexus One and the iPhone, read the following posts. As you mull over what they say, you’ll identify with what is important to you.

The Wall Street Journal’s Walt Mossberg on his first impressions of Google’s Nexus One as compared to the iPhone:

GoGrid’s Technology Evangelist Michael Sheehan reports on a week he spent with the Droid:

Here, Technologist and TV Journalist Shelly Palmer provides a clear report card comparison of the iPhone, Droid and RIM Blackberry:

Ars Technica has posted a very complete and technical analysis of the Droid sprinkled with comparisons to the iPhone:

A bunch of pictures of the Nexus One:

Also, don’t forget that the Droid and Nexus One are only the first wave of Android 2.0 devices rolling out over the next several months, so be sure to watch for the latest in smartphone offerings. A good site to do that is:

Should you like any advice on your smartphone selection, feel free to drop me a note. Also, if you’d like to stay on top of things electronic from my perspective, you are invited to follow me on Twitter @DataPlexCEO.

Free Internet Node, Part 2 – Setting up the Internet Node PC

If the PC you have selected to be your “Internet Node” is not already hooked to the Internet, you should take the steps many other people have to get it permanently attached with a static IP address. Because of their reasonably high speed and low cost, I recommend interfacing through either cable or DSL, using a cable modem or DSL modem, respectively.

I recommend Windows XP as your operating system since older versions of Windows simply do not have the reliability or the security, especially with the release of SP2 for Windows XP. Windows Vista is too new, and many experts suggest waiting until late 2008 before using a Windows Vista machine as a reliable server. The last thing you want is to have your Internet node compromised and shut down because a hacker used a vulnerability in your PC.

Some experts suggest running Internet services on Linux boxes (computers) which they consider more streamlined and less prone to attack, and that is okay if you have the experience and expertise. However, my purpose here is to show how to transform an “ugly duckling PC” into an “Internet node swan,” and Windows is what most PC users are familiar with.

I leave it up to you to decide if you need Windows XP Professional over XP Home. I use both, and they both seem to operate equivalently for our Internet purposes here. XP Pro is more tweakable and has additional features, but none of that is necessary for a typical low-bandwidth node.

The absolute best way to set up Windows XP is to restore the factory image of your PC’s hard drive from the Restore CDs that came with your PC. If you lean more towards being a PC expert, then you can clean up your PC yourself, but please be thorough.

To clean up your PC, first remove any software you don’t want compromised, any financial spreadsheets, documents, etc. Then eliminate all extraneous software that you won’t be using, through each program’s Uninstall procedure or Add or Remove Programs in your PC’s Control Panel. Run ScanDisk, run Defrag, and scan all of your drives with your antivirus software after making sure its virus definitions are up to date. You should also run anti-spyware software to make sure that your PC hasn’t already been compromised by hackers.

Once you have your Internet Node PC cleaned up, you can verify your Internet speed to see if it is going to be able to handle the bandwidth you will require. A good rule of thumb is… If your bandwidth seems low, you might review some higher-bandwidth services from your telephone or cable companies.

The next step is to figure out what to do about your domain name. If you already have a domain name, then inform your current registrar about your IP address change so that they can change your DNS Record. (If they cannot because they do not provide that service, you may have to switch registrars.)

What’s all that about the DNS Record? Well, the global Internic Registry used for decoding domain names into IP addresses does not, as some might assume, contain a direct pointer to your IP address, but instead a pointer to a DNS Record that can direct the different services and sub-domains of your domain to unique IP addresses. Typically, your registrar maintains your DNS Record for you, and you haven’t had to deal with it until now. Live and learn! Some registrars allow you to create your own DNS Record but will also do it for you through their customer support. Learning how DNS Records work is certainly educational from a “how does the Internet work” perspective, but not a requirement for our task at hand.

If you do not have a domain name, now is the time to get one. First, choose a domain name you like and that you think will be easy for others to remember and use. Then, sign up your new domain using one of the registrars that are authorized to provide domain names. We recommend choosing a domain name that ends with the top-level domain “.com” unless you have grand reasons for using another (e.g. non-profits typically use “.org”). If your domain is already taken, the registrar will let you know and offer alternatives. You can go with one of those, or iterate and pick another domain name to try to register.

There are many low-cost registrars that will sign up a new domain name for under $15 a year, so don’t get suckered into paying $25 or even $35 a year. The last registrar I used charged me $11.90 for a one-year registration of a “.com” domain, including DNS service. Note whether the registrar you use charges extra for DNS service (or even provides it – some registrars do not), as you will need it in order to have your registrar point your domain to your IP address. Normally, registrars host your site at one of their own IP addresses – which, by the way, carries a huge profit margin for them – but we are bypassing this typical configuration.

Okay, so you have registered or transferred your domain, or changed your domain’s DNS Record so that it points to your new IP address. Now the bad news: it is going to take up to 48 hours for the change to propagate through the DNS system so that, when someone, including yourself, types your domain name into a browser, it is directed to your new IP address. Do you now have to wait a day or two until you can access your services?

Checking and Setting Up the “hosts” File

The answer is a definitive “no,” and you can keep going in setting up your Internet node even while your domain name addressing is in transition. First off, you can simply type your Internet Node’s IP address and service port number into a browser to get to a service. But don’t do this yet, as we haven’t installed and activated any services – we are saving that for the next installment.

Also, you can edit a file on one of your local network PCs that tells your PC to go locally, even to itself, when it sees particular domain names. This local redirection will also work on the Internet node PC you are setting up. This file is called the “hosts” file (the file name is exactly “hosts”, with no extension) and is located somewhere in one of your Windows folders, somewhere in “c:\windows\…”.

You should use Windows’ find feature to locate this file – search for the file “hosts” starting in “c:\windows\” and include subfolders. On my Windows XP Home PC, the file is in my “c:\windows\system32\drivers\etc\” folder. Open it in Notepad or Wordpad. Wordpad is preferred since it remembers previously opened files in case you have to re-edit.

There should be many lines starting with “#” that explain, in a semi-obtuse way, the use of the hosts file. Basically, all you need to do here is add a couple of lines in the following format:
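The format is the standard hosts-file one: an IP address, whitespace, then one or more names. Using “mydomain.com” as a placeholder for your own domain, entries redirecting it back to the local machine look something like this:

``` www.mydomain.com mydomain.com
```

Each name on a line resolves to the IP at the start of that line, bypassing the Internet’s DNS entirely.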

The IP address “” is one that means “this PC” and reroutes the specified domains right back to the current PC without attempting to go out over the Internet and resolve the domain name, which, as we know, is in transition. So, at least for the next few days, we will be working with this redirection, and you can continue to set up your Internet services and test them out. Then, once your domain’s URL is relocated, your Internet services will be ready to go without delay.

If, in your hosts file, you already have the line “ localhost”, that is fine and you can leave it there as well.

While we are on this topic, let’s do a little security review. Any other entries – lines that begin with an IP address (without the “#”) – may be a hijack by a virus, spyware or hacker, so if you see anything that looks suspicious, you might want to check it out and delete it. For example,

an entry that redirects a Citibank domain to an unfamiliar IP address is very likely a hacker intercepting the domain and sending it to a phishing site in order to steal your Citibank online ID and passcode. If you see anything like this, your PC was at least at one point compromised and may still be, so rerun all of your antivirus and anti-spyware software, first making sure you have the latest updates. Even better, use alternate software to cast a wider detection net.

Next Up: Free Internet Node, Part 3 – First Free Internet Service: A Website

Free Internet Node, Part 1 – Equipment and Location Evaluation

Okay, let’s get started. Remember, I am going to describe my actual approach and make more general comments to cover slightly different hardware and approaches. Please feel free to chime in by adding comments. (By the way, we have a few what-should-be-obvious rules for posting comments.)

The first step I took in setting up my own Internet node was to evaluate my potential equipment and locations. Depending on the amount of traffic I might expect, I would require different bandwidths and therefore would need to set up my equipment and choose my location accordingly.

Given my estimated low load of, say, under a hundred or so hits per hour, I decided to locate the node at my residence on its existing DSL line, noting that I could always scale upward in case my traffic takes off or I add what becomes a popular website. I determined that I could get by with my existing 512-kilobits-per-second DSL modem and one of my old PCs as the Internet node. (DSL lines can actually go up to 1.5 megabits per second depending on your DSL service. You can test the speed of your current DSL connection.)

For reasons having to do with noise and power consumption, I later switched over to a new eMachines mini-tower PC, which cost me around $250 at Best Buy after all the rebates came back. Here’s another eMachines pricing sample along with some specifications directly from eMachines themselves.

Anyway, just make sure whatever PC you use is less than a couple of years old and, preferably, running XP Home or Pro. You can upgrade not-too-old PCs without any big hassle. Not to promote Windows XP too much, but with it you can optimize your security access, benefit from XP’s advanced features, and add hardware with minimal effort. (You can use older PCs running older versions of Windows such as Win98, but the end results will be subpar and your security will be only slightly better than nil.) The PC should have at least 20 gigabytes of available disk space, use an Intel-based processor running at least 2 GHz, and be very quiet if it is going to be within earshot. The eMachines PC that I bought easily met these criteria.

What about Windows Vista? It may work well, actually, but we did all of our work on Windows XP, and we haven’t found a compelling reason to upgrade. From what we hear, Windows Vista is a different animal when it comes to PC operating systems, and there are incompatibilities with some software packages. All it would take is one small module not working to put a serious damper on things. The incompatibilities may diminish greatly after Microsoft’s final release of Service Pack 1 (SP1), which is scheduled for early 2008 (a release candidate is available now). Anyway, I may post a Windows Vista report later since that is the operating system most easily accessible today. For now, downgrade to XP or just buy a used PC with Windows XP on eBay.

So, with my existing equipment of an older tower PC and a DSL line and modem all of which could already access the Internet, I encountered my first dilemma. The typical $30 DSL line to a residence (such as through SBC) requires a log in. Until your equipment logs in, you do not have an IP address. And should you have to log off (or be forced off) for any reason and then sign back in, you will have a different IP address.

Web servers, email servers and ftp servers are best located on a static IP address, a permanent address on the Internet that never changes. With the low cost residential DSL line with a dynamically allocated IP address, you would have to change some network parameters each time your IP address changes. (I am being vague about the “parameters” as that would be a digression, and I do not want to lose focus right now.) While that is possible, it is a technical pain, and, depending on the reliability of your DSL provider and your telephone lines, you might have to change the parameters several times a month. Also, your node will be down until your equipment logs back in and changes its parameters, maybe a good part of a day.

Fortunately, I found that I could upgrade to a static IP address for an extra $25 a month. I feel that paying such a small fee is well worth not having to deal with changing IP addresses. If you disagree, you can explore the dynamic IP address route with either DSL or cable modems and let me know how it works out for you.

The change from a dynamically allocated to a statically allocated IP address took less than 48 hours. Since I use a LinkSys router, I did have to switch it over from PPPoE to Static IP Addressing. Note: This step is different for different routers but is typically achieved by using your browser to access your router’s settings at its local address.

Thus, I was able to evaluate my existing equipment and location, and I found both acceptable with the one change to a permanent IP address. I estimate this effort took about four hours of my time, most of which was spent dealing with the phone company and the router.

Next Up: Free Internet Node, Part 2 – Setting up the Internet Node PC