Wednesday, November 28, 2012

Why the cloud conflicts with your IT support team's interests

Heard of cloud computing? It's changing the landscape of IT management, deployment, and costing in huge ways. It lets businesses scale computing power instantly and pay for only what they use, and it lets developers write code on top of new kinds of technologies without ever touching the underlying operating systems.

Traditional IT experts will often use every excuse possible to avoid relinquishing control of the physical computing infrastructure. Amazon's Andy Jassy pointed out some great reasons why this happens today during the opening keynote of Amazon's re:Invent conference.

I would like to point out that I am a traditional IT guy. I build systems and implement infrastructure. I am also innovative enough, and have enough foresight, to realize that my breed is going to die off very soon.

I smiled and nodded in agreement as Andy called out the current trendy argument against moving to the public cloud: if you have already invested in your own datacenter and want to retain control of your stuff, then you should purchase a private cloud.

Here's why this is often not a good idea:

Private clouds do not scale. They claim to scale, but you can only ADD capacity; you cannot REDUCE it. In the public cloud, you can reduce capacity as your server workloads decrease and add capacity as your applications need it. You pay for what you use instead of making capital investments in what you might need.

Private clouds do not add capacity instantly. You purchase a fixed system, and if you hit your ceiling, you must buy more hardware. Deployment takes an average of six weeks per server for private clouds. Public clouds deploy your capacity, whether one server or a thousand, in seconds, with no upfront capital cost.
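
Pay-as-you-go elasticity is an API call in each direction. Here is a minimal sketch using Python and the boto3 SDK (a present-day library that postdates this post; the AMI ID, region, and instance counts are placeholder values):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Add capacity: launch ten instances in one call; they boot in minutes,
# not the weeks a hardware purchase takes.
resp = ec2.run_instances(ImageId="ami-12345678",  # placeholder AMI
                         InstanceType="t2.micro",
                         MinCount=10, MaxCount=10)
ids = [i["InstanceId"] for i in resp["Instances"]]

# Reduce capacity: terminate them when the workload drops, and the
# billing stops with them. A private cloud can't give the money back.
ec2.terminate_instances(InstanceIds=ids)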

Private clouds are not cost effective. You must still run your own datacenter: purchase physical hardware, manage power, internet connectivity, and data-pipe redundancy, and handle whatever else your system requires.

Private clouds are complex to manage, expensive to maintain, and often do not solve the very problems that drive people to cloud computing in the first place.

The other side of the argument came down to the system integrators' business models.

I'll say that I am a system integrator. I understand the following argument very well, and I know for certain that there is very little my peers can do to avoid being destroyed by the cloud unless they fully embrace it, which may or may not be possible depending on their business model.

The argument comes down to "product margin," and how a business model is affected by the margins of the products that business sells.

I can say that servers are a fairly high-margin item; interestingly, they are one of the few computing hardware items that still carry high margins. Businesses whose models involve selling servers expect to make a handsome profit off each unit sold. Not so in the cloud. Andy Jassy pointed out, correctly and with great industry examples, that businesses whose models depend on high-margin hardware like servers are unable to adapt to the low-margin, high-volume world of cloud computing. Their business models just aren't easily adaptable to this new world... unless you're talking about private clouds. This is what powers their desire to research and sell you private cloud platforms.

So, it's true that your IT person, if they work for a traditional IT company like most do, will want to push a private cloud solution on you. They will make a wide range of seemingly compelling arguments for it. If they pushed the public cloud, their business would suffer in many cases, and they are unable to adapt because we are talking about a significant change in the way they operate and handle expenses and growth.

Now, with all that said, there are legitimate uses for private cloud solutions, and there are a handful of decent reasons for some companies to avoid the public cloud entirely. Those reasons are dwindling by the month, however, as the industry innovates new implementation scenarios. Currently, all but the most highly regulated companies should be looking closely at public cloud solutions for at least part, if not all, of their systems.

The bottom line is this: if your IT person tries to sell you a private cloud, look at the reasons why they are selling you that. Be suspicious, and do your research. Get a second opinion from a company that specializes in public cloud infrastructure integration. Those companies, such as Node LLC, will typically be partnered with multiple public cloud providers and should have a portfolio of customers they have taken into the cloud.

Remember, if your IT group is not trying to implement at least some of your solutions on public cloud systems, they are likely making their recommendations based on the health of their own pocketbook, and less so on your business needs.

I am Brandon from Node. I am a system integrator with 20 years of industry experience building networks and computing platforms for small businesses. I can help you determine whether a private cloud is viable for your business, and help you deploy private or public cloud solutions, or traditional systems if desired or needed. I can also help you build hybrid cloud platforms that work at both the public and private layers.

Visit http://www.nodetx.com and contact me today to learn how Node can help you grow your business, inside or outside of the cloud.

Tuesday, June 19, 2012

Apple Mail crashes VPS websites

I ran across an interesting problem today. A website went down, and it presented me with this error message:

500 Internal Server Error

The server encountered an internal error or misconfiguration and was unable to complete your request.

Please contact the server administrator, webmaster@website.com and inform them of the time the error occurred, and anything you might have done that may have caused the error.

More information about this error may be available in the server error log.

Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

Apache Server at www.website.com Port 80

Now, this particular website is hosted on a HostGator VPS. I looked at the server's cPanel and everything appeared fine: the file structure and security were intact, and the databases were healthy. I checked the error logs in cPanel and found this:

[Tue Jun 19 19:09:46 2012] [error] File does not exist: /home/username/public_html/500.shtml
[Tue Jun 19 19:09:46 2012] [error] SoftException in Application.cpp:574: Could not execute script "/home/username/public_html/index.php"

So I looked into that error, and it appeared that I needed to contact the hosting provider.

I got on chat with HostGator, and they informed me that the problem was that my "process" usage was too high. Shared VPS systems limit the number of concurrent processes a given account is permitted to run; the limit on this particular HostGator account is 25.
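
If you have shell access to the account, you can watch how close you sit to that cap by counting the account's processes yourself. A rough sketch in Python; the username is a placeholder, and the limit of 25 is specific to this plan:

import subprocess
import time

USER = "username"   # cPanel account name (placeholder)
LIMIT = 25          # concurrent-process cap on this HostGator plan

while True:
    # ps lists this user's processes; the line count approximates the
    # number the host enforces its cap against.
    out = subprocess.check_output(["ps", "-u", USER, "--no-headers"])
    count = len(out.splitlines())
    print("%d/%d processes in use" % (count, LIMIT))
    time.sleep(5)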

They also said that e-mail processes were to blame.

I asked them to kill the processes, and the website came back online.

A few moments later it went back down.

This site had only three e-mail users, so I asked them to shut down the Apple Mail applications on their MacBooks. They also had iPhones and iPads syncing. All of this had been set up and working fine for months without changes.

While they all had MacBooks, only one was causing the issue. When the Mail application on that single MacBook was shut down, the process overload on the HostGator VPS was instantly released and the website immediately reappeared. The second Mail started back up, however, the website went down again. Mail was strangling the HostGator VPS with IMAP queries over the internet, and since the VPS hosted both the e-mail and the website, the moment the process count spiked over 25, HostGator's policy instantly cut the VPS off from running additional processes.

This is a very strange problem. My recommendation is to use Hosted Exchange from Node. Check it out here.

Friday, April 27, 2012

Amazon Web Services (AWS) Virtual Private Cloud (VPC)

A couple of years ago, I started deploying little Linux web and utility servers to Amazon Web Services EC2. Like many providers today, EC2 lets me instantly turn on "virtual instances" in the cloud.
Usually when I deploy a new server, I bind a public IP address to it, and it provides some kind of cloud-based service or website from there.
I ran into a situation where I had to build a pair of servers in the cloud that needed to be networked together. Only one of them needed to be exposed to the Internet, and I also had to build VPN tunnels from multiple physical sites in geographically diverse locations directly to this group of servers.
Being most familiar with VPNs, my first thought was, "This is going to be a breeze": build a couple of IPSEC tunnels back to my sites' routers and connect right up.
Upon researching what is involved in building a VPN from my site into a pool of Amazon EC2 servers, I found it was a bit more complex than I originally thought.
There is great documentation from Amazon on the entire process, and if you're working with this subject, I'd highly suggest reading through it.
The purpose of a Virtual Private Cloud is to secure a group or pool of servers. That pool can then have point-to-point VPN connections built between it and business endpoints such as physical office locations.
Once I had read all the documentation, I deployed my VPC and configured it. There are a few steps, but they're fairly simple: I deployed an instance, bound it to my new VPC security group, opened some ports, and created an Internet Gateway (IGW) and a Virtual Private Gateway (VGW) for the VPC. I also specified my VPN endpoint information, and while doing that last step, I was able to download a device-specific config file containing all the critical IPSEC settings my device would need.
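Anyone scripting those steps today, rather than clicking through the console, could do it roughly like this with Python and the boto3 SDK (which postdates this post; the CIDR block and region are example values):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC itself (example address range).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# Internet Gateway (IGW): the VPC's path to the public internet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc["VpcId"])

# Virtual Private Gateway (VGW): the Amazon-side anchor for the IPSEC tunnels.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])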
When I read the documentation, I noticed that device support for VPC seems somewhat limited. The choices were Cisco, Juniper, Yamaha, and Generic.
I decided to start by attempting the Generic device config. The site in question had Netgear RT314 VPN routers that were already running IPSEC tunnels between each other, so building an IPSEC connection with them looked like a cost-effective way to establish a tunnel.
That didn't work. Period. No way possible. Why? Because Amazon VPC absolutely requires your VPN endpoints to support BGP. It also requires the use of dual-redundant VPN tunnels...
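The BGP requirement shows up right in the API: you cannot define the on-premises endpoint without declaring an ASN, and every connection carries a redundant pair of tunnels. A sketch, again boto3 with placeholder IP, ASN, and IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The customer gateway is the on-premises endpoint; BgpAsn is mandatory,
# which is exactly the requirement the Netgear RT314 can't meet.
cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.10",  # placeholder
                                  BgpAsn=65000)["CustomerGateway"]

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId="vgw-12345678",  # placeholder VGW ID
)["VpnConnection"]

# The response includes the downloadable device config, which defines
# both tunnels of the redundant pair.
print(vpn["CustomerGatewayConfiguration"][:500])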
Next up: forget the Netgear RT314; I would just build Linux boxes as the firewalls for the sites and use strongSwan or Openswan. There are some excellent documents and discussions online on that topic, so it seemed like a viable thing to try.
The theory is that you deploy your instances to the VPC, get them talking locally (in the cloud), then deploy a Linux instance such as Ubuntu, install strongSwan, and configure it as your virtual gateway in the cloud. Then you build another strongSwan box (or technically any other standard IPSEC endpoint) at your site, and simply configure a tunnel between the two; you're done. No BGP is required, because you're not actually using the VPC's Virtual Private Gateway as your VPN endpoint; you are simply forwarding IPSEC traffic from the outside in to your micro Ubuntu strongSwan server, which then routes your traffic into your VPC pool.
Problem is, we couldn't get it to work. After much digging and researching, we finally found a blog post that changed my mind about strongSwan. To sum it up, you can get strongSwan to work, but it will be unstable in a way that makes it virtually unusable for all but the least critical utilitarian tasks.
There is a big advantage to using systems the way they are designed to be used. The Amazon VPC is very particular about the way it establishes VPN tunnels: when you build an IPSEC tunnel to a VPC Virtual Private Gateway, BGP is involved, of course, but the other unusual thing is that TWO individual and distinct tunnels are created. This complicates things considerably, because most devices aren't designed for IPSEC tunnels to operate that way.
I finally decided there was no other way: I had to deploy it on Cisco. My budget was really tight, though, so naturally I was concerned about costs.
I had a Cisco 871 handy. These are not considered very expensive units; they're a few hundred dollars, well under a thousand each. They also had all the feature support I needed, BGP especially.
The problem was that BGP is considered an "advanced IP" feature, and my 871 only came with the standard feature set, so I had to purchase an upgrade. It cost about $100 to add, but money well spent if it solved the problem, was my thought.
After flashing the unit and upgrading the feature set, we configured it onsite as a regular internet gateway and connected it up with a public IP and a LAN with an Ubuntu desktop behind it.
Then we configured the Amazon VPC side (the VGW), and it gave us a perfect config file that we simply loaded onto our router.
It didn't take any tweaking: both VPN tunnels lit up green when I checked the VPC tab of the AWS console.
The Cisco also showed everything coming up. Wow, that was easy, I thought.
The first thing I tried? I connected to the Cisco over SSH and tried to ping the inside gateway of the VPC. No dice.
I had deployed a couple of Microsoft Server instances inside my VPC, and I needed to RDP directly into one of them, so I bound a public IP to it through the Internet Gateway (IGW) and tried to connect. No go. I checked the settings and found I needed to add a route to my VPC: a 0.0.0.0/0 default route had to be manually added to the routing table in the VPC tab before I could RDP into my instances in the VPC pool.
Once I was in my instance, I tried to ping my Cisco 871's inside interface over the VPN. No go.
We put an Ubuntu desktop behind the Cisco 871 and tried to ping the inside interface of our Server 2008 box inside the AWS VPC. Still no go. We ran a traceroute from the Ubuntu desktop to our VPC instance, and it returned the Amazon-side BGP address, a 169.254 link-local address. That told us traffic was actually routing through the Cisco 871 and making it all the way to the VPC, but was being rejected there.
We found that one more route had to be added: the IP network range of the remote site. All of these routes went into the Route Table in the VPC tab of AWS. None of them needed to be added to the Cisco, and when we tried to create them there manually, it seemed to cause things to drop.
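In API terms, the two routes that finally got traffic flowing look roughly like this (a boto3 sketch; the IDs and the LAN range are invented), and both belong in the VPC's route table, not on the Cisco:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
RTB = "rtb-12345678"  # placeholder route table ID

# Default route out the Internet Gateway, needed before RDP from the
# internet worked at all.
ec2.create_route(RouteTableId=RTB,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId="igw-12345678")  # placeholder IGW ID

# The remote site's LAN range, pointed at the Virtual Private Gateway
# so return traffic heads back down the IPSEC tunnels.
ec2.create_route(RouteTableId=RTB,
                 DestinationCidrBlock="192.168.1.0/24",  # placeholder LAN
                 GatewayId="vgw-12345678")  # placeholder VGW ID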
We then had to configure our security group properly, and then traffic flowed freely over the tunnel.
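The security group change amounted to letting the on-premises network in. A sketch with invented IDs and an example LAN range:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Open RDP and ICMP (ping) to the remote office LAN only.
ec2.authorize_security_group_ingress(
    GroupId="sg-12345678",  # placeholder security group ID
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
         "IpRanges": [{"CidrIp": "192.168.1.0/24"}]},
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": "192.168.1.0/24"}]},
    ],
)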
We never could ping from the Cisco 871 to the VPC's IGW, but since we could ping from our physical Ubuntu desktop to our Microsoft Server instance, and we could RDP into the server, the problem was solved.
Overall, at the end of the day, deploying the AWS VPC with Cisco routers running IOS (such as the Integrated Services Router line the 871 belongs to) is the best way to tap into this stuff. Just remember that you can't use ASA or PIX firewalls as your VPN endpoints, only the routers. You also cannot NAT your Cisco router behind your firewall.

Wednesday, April 18, 2012

I can't log into my Amazon Web Services instance anymore

What do you do when this happens to you? I have had it happen to me in a variety of circumstances, and here's how I handle it.

If you cannot access a Microsoft Windows instance in Amazon anymore because the user's password appears to be rejected, then try this:

Connect to the Amazon EC2 instance host name via RDP
Enter your username (administrator)
Enter your password
You get rejected
Connect to the hostname via RDP again
Enter your username - administrator@
Enter your password
You get rejected
Connect again via RDP
Enter your username - administrator
Enter your password
You get connected

Even better, simply enter your username like this: \administrator
By prepending \ to your username, you blank the domain out, so the login is evaluated against the machine's local accounts.
When connecting to a Linux instance, sometimes your key pair gets out of sync. To fix that, I've been able to spool my Linux instance down, capture it to an AMI, then spool it back up as a fresh instance from that AMI and associate a working key pair with it. Then I could log in.
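
Scripted, that recovery dance looks roughly like this boto3 sketch (the instance ID and key pair name are invented for illustration):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
OLD = "i-0123456789abcdef0"  # placeholder ID of the locked-out instance

# Spool it down and capture it as an AMI.
ec2.stop_instances(InstanceIds=[OLD])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[OLD])
ami = ec2.create_image(InstanceId=OLD, Name="rescue-image")["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[ami])

# Spool up a fresh copy from that AMI with a key pair you control.
ec2.run_instances(ImageId=ami,
                  InstanceType="t2.micro",
                  KeyName="my-working-keypair",  # placeholder key pair
                  MinCount=1, MaxCount=1)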

These are only a couple of examples of access problems I've had to solve when dealing with cloud instances. Sometimes they can be puzzling, but that's how new tech usually is.

-Brandon Cross

Friday, April 6, 2012

How I got taken by a virus today

I'm as careful as you can get when it comes to not clicking on the message with the virus embedded in it. I've been hit with some very good attempts recently, and one click I made today set off my AV software.

Now, I get e-mails like this all the time. They look legit, and they are scary things that try to provoke an emotional response: stuff like "BBB Complaint" or "IRS deposit rejected," things that, as a business registered with the BBB and paying the IRS, might provoke me to respond too.

I wasn't ready for this one though.

A few months ago I placed an order with TigerDirect. I used an American Express Gold card and paid several thousand dollars for the order.

Today I get this:
So what's interesting here is that this was obviously phished from somewhere: the amount is roughly right, and the vendor and the card are both correct. The transaction date being today and the amount being substantial made me think, "This message is legit, and it indicates fraud on my account; I'd better follow the link to chat with AMEX," and then BAM, the virus scanner kicked in.

Virus spammers are not only getting more sophisticated; they are mining data from multiple sources, aggregating it, and sending highly targeted messages with embedded payloads that look very legitimate and provoke an emotional response, yielding a click-through and a triggered payload.


Thursday, March 29, 2012

Google plays Russian Roulette with Panda 3.4

For a while now, Google Panda has been making the rounds. Matt Cutts said they were going to "take down" the over-optimized backlink networks.

They sure did a good job. Lots of people received notifications in Google Webmaster Tools saying they had "unnatural backlinking" and that their sites were to be de-indexed.

And not just a few guys, either. It turns out that if you did any unnatural backlinking over the last couple of years, you might be in jeopardy. If you didn't do any unnatural backlinking at all, you might be in jeopardy. And if someone else built unnatural backlinks to your site, you, too, might be in jeopardy.

So what is the solution?

Some of my favorite suggestions from the last few days include:

Remove Google Analytics from your site.
Remove Google Webmaster Tools as well.
Do not submit for reconsideration.
Do not remove your unnatural backlinks, if you even have control.
Keep building backlinks.
Vary your anchor text - use keywords for only 20%-30% (or less) of your backlink anchor text (see the sketch after this list).
Diversify your keywords and backlinks more.
Consider optimizing for Bing instead of Google.
Check out entity-based search.
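
To put a number on the anchor-text rule of thumb above, here is a toy Python check; the anchor list and keyword are invented for illustration:

# What share of your backlink anchors are exact-match keywords?
anchors = ["best widgets", "click here", "example.com",
           "this article", "best widgets", "homepage"]
keyword = "best widgets"
share = anchors.count(keyword) / len(anchors)
print("%.0f%% exact-match anchors" % (share * 100))  # 33%, above the suggested ceiling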
For a couple of years now I have been thinking, "Man, there sure are a lot of people who make their living based on how Google sees them. Google has all this control over our sites and is embedded through all these great tools we don't think we can live without. What if they decide to do something small that imposes great economic pain on enough people? Will that finally spark their downfall?"

What Google has effectively done with this latest update is stir the beehive. When the black hats are all abuzz with solutions like "rip out all Google tracking and use alternatives," a new movement is starting that Google won't be able to stop.

It is also important to realize that backlinking still works. Google is constantly tweaking its algorithm, and a small change that affects 1% of people still affects millions. Ultimately, the changes they make do tend to genuinely improve the search experience; however, their "acceptable use policy" for the web is almost a manifesto for how they would impose order upon the internet.

The question is, when they shake up people's livelihoods in a big way, will they be able to weather the reaction, or will they spark a revolt that ultimately dilutes their idea of perfect-tasting water?

This story isn't done shaping up yet, but it is getting really interesting to watch.

Wednesday, March 21, 2012

Joomla 2.5 Complete Installation Tutorial

I sat down the other day and decided it was time to release some of my Joomla training videos. I'll be dripping them out over the next few weeks, so keep your eyes open. These are some powerful videos I've put together to demonstrate how to use Joomla. Many of my clients are already benefiting from them, so enjoy, and hopefully what you learn will give you a more pleasant Joomla experience!

Watch the Joomla 2.5 installation tutorial video

Monday, January 30, 2012

Megaupload fiasco reinforces dangers of the "Public Cloud"

Right now, every technology consumer is being inundated by service providers who want their data. The idea that data is a valuable asset, and that the people who control your data will be very wealthy someday, is one of the big reasons IT companies are building so much in the cloud. IT guys want this stuff: it makes our lives easier, it makes our customers happier, and it lets us do things with technology that never before were on the table.

A very large, and probably the first, cloud sector out there is online storage. We all run into the same issues with our data. Do we put it on our local computers? Then we have to protect that local data with backups. If we do that, we now have another problem: how do we share our data? Do we punch holes through the firewall of every consumer who wants to share photos and files? If we do that, how do we keep all these people secure? Well, the better choice seems to be: take it to the cloud!

Upload all your data and store it there, because you can share it with other people, keep it secure, and let another company worry about backing up its own systems, which now happen to include my data.

Now, all this sounds great until a news story like this one hits the ground:

http://www.msnbc.msn.com/id/46190158/ns/technology_and_science-security/

Here we have a case in point for not storing data in the cloud.

What happened is that a "cloud-based file storage service" was raided by the FBI. All of its data was frozen; however, most of that data lives on other "cloud storage servers" that it leases from other companies. When the feds came in, they froze these guys' assets and shut down their site immediately. Users had no warning and no time to even try to download their data; they simply woke up one morning and none of it was available. That includes their private documents and private photos.

Since Megaupload leases storage from other companies, it must pay those companies cash every month to keep the data it is storing there. Guess what? Its cash accounts are frozen! So now these "subcontractors" for Megaupload are faced with a decision: retain thousands, if not millions, of terabytes of data they aren't being paid to store anymore, or delete the data and free the room up for other paying customers. The policy at these companies is quite clear: after so much time, unpaid data storage will be deleted.

This puts everyone (except the feds) in a precarious position. If for some reason the owners of this company, who are now sitting in jail cells, are acquitted, then Megaupload's only recourse will be to sue the United States Government, because once the data is gone, it is gone forever.

In this case, the site in question probably won't win in court, and the owners are likely going to jail. But keep in mind, this is a Hong Kong citizen living in New Zealand whose home was violently raided, helicopters and all, at the behest of the US FBI, before he was taken into custody. I'm not saying this guy was a good guy, but he had his bases covered pretty well, and the FBI's one and only claimed justification was that Megaupload at some point ran a server or servers in Virginia. My guess is those systems may very well have simply been hosted on Amazon's cloud infrastructure.

So this sets a very dangerous precedent, one where law and technology will continue to put everyday users of that technology at risk, because when the feds put a lock on the front door of your data center and shut everything down, everyone who had anything there, legitimate or not, is at grave risk.

I don't think this is the last we've seen of this story, and I think there are many more stories like it to come.