It’s not fun. I got hacked through an archived git repo from back when I was learning to use AWS, following tutorials and whatnot.
Forgot about it for years, then out of nowhere got hit for $27k… needless to say, I said good luck collecting that shit.
They waived it all, provided I logged in, deleted all the resources that were running, and removed all the identities. Sure as hell I did that, and found a ton of identities I’d never created. Fucking hackers ran up a shit-ton of AWS SageMaker resources, probably trying to crack some dude’s wallet.
Every time I see a tutorial on how to deploy X on AWS, I get pissed. Newbies need to learn about administration before they start deploying shit on cloud infra.
I’m always a bit paranoid about my Google Compute account. Opened it many years ago, ran a few instances for a few dollars for a few months, had enough, and oh look, there’s no easy “delete just my Google Compute account” button.
Unhooked all the payment methods, shut everything off, turned out the lights, but it seems I can’t leave the building.
Funny thing: I had a paranoid freakout too, before I got hacked on AWS. I’d bought a Visa gift card and that’s what I put in as the payment card on AWS. Of course they know where I live and could still screw me, but they’d have to do it on their own dime.
They make it really hard to leave, or to use just one specific service. I use them for DNS; ostensibly it’s cheap AF, paid yearly, but now I have to pay $2 a month for all the auxiliary stuff just to be notified when I get hacked.
I’m buying a server rack soon and just got a full symmetric fiber line put in so I can do my own hosting.
Everything is so intertwined, and that’s the way they like it. Do I trust some random support bot/person at Google to unhook and delete my Compute account from my Google identity without accidentally trashing the rest of my 15-year history with Google/Gmail? Hell no. So my Compute account still sits there, idle.
I guess it bolsters their metrics, that’s nice for them I suppose.
> oh look, there’s no easy “delete just my Google Compute account” button.
I’ve had conversations with Google employees about this problem. I know you’re speaking about your situation with what’s essentially a homelab, but for business customers it’s more complicated than it seems on the surface. Many customers aren’t savvy enough to understand that “just delete anything that is generating a bill” also means deleting all of your data stored on Persistent Disk or in GCS.
I especially hate that this culture has now made its way into the corporate world too. It’s now normal and expected that a developer will just follow one of the AWS tutorials to get the thing going and leave it like that.
Nobody thinks about how they’re going to compose their resources anymore; all the AWS “experts” just spit out their AWS training verbatim, without any thoughts of their own.
> Nobody thinks about how they’re going to compose their resources anymore; all the AWS “experts” just spit out their AWS training verbatim, without any thoughts of their own.
There are absolutely AWS experts who will give comprehensive answers and solutions, but many times they don’t get hired, because there’s this other guy who’s cheaper and says he can “do it for a fraction of the first guy”.
Yeah they do exist, I just think they’re also usually not the ones that carry all the (mostly useless) certs. Those certs are designed to maximize profits for AWS, not to optimize for best bang for the buck. And the ones that do get the certs get them because they want to be hired and have little else to show. But companies treat those certs like they’re university degrees.
You’re not going to get those certs by answering “Don’t use AWS Private CA, you can use OpenSSL in a Lambda to issue certs for free and save hundreds every month” or “Don’t use the AWS VPN because they charge per client connection and session duration; just set up a t4g.nano with WireGuard and it’s just as good, for a couple bucks a month, as a proper 24/7 always-on VPN for the whole dev team”. The “correct” answer is obviously that using a managed service is always better.
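To make the Private CA point concrete, here’s roughly what that “CA in a Lambda” looks like. A minimal sketch, assuming Python and the cryptography package rather than shelling out to the openssl CLI; the PEM-file CA loading, the curve, and the 90-day validity are my assumptions, not a hardened design:

```python
# Sketch of a cert-issuing Lambda using the cryptography library.
# Assumes the CA key/cert are available as PEM files; in a real
# deployment you'd pull them from Secrets Manager, never bundle them.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID


def load_ca():
    # Stand-in loader for local testing; swap for Secrets Manager.
    with open("ca_key.pem", "rb") as f:
        ca_key = serialization.load_pem_private_key(f.read(), password=None)
    with open("ca_cert.pem", "rb") as f:
        ca_cert = x509.load_pem_x509_certificate(f.read())
    return ca_key, ca_cert


def issue_cert(ca_key, ca_cert, common_name, days=90):
    """Issue a short-lived leaf cert signed by the private CA."""
    key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)]))
        .issuer_name(ca_cert.subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(common_name)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )
    key_pem = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    )
    return key_pem, cert.public_bytes(serialization.Encoding.PEM)


def handler(event, context):
    ca_key, ca_cert = load_ca()
    key_pem, cert_pem = issue_cert(ca_key, ca_cert, event["common_name"])
    return {"key": key_pem.decode(), "cert": cert_pem.decode()}
```

Whether returning private keys from a Lambda is acceptable for your threat model is its own question; the point is just that the issuance itself is a few dozen lines.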
Even the AWS advisors they give you for free with your big enterprise contract are basically glorified salespeople for AWS.
Are there good AWS experts out there? Absolutely! I’m just pointing out the industry heavily favors producing the wrong kind of expert. The good experts know their shit regardless of the cloud or what your servers run. And those get turned down because of salary or simply failing to answer some AWS trivia that would take 10 minutes to look up and understand.
These services should come with default billing alerts and limits that you have to actively change.
I’d settle for just the limits, personally.
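For what it’s worth, the alert half you can already build yourself; it’s the hard limits AWS won’t give you. A rough boto3 sketch of a monthly cost budget that emails at 80% of a $10 cap (the account ID and address are placeholders, and the Budgets API is only served out of us-east-1):

```python
# Sketch: $10/month cost budget that emails at 80% of the cap.
import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-cap",
        "BudgetLimit": {"Amount": "10", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}  # placeholder
            ],
        }
    ],
)
```

None of this stops the spend, though; a budget only tells you about it after the fact.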
The part that makes me the most paranoid is outbound data. They set every VM up with a 5 Gbps symmetric link, which is cool and all, but then you get charged by how much data you send. When everything’s working properly that’s not an issue, since the data volume is predictable, but if something goes wrong you could run up a huge bill before you even find out about the problem. My solution, for my own peace of mind, was to configure traffic shaping inside the VM to throttle the uplink to a more manageable speed, and then set alarms that automatically shut the instance down after observing sustained high traffic, over either short or long windows. That still relies on correct configuration, however, and eats a decent chunk of the free-tier alarms. I’d prefer to be able to set hard spending limits on specific dimensions like CPU time and network traffic and not have to worry about accidentally running up a bill.
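If that setup is on AWS (the free-tier alarm mention reads that way), the shutdown half doesn’t need custom plumbing: a CloudWatch alarm on NetworkOut can carry the built-in EC2 stop action. A rough boto3 sketch; the instance ID is a placeholder and the threshold/window numbers are mine to illustrate, not recommendations:

```python
# Sketch: stop an EC2 instance after sustained high outbound traffic.
import boto3

REGION = "us-east-1"
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

cloudwatch.put_metric_alarm(
    AlarmName="outbound-traffic-kill-switch",
    Namespace="AWS/EC2",
    MetricName="NetworkOut",  # bytes sent per period
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Sum",
    Period=300,                # 5-minute buckets
    EvaluationPeriods=6,       # ~30 minutes of sustained traffic
    Threshold=5 * 1024**3,     # ~5 GiB per bucket; tune to your baseline
    ComparisonOperator="GreaterThanThreshold",
    # Built-in alarm action that stops the instance, no Lambda needed.
    AlarmActions=[f"arn:aws:automate:{REGION}:ec2:stop"],
)
```

The in-VM throttling side would be tc or similar on Linux, which I’ll leave out since it’s distro-specific.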