The perfect example of how bloated, inefficient, and wasteful the bureaucracies of higher education can become.
How did $40m of fraudulent spending go unchecked for literally years? It seems that even at the largest private sector organizations this kind of malfeasance would have been caught much sooner.
Yale, like much of US higher ed, is even more dependent on Federal spending and government tax breaks than ostensibly private firms like Lockheed Martin or Raytheon, who make products that only the government can buy.
It's not like it's Choate or Groton or St. George's, high schools where tuition and gifts need to cover expenses (and are still tax-free).
Raytheon, Lockheed Martin and Yale are all private sector organizations. Being largely dependent on government spending and tax breaks does not make you a public sector organization.
> Being largely dependent on government spending and tax breaks does not make you a public sector organization.
True in only the most reductive and naive sense; but this is the sort of mistake that ostensibly private orgs like the ones above count on, in order to avoid the kind of scrutiny & accountability that an explicitly public agency must endure.
Beyond a certain size (~50 employees), the left hand loses track of the right hand. This is and will remain the status quo unless we legislate limits on how big corporations can grow. In other words, this is unlikely to change, because such limits would also have to apply to government.
Yeah, no. Companies that care about costs absolutely track what they're spending and where. I've seen plenty of merely questionable expenses, a fraction of this size, get caught and called out at very large companies.
Indeed, but that's also true for any large organization, whatever their status: public, private, nonprofit.
I witnessed large sums being spent for very dubious reasons in private firms, mainly in order to meet ridiculous demands by executives: show-off features customers had no interest in, or some "visionary product", e.g. implementing a blockchain or "artificial intelligence" where classic algorithms could do the job just as well for much cheaper.
The idea that private entities pay more attention to money than public entities is a prejudice. At least in modern developed democracies.
Financial scandals in private firms tend to be muted in order to preserve the external reputation: hence the impression that public entities are less careful.
It's all the same everywhere; there's no general rule in this matter.
Both can be true. Companies will question every little nickel and dime certain employees spend and then have big stupid horrible blind spots in other places. Happens all the time. I was a PhD student at Yale and they were extremely touchy about certain kinds of spending, and then would also do stupid stuff like this.
Any more info on "can be"? For existing AirTags, they would have to already have that functionality (polling for updates). I can't find anything that says they do.
Their firmware can probably be updated in the same mysterious way AirPods firmware is updated.
Roughly, be in the presence of an iDevice for a certain amount of time under unknown conditions. The advice on the internet is usually something like "leave your AirPods charging and have your phone connected to them when you go to sleep, and they'll probably be updated in the morning".
I'd be surprised if Apple fielded AirTags without any way to update their firmware. I doubt it would be automatic though; you'd have to push an update to them from an iDevice.
Then you'd have the problem of new AirTags not working until they'd been updated. It's something of a minor problem, but unless people actually misuse this enough for it to become a worse problem, I don't see why Apple would push an update to disable this.
It's a much cheaper device than AirPods, its battery life is harder to manage, and there's not yet evidence that they can update them automatically. As far as I can tell, competitor products (Tile, for example) don't update firmware automatically... it's a user-initiated thing.
I can't find anything that shows OTA firmware updates of the tags themselves happening. Yes, you could tweak the iPhone, but if an "emulated tag" looks exactly like a "real tag that can't be updated", you're somewhat limited.
Rate limiting would help with the "hijacking the network to send your own data" piece in the original article.
It wouldn't do much for other uses, like tracking people without their knowledge. A "faked AirTag" could, for example, rotate its serial number to avoid triggering Apple's "AirTag Found Moving With You" feature. Or the opposite of that: you could stick a fake device on someone's car and trigger the "AirTag Found Moving With You" warning over and over by periodically changing the serial number after the user suppressed the warning for a particular AirTag.
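Conceptually, the evasion is just rotating whatever identifier the phones key their "moving with you" logic on. A rough Python sketch of the idea, where broadcast_advertisement is a purely hypothetical stand-in for a fake tag's BLE advertising layer and the 15-minute interval is an arbitrary choice (real tags rotate an advertised key rather than a literal serial number, as I understand it):

```python
import os
import time

ROTATION_INTERVAL = 15 * 60  # arbitrary: look like a "new" tag every 15 minutes


def broadcast_advertisement(identifier: bytes) -> None:
    """Hypothetical stand-in for a fake tag's BLE advertising layer."""
    print(f"advertising as {identifier.hex()}")


for _ in range(3):  # a few rotations, just to show the shape of it
    # A fresh random identifier each interval makes the device look like a
    # different tag, so per-tag detection (or per-tag suppression of the
    # warning) keyed on that identifier never sticks.
    identifier = os.urandom(28)
    deadline = time.time() + ROTATION_INTERVAL
    while time.time() < deadline:
        broadcast_advertisement(identifier)
        time.sleep(2)
```

Whether the network actually accepts arbitrary identifiers is the question the next reply raises.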
Presumably a valid serial number for each AirTag is something that can't be guessed? That's how it is with Apple's other products. Each serial number has some entropy in it and there's no way to generate a valid new one.
Well, at companies that actually care about 100% rollout of "corporate, invasive crapware" on computers that _they own_ and that are used to process _company data_ and access _company resources_, the alternative is usually just to ban Linux workstations altogether.
I see this as a strict improvement for adoption of Linux workstations in the corporate world.
I'm not aware of Ubuntu now supporting "rollout of corporate crapware" via this AD support.
MSIs still don't work on Ubuntu. GPOs are still very limited, mostly just HBAC-type stuff.
The stuff that works is the important stuff, but if people want to roll out software to Linux they aren't doing it with AD. They're doing it with a CM tool like puppet or ansible.
Personally I think AWS ECS is a third-place contender. I'd add sprinkles to it if only they'd allow YAML files vs. JSON configs in the aws-cli. ecs-cli and copilot are alright. I generally prefer to stay as close as possible to the aws-cli.
Having worked at a company running Docker swarm at... medium(?) scale... I have witnessed a truly shocking variety of bugs in its network stack. I always wondered if it was something specific to the company setup, but the result is that we just couldn't stay on a platform that would randomly fail to give containers a working network connection to each other.
For over 90% of workloads Kubernetes is overkill. Only when a company is reaching Google scale does Kubernetes make sense.
A good alternative to Kubernetes is LXD [1], or just stick with Docker Compose. Kubernetes, except for the managed services from cloud providers, is more difficult to manage than an average application, and a huge cost in itself to run and maintain.
> For over 90% of workloads Kubernetes is overkill. Only when a company is reaching Google scale does Kubernetes make sense.
I hear people repeating this truism all day, but from practical experience, it doesn't seem to be the case - Kubernetes is paying dividends even in small single-node setups.
K8s single-noder and GKE user here; can confirm I wouldn't even remotely consider going back. Deploying an app takes 5-10 minutes at most the first time around, new pushes take <1 minute, and when there is ops bullshit involved, it is never wasted work and never needs to be done twice.
I hated Kubernetes ops complexity at first, but there really isn't that much to it, and if it's too much, a service like GKE takes 70%+ of it away from you.
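For a sense of what that first deploy amounts to, here's a minimal sketch using the official kubernetes Python client; the app name, labels, and image are placeholders, and the same objects are more commonly written as YAML manifests:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig (e.g. from gcloud for GKE)

# A two-replica Deployment for a placeholder "hello-web" app.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

New pushes are then just a change of the image tag on that object (or a kubectl set image step in CI), which is where the sub-minute turnaround comes from.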
For a different perspective, learning Kubernetes and using it widely gives you a universal set of tools for managing all sorts of applications at all scales. https://news.ycombinator.com/item?id=26502900
Sure, this is kind of true, but only in the most depressing way possible. Kubernetes is overengineered and terrible but it's also just about the only game in town if you want a vendor-agnostic container orchestration system.
This is the same situation as Javascript circa 2008: "Learning this absolute dogshit language and using it widely will give you the ability to write universal applications that will run in browsers everywhere and make you very employable."
You're not wrong about k8s today, and wouldn't have been wrong about JS in the past, but boy is it a sad indictment of our industry that these things are true.
Kubernetes is engineered to solve the five hundred different problems that different people have in a way that works for them. If you don't do that then everyone will complain about their feature or some edge case being missing and not use the system (see comments on mesos in this thread). That's not over engineering, that's the required amount of engineering for this sort of system.
But also, k8s "secrets" are not, in fact, secret; you can't actually accept traffic from the web without some extra magic cloud load balancer (cf. https://v0-3-0--metallb.netlify.app/ maybe eventually); and you can't properly run anything stateful like a database (maybe soon).
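To make the secrets point concrete: by default the values are only base64-encoded (they aren't encrypted at rest unless you enable that separately), so anyone with API access to read the Secret object gets the plaintext. A tiny sketch with the Python client, where the secret name and namespace are made up:

```python
import base64

from kubernetes import client, config

config.load_kube_config()

# "db-credentials" in the "default" namespace is a hypothetical Secret.
secret = client.CoreV1Api().read_namespaced_secret("db-credentials", "default")
for key, value in (secret.data or {}).items():
    # Values come back base64-encoded, not encrypted.
    print(key, "=", base64.b64decode(value).decode())
```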
Forget covering "everyone's" use cases: From where I'm sitting, k8s is an insanely complicated system that does a miserable job of covering the 95% use cases that Heroku solved in 2008.
It's great that k8s (maybe) solves hard problems that nobody except Google has, but it doesn't solve the easy problems that most people have.
Yes, yes, there's a million zillion Kubernetes ingress things, none of which are really good enough that anyone uses them without a cloud-provider LB in front of them. Also, they only deal with HTTP/S traffic. Got other types of traffic you want to run? Too bad, tunnel it over HTTPS.
If you want a picture of the future of computing, imagine everything-over-HTTPS-on-k8s-on-AWS stomping on a human face forever.
> Got other types of traffic you want to run? Too bad, tunnel it over HTTPS.
Then you’d expose a service, not an ingress. You can do this in a variety of ways depending on your environment.
I’m going to go out on a limb here and say you’ve never really used k8s and haven’t really grokked even the documentation.
It’s complicated, some parts more than others, but if you’re still at the “wow guys secrets are not really secret!1!” level I’m not sure how much you can really bring to the table in a discussion about k8s.
That only works to access the service from other pods inside k8s, it doesn't help you make that service accessible to the outside world. Tell me how you'd run Dovecot on k8s?
> if you’re still at the “wow guys secrets are not really secret!1!” level
I'm just pointing out one (of many) extremely obvious warts on k8s. You act like this is some misconception on my part, but it's not that silly to assume that a secret would be, well, secret.
But to answer your smarm, yes, I've used Kubernetes in anger: my last company was all-in on k8s because it's so trendy, and it was (IMHO) an absolute nightmare. All kinds of Stockholm-syndrome engineers claiming that writing thousands of lines of YAML is a great experience, unable to use current features because even mighty Amazon can't safely upgrade a k8s cluster....
> That only works to access the service from other pods inside k8s, it doesn't help you make that service accessible to the outside world. Tell me how you'd run Dovecot on k8s?
Quite the opposite. I’d re-read the docs[1]. Specifically this page[2]. If you’re on AWS you’d probably wire this up with a NLB and a bit of terraform if you’re allergic to YAML. Seems like a 5 minute job assuming you have Dovecot correctly containerized.
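For the record, the Service object in question is small; a rough sketch with the Python client rather than YAML or Terraform (the dovecot name, labels, namespace, and ports are assumptions):

```python
from kubernetes import client, config

config.load_kube_config()

# Expose an assumed Dovecot pod (label app=dovecot) on IMAP/IMAPS.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="dovecot"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",          # asks the cloud provider for an external LB (e.g. an NLB on AWS, via annotations)
        selector={"app": "dovecot"},  # assumed pod label
        ports=[
            client.V1ServicePort(name="imap", port=143, target_port=143),
            client.V1ServicePort(name="imaps", port=993, target_port=993),
        ],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

On bare metal you'd swap the type for NodePort (or something like MetalLB) and put your own load balancer in front, which is exactly the sticking point upthread.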
> I'm just pointing out one (of many) extremely obvious warts on k8s. You act like this is some misconception on my part
It’s hard not to point out misconceptions like the one above.
> ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
Internal only.
> NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Nobody uses NodePort to expose external services directly, and I think you know that.
> LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
As I mentioned above in this thread, requires cloud provider Load Balancer.
> ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
> Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.
This one's a new one to me, and apparently relies on some special new widgets.
Anyway, if you love k8s, I'm sure you'll have a profitable few years writing YAML. Enjoy.
> This one's a new one to me, and apparently relies on some special new widgets.
ExternalName has been around since 2016.
> Nobody uses NodePort to expose external services directly, and I think you know that.
Sure they do. Anyone using a LoadBalancer does this implicitly. If you don’t want k8s to manage the allocated port or want to use something bespoke that k8s doesn’t offer out of the box then using a NodePort is perfectly fine. You can also create a custom resource type if you’ve got some bespoke setup that can be driven by an internal API.
The happy path is using a cloud load balancer, because that’s what you’d use if you are using a cloud provider and you’re comfortable with k8s wiring it all up for you.
Has your criticism of k8s evolved from “I’m unclear about services” to “well yes it supports everything I want out of the box but uhh nobody does it that way and therefore it can’t do it”?
My criticism of k8s is that it's an absolutely batshit level of complexity[0] that somehow still fails to provide extremely basic functionality out of the box (unless paired with a whole cloud provider ecosystem, but then why not just skip k8s and use ECS???). I don't think k8s solves real problems most developers face, but it does keep all these folks[1] getting paid, so I can see why they'd advocate for it.
Nomad is vastly superior in every way except for mindshare; Elastic Beanstalk or Dokku is superior for most normal-people use cases.
> Nobody uses NodePort to expose external services directly, and I think you know that
I do. It provides a convenient way to integrate our non-k8s load balancers (TCP haproxy tier with a lot of customization) with services on kube. This is good for reusability and predictability while we slowly migrate services from our prior deployment targets to k8s.
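For what it's worth, the shape of that pattern is roughly a NodePort Service with a pinned port, which an external haproxy tier can then target at <NodeIP>:<port> on any node. A sketch with the Python client (the service name, labels, and the 30993 port are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()

# NodePort Service so an external (non-k8s) load balancer can reach the pods.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="imap-backend"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "imap"},  # assumed pod label
        ports=[
            # Pin the node port so the haproxy backend config can stay static.
            client.V1ServicePort(name="imaps", port=993, target_port=993, node_port=30993),
        ],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```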
Your points mostly only matter if you're running on bare metal. If you're in the cloud, then you've got load balancers and databases covered by your cloud provider. I don't need Kubernetes to handle problems that I already have great solutions for. I want it to handle the problems that my cloud provider provides poor or very specialized (i.e. lock-in) solutions for. Which for me it does very well, and a lot more easily than doing so without Kubernetes.
edit: Kubernetes secrets are also either good enough (ie: on par with Heroku) or your cloud provider has an actual proper KMS.
> For over 90% of workloads Kubernetes is overkill.
It's not. Take any simple web app and deploy it into managed GKE with Anthos, and you automatically get SLIs and the ability to define SLOs for availability and latency with a nice wizard. Takes a few minutes to set up.
The amount of engineering needed to achieve good SLO monitoring dwarfs the engineering needed to run a simple app, so it just never happened. That's no longer the case.
> Only when a company is reaching Google scale does Kubernetes make sense.
Also obviously not true, given the number of companies deriving value from Kubernetes.
Your statement already supports the point that without the blessing of the engineering teams behind Amazon, Google, Microsoft, DigitalOcean, and the various managed Kubernetes services, it's impossible for a reasonably small team to manage and monitor k8s, and all of these services come with lock-in and additional capital outlay.
Obviously, for a Google Cloud Partner, the more people are tied to GCP and Kubernetes, the higher the revenue. Whether an application really requires k8s is secondary.
I'd imagine most small to medium companies would be running things on a cloud service using managed kubernetes. It seems mostly larger companies that are sticking with non-cloud hardware and services.
The advantage of Kubernetes in that case is that there is a large ecosystem of Helm charts, guides, documentation, etc. Deploying something new from scratch is fairly easy, since someone else has done all the legwork and packaged it all up.
Just because something is packaged does not mean it's usable. YMMV, but this is how security horror stories start: someone ran a container with no idea where it came from, or happily used a Helm chart. Most of the time it's not even malicious; it's outdated software because "it just works".
In my experience all the common Helm charts and Docker images are regularly updated. If you don't update your installation of them, then you also wouldn't update a Docker Compose or LXD setup.
I don't know about this. I caution people against microservices architecture all the time, but at my company having a scheduler just made sense. We spin up and down queue workers by massive amounts every hour, and doing this with anything besides a scheduler would be really tricky.
Granted, we use Nomad, not k8s, but we definitely need a scheduler and definitely are not reaching Google-scale.
They'd have to re-license Open Distro to SSPL instead of Apache 2.0 to remain compliant, but I don't know if Amazon would be willing to do that. Considering they have to solve the hosted Elasticsearch problem anyway, I think either a fork or a one-off deal with Elastic is on the horizon.
It means that any “derivative work” will need to also open the management layer under SSPL. The management layer is AWS, so this puts OpenDistro in a tight spot. I’m not sure forking would work - as Elasticsearch evolves Amazon would not be able to copy features anymore. In the search space, this would be a very hard pill to swallow.
The source code for the various components that make up Open Distro is already freely available under an Apache 2.0 license. This change will have zero direct impact on Open Distro. The SSPL restrictions apply when the licensed software is used to provide a service.
It is the AWS Elasticsearch Service that will be directly impacted. It will be limited to Elasticsearch 7.10.x as a foundation. Unless of course AWS makes available the code that they use to orchestrate and manage that service. Assuming that that code is sufficiently uncoupled from other systems, they could perhaps do exactly that. It would certainly be an entertaining counter-move from AWS.
It looks like the way the license is written, they would have to release the source of the entire AWS console, and possibly everything that is AWS.
IMO, the SSPL's cloud provision is a "Japanese no": it is so broad in its possible interpretation that only the Eclipse Foundation could actually provide such a service.
They'd still be able to innovate on their own fork, but they will no longer be able to pull future upstream Elastic code into it, so it becomes more of a hard fork. If Elastic continues to innovate, it will be difficult for Open Distro to remain competitive with it.
I don't think AWS really cares. They "bought" (the devs of), and killed, Blazegraph to make Neptune. And since then, Neptune hasn't really evolved much either, so both projects are stagnating. Enough to say they can do "graphs", but not enough to really do anything useful...
Yeah at my last gig we used AWS’ offering. Blue green deployments would hit a race condition and the new cluster would never come up, requiring a ticket to be filed. Important APIs and settings were disallowed.
Horrible service. That’s why it’s ironic that Elastic just shot themselves in the foot with this licensing change. So foolish.
The SEC trusts them to regulate themselves with the understanding that they had better run a tight ship, or the heavy hand of government will come down.
Yes, I think that would be a valid way to bypass the protection.
With physical access you can bypass just about any protection given enough money and time. In a data centre context, the damage you can do is limited because the amount of capital and time required rises rapidly the deeper into the DC you need to get.
The more important change is that without this feature, malware could theoretically install itself into the firmware without requiring physical access. Now it should be just about impossible to break the chain of trust without a person physically tampering with the machine.
Note: I should mention that I think this is such a massive double-edged sword (maybe "double-edged shield" is a better term). It lets you build a threat model that accounts for everything up to physical access. However, it also has such massive potential to become an incredibly anti-consumer feature that I fear to see how it will be used. I wish they had required a physical switch to enable/disable the feature. I do, however, understand how adding such a switch could complicate the implementation quite a bit.