
My understanding (probably wrong) is that PCMCIA was based on the ISA bus, then PC Card updated things to PCI, and ExpressCard was PCIe.


Close! The PC Card rename was because people were confusing the name of the association with the specific form factor.

PCMCIA and PC Card = ISA

CardBus = PCI and ISA; the slot was backwards compatible, so you could use a PC Card in a CardBus slot

ExpressCard = PCIe


That's also not a perfect recollection, but it's what mine was too, until I was looking up this history in the past week and found this nugget and posted it elsewhere. Quoting myself:

>So we know these were originally called PCMCIA cards, then later PC Cards, right? Well, I think I might have found the first mention of PCMCIA in PC Magazine. It is in a Dec 1991 column by Dvorak where he "introduces" the "PCMCIA PC-Card". Here's a quote, "In fact, the card should be referred to as the PCMCIA PC-Card, or the PC-Card for short. PCMCIA is the Personal Computer Memory Card International Association (Sunnyvale, Calif., 408-720-0107), and it's the governing body that has standardized the specifications for this card worldwide. JEIDA works with the PCMCIA; its specifications are identical."

>So at least according to this Dvorak column, these were ALWAYS properly called "PC-Cards" (he used a hyphen), but early on people definitely were calling them PCMCIA cards, and I remember the shift to everyone later (much later than this 1991 column) calling them PC Cards.


Neat, definitely a part of history that I'm not familiar enough with myself, since I was only ~6 or so when the article was published.

It definitely seems to reinforce the joke backronym of "People Can't Memorize Computer Industry Acronyms" for the whole thing, given how badly it was all referred to. It's a lot like the whole Clippit/Clippy situation with the Microsoft Office assistants: originally it was only named Clippit, but "Clippy" got coined by everyone else, and even Microsoft ended up giving in and using it in marketing materials not long after the fact.


Ah, completely forgot about CardBus. That was a fun time, when we also had NuBus kicking around on some older Macs.


The Pi 3B doesn't have UEFI support, so it requires special support on the distro side for the boot process. For the 4 and newer, though, you can flash firmware to the board (or it'll already be there, depending on luck and the age of the device) that supports UEFI and USB boot, though installing is a bit of a pain since there are no easy images to do it with. https://wiki.debian.org/RaspberryPi4
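
One handy sanity check once it's set up: Linux only exposes /sys/firmware/efi when the kernel was actually booted through EFI firmware, so checking for that path tells you whether the UEFI route worked. A minimal sketch; nothing Pi-specific about it:

    # Minimal sketch: did this Linux system boot via UEFI?
    # The kernel creates /sys/firmware/efi only on EFI boots, so this
    # works the same on a Pi 4 running the EDK2-based community
    # firmware as it does on an ordinary PC.
    from pathlib import Path

    def booted_via_uefi() -> bool:
        return Path("/sys/firmware/efi").exists()

    if __name__ == "__main__":
        print("UEFI boot" if booted_via_uefi() else "non-UEFI boot")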

I believe some other distros also have UEFI booting/installers set up for Pi 4 and newer devices because of this, though there's a good chance you'll still want some of the other libraries that come with Raspberry Pi OS (aka Raspbian) for some of the hardware-specific features, like CSI/DSI and some of the GPIO features that might not be fully upstreamed yet.

There's also a port of Proxmox called PXVirt (formerly Proxmox Port) that exists to use a number of similar ARM systems as virtualization hosts, with a nice UI and automation around them.


Pretty reasonable place to start. I'm curious how it would fare in an EMC/RF test.


It is a reasonable place to start. So much so that autorouters have been around for practically as long as computers have, and they've been better at it than people for most of that time.

The only reason people usually route PCBs by hand is that defining the constraints for an autorouter is generally more work than just manually routing a small PCB; within semiconductors, autorouting overtook manual routing decades ago.


It is surprising (or not?) that there is such a vast gulf in automated tooling between the semiconductor world and the PCB routing world.

I guess maybe there are fewer degrees of freedom and more 'regularity' in the semiconductor space? Sort of like a fish swimming in an amorphous ocean vs. having to navigate uneven terrain with legs and feet. The fish, in some sense, is operating in a much more 'elegant' space, and that is reflected in the (beautiful?) simplicity of fish vs. all the weird 'nonlinear' appendages sticking out of terrestrial animals; the guys who walk are facing a more complicated problem space.

I guess with PCBs you have 'weird' or annoying constraints like package dimensions, via size, hole size, trace thickness, limited layer count, etc.


Semi by hand got out of hand (HA!) in the nineties. There is simply too much work for humans (millions of transistors), so we swallow the performance hit. Synthesis puts stuff together from human-optimized basic building blocks. Same reason FPGA tools quickly advanced from schematic input to hardware description languages.

With PCBs it's all still quite manageable; even something like a whole PC motherboard is easily doable by two or three EEs specializing in different niches (power, thermals, high-speed digital design).


It's the opposite; semiconductor design is so full of constraints that it's difficult to do manually. Usually individual transistors and gates are laid out manually, then manufactured in a test chip, and empirically analyzed in every way possible, then those measurements are fed into the router, and it creates a layout of those blocks that allows enough signal propagation to work at the high speeds used in modern semiconductors.


For IoT myself, I'm wondering if it's something that could be thrown into the Matter side of things: make the hub/border router act as an ACME server with its own CA that gives out mTLS certs, so the devices can validate the hub and the hub can validate the devices. It'd never be implemented properly by the swarms of cheap hardware out there, but I can dream...
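
To sketch what the device side might look like (assuming the hub's CA cert was provisioned at pairing time; the hostname and file names here are made up), Python's ssl module can pin a private CA and present a client cert for exactly this kind of mutual validation:

    # Hypothetical device-side mTLS: pin the hub's private CA and present
    # our own hub-issued client cert. All paths and hostnames are made up.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("hub-ca.pem")                   # trust only the hub's CA
    ctx.load_cert_chain("device-cert.pem", "device-key.pem")  # prove who we are

    with socket.create_connection(("hub.local", 8443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="hub.local") as tls:
            # If the handshake succeeds, we validated the hub against its CA,
            # and a hub configured with verify_mode=CERT_REQUIRED validated us.
            print(tls.version(), tls.getpeercert()["subject"])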


But why?

There's no reliable source of truth for your home network. Neither the local (m)DNS nor the IP addresses nor the MAC addresses hold any extrinsic meaning. You could certainly run the standard ACME challenges, but neither success nor failure would carry much weight.

And then the devices themselves have no way of knowing your hub/router/AP is legitimate. You'd have to have some way of getting the CA certificate on to them that couldn't be easily spoofed.

EDIT: There is a draft for a new ACME challenge called dns-persist-01, which mentions IoT, but I'm not really sure how it helps that use case exactly: https://datatracker.ietf.org/doc/html/draft-ietf-acme-dns-pe...


Largely management, observability, and then the way that Docker mucks with firewalls. Running them this way will allow Proxmox to handle all that in the same way (I assume) as the LXCs and VMs, so automation and all the rest can be consistent.


Likely due to areas that still have only 2G coverage. There's still a lot of that in rural USA.


Is that still true? Even 3G support was largely torn down in the US some years back.

https://www.eseye.com/resources/blogs/2g-3g-network-status-u...

https://www.pcmag.com/how-to/the-3g-shutdown-how-will-it-aff...


What areas are those? From some quick research, the only carrier left that provides 2G coverage is T-Mobile, but they're phasing that out this year.


Just because the map shows you can get 5G (or 4G) does not mean you'll actually be able to use that network. It's tricky, and telecom companies like to play these bullshit games. It's pretty similar to how they'll advertise "up to X Mbps" internet speeds while the average speed is far lower.

You'll actually have these experiences in congested cities. Ever go to a concert and realize you don't actually have cell service? That's because the tower is fully occupied. Unfortunately, phones might not report this to you and might not report the downgrade, which lets Android and Apple stay complacent...


Sometimes. Lots of companies will lock down WSL and similar because they can't as easily control what's running in it, for security or policy reasons. In those cases, something like this would be easier to audit and deal with since it's much more single-purpose.


Same use case for myself. One of the biggest advantages for me is that it gives me a single, easily tested place for users to reset passwords from when they inevitably forget or lose the post-it note. That, along with me using all the apps and not wanting to have to change 30 passwords for everything when something happens.

I went a bit more complicated myself with Keycloak instead of Authentik, simply because I knew Keycloak a little better, but setting up SSO for all the stuff I run has definitely been worth it.


Listen, there are Top Men in charge of keeping these things safe. Top Men.


This is one reason that I'm still upset about the failure that SCTP ended up being. It really did try to create a new protocol for dealing with exactly these issues, but support and ossification basically meant it was a non-starter. I'd have loved it to be a mandatory part of IPv6 so that it'd eventually get useful support, but I'm pretty sure that would have made IPv6 adoption even worse.
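
The sad part is the kernel side mostly still works where it's enabled. A minimal sketch, assuming a Linux box with SCTP support (socket.IPPROTO_SCTP only exists on platforms that define it):

    # Minimal sketch: a native one-to-one SCTP listener on Linux.
    # Requires kernel SCTP support (the sctp module); this is exactly
    # the kind of non-TCP/UDP traffic that middleboxes tend to drop.
    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    srv.bind(("127.0.0.1", 9899))
    srv.listen(1)
    print("SCTP socket listening on 127.0.0.1:9899")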


Well, we have QUIC now, which layers over UDP and is functionally strictly superior to SCTP, as SCTP still suffered from head-of-line blocking due to a bad acknowledgement design.


As long as you're fine with UDP encapsulation, you can definitely use SCTP today! WebRTC data channels do, for example.

