Funnily enough, that was my computer science capstone project back in 2010!
I don’t know if our project sponsor ever got the company off the ground, but the basic idea was an automated system to scare geese off of golf courses without also activating in the middle of someone’s backswing.
If someone could sell it for $100, they'd make some serious money. The birds are fouling my pool, and the plastic owl does nothing. Right now I'm thinking it should make a loud noise or launch a tennis ball at random. The best part is that I can have it disarm when it sees a person.
My thought is just to rent it out to rich folks with lawns for a few hundred bucks a week. My contraption will have thermal detection, AI target discrimination, and precision targeting with a laminar-flow water stream. That’s the plan, anyway.
It's not. The HTTPS site sends the HSTS header, so your browser will always redirect to the HTTPS version even if you try the plaintext port. You'll have to clear your browser cache or try another browser.
It allows servers to specify that browsers should never even attempt to make an unencrypted request to the site and instead silently convert any such requests to encrypted requests.
This header is good for security but it’s also convenient for old sites that don’t want to update their existing links. They can upgrade the whole site to HTTPS without any content changes.
That stands for HTTP Strict Transport Security. It's an HTTP header that basically tells your browser to only connect to the website via HTTPS/TLS for a configurable amount of time.
It's a protection mechanism that prevents encryption-stripping man-in-the-middle attacks.
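For reference, the header is a single line; a typical (illustrative) value looks like:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

`max-age` is the number of seconds the browser should refuse plaintext connections to the host (here, one year), and the optional `includeSubDomains` directive extends that policy to every subdomain.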
Don’t do this. What if something strips the header between the client and your server? Always upgrade to HTTPS. Supporting 25-year-old browsers isn’t worth the risk.
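For example, with nginx the always-upgrade setup looks roughly like this (a sketch; the server name, certificate paths, and `max-age` are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;
    # Redirect every plaintext request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;
    # Tell returning browsers to skip the plaintext hop entirely for a year
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```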
That depends on what your website is. If it's for something commercial or sensitive, then yeah, HTTPS-only is fine. But if it's something of yours (and isn't just done to get you hired), then the downsides of HTTPS-only outweigh the benefits. HTTP+HTTPS is perfect for human persons even if it's not for corporate persons.
You're basically making it so that people can only visit your site if a third-party corporation wants to maintain an account with you. There are benign organizations like Let's Encrypt, but it still means giving up control to an entity that will eventually go bad. Just look at what happened to dot org.
And of course you prevent even moderately old systems from interacting with your web server. Depending on your accepted TLS cipher set, you're probably excluding software from as late as 2017 by going HTTPS-only.
It's like wearing level 3 body armor when you go out to the park to walk the dog. There are some people whose lives make that necessary, but it really isn't for most, and the downsides outweigh the admittedly very strong protection.
> You're basically making it so that people can only visit your site if a third party corporation wants to maintain an account with you.
I don’t know about you but people can only visit my site if a “third party” maintains an account with me… and that third party is my ISP.
The web, even self hosted sites, isn’t some direct person to person contact network. It relies on a wealth of protocols and a community backing it.
Now, to be fair, I do upgrade everyone, but I don’t do so because of security concerns. I do it because the protocol inconsistency occasionally shows up in my logs, and sometimes browsers block APIs based on whether you are on HTTPS or not. It’d be nice if they didn’t, but browsers are yet another third party in between my servers and my end user.
> What if something strips the header between the client and your server?
Then that something would be equally likely to intercept your initial HTTP request and serve you a TLS-stripped version of the website.
The real solution to this problem is for browsers to never implicitly make plaintext HTTP requests via the address bar. In general, they have become too clever in interpreting the content of the address bar. Firefox, for example, will gladly change the name and try a variety of protocols for the sort-of-address I'm requesting if it doesn't get a response to its initial request. I don't know if it's still the case, but it even used to blindly append ".com" to the name you entered in some cases, going so far as to request an entirely different domain.
I don't know what name will be resolved or what protocol will be used, and it may depend on network conditions (for example, Firefox will add "www." to the URL if the server happens to be down the moment I request it).
This makes the address bar unpredictable, unreliable and unsafe. It is beyond me why it has been made such a complex problem. I guess it's more forgiving? I am wary of software that so readily trades security for convenience.
I'm so sorry, I'm not 100% sure I understand this comment.
Would you mind explaining what you mean by asking others to adjust for your sake in the context of mobile web accessibility? I'm having trouble connecting it to the content of the article. I'm not the writer, but would love to better understand your point of view here.
I’ve been living in backend-land for so long I’ve never actually used the <picture> tag. I will have to take a stab at it and see how the legacy browsers treat it, because if I don’t have to use GIFs, I won’t.
As for the @media tags, I do utilize them to a degree, just to make everything render nicely on mobile and to support dark-mode. But (to put it cheekily) I’m more concerned with backwards compatibility than forwards compatibility :p
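From a quick look, the fallback story for `<picture>` seems to be built in, so something like this (filenames made up) should degrade gracefully:

```html
<picture>
  <source srcset="demo.avif" type="image/avif">
  <source srcset="demo.webp" type="image/webp">
  <!-- Browsers that don't know <picture> or <source> ignore those tags
       and just render the plain <img> fallback -->
  <img src="demo.gif" alt="demo animation">
</picture>
```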
> As for the @media tags, I do utilize them to a degree, just to make everything render nicely on mobile and to support dark-mode. But (to put it cheekily) I’m more concerned with backwards compatibility than forwards compatibility :p
I definitely recognized that! My thought was to take that backcompat focus further and relegate whatever forward compat you do choose to support not to @media queries, but to the media attribute on link tags[1]. Why should HTML 4 browser users download dark-mode CSS they can’t use? ;)
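Concretely, something like this (filenames made up):

```html
<link rel="stylesheet" href="base.css">
<!-- Only applied by browsers that understand the media query;
     an HTML 4 era browser sticks with the base stylesheet -->
<link rel="stylesheet" href="dark.css" media="(prefers-color-scheme: dark)">
```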
> Externally linked sources introduce a backward-compatibility problem of their own. E.g., I recall early versions of IE only supporting inline JS. So, if you want to support as much as possible as far back as possible, inlining is the way to go.
That was part of the reason for inlining the CSS. The other part (which I didn’t explain in the post) actually came about because of Mosaic.
Even though it didn’t support CSS, it was aware of link tags and added a button to the top of the window for each one (literally linking to the referenced file). I couldn’t dig up a way to disable that within the code, so I went with the commented-out inline method to get the experience I was looking for.
Regarding Mosaic: One way around it would have been outputting the stylesheet links via JS `document.write()` and user-agent filtering, since every browser that supports CSS also supports JS. Anyways, an interesting detail about Mosaic!
P.S.: Now I'm not sure if this would generally work in the head section, since in older browsers the `document` object only became available once the body tag was encountered. Or was that just for the properties, with `write()` available anyway? (This behavior changed with NS4/IE4.)
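Roughly what I had in mind (untested, and the filename is made up). Here the old comment-hiding trick stands in for user-agent filtering, since Mosaic never runs the script at all; the comment just keeps it from rendering the script body as text:

```html
<head>
  <script type="text/javascript">
  <!-- hidden from browsers that don't know <script>, e.g. Mosaic
    document.write('<link rel="stylesheet" href="style.css">');
  // -->
  </script>
</head>
```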
I'm inclined to agree. Without any evidence or context, these arguments rarely serve any purpose, and are most often driven by strongly-held opinions rather than objective analysis.
First of all, congrats! Whether you accept it or not, it's always an honor to be recognized.
I did a technical review for a small O'Reilly e-book, so my experience is probably a "lite" version of the process, but it mostly consisted of general fact checking and verifying the completeness of the content. The process was pretty straightforward: they sent me the book, I read it a few times, added relevant comments to the document, and sent it back.
Overall a positive and interesting experience that I would definitely do again.
It is surprisingly easy for us to rebrand into different industries. We did a test with the golf course industry a little while ago and it only took a few hours to go from idea to working product.