tarequeh's comments | Hacker News

Congratulations to the Django team. Having used Django for many of my projects over more than a decade, it's exciting to see that the Django community is still going strong and that the core members continue to push out significant releases such as the 3.2 LTS.


Beautifully written and thought-provoking. It's interesting how easily strangers can ask such personal questions. I wonder how different the interactions would have been if, instead of the eye patch, the author had worn a pair of trendy non-prescription glasses with a clear left and a dark right lens.


It's hard for me to understand why NPR would give up their journalistic freedom to receive the briefing one day early. Is it really a deal-breaker to report an FDA announcement a day later than some others?


In a perfect world, NPR would have published their article a day late, with the proper quotes from those affected, and would have explained in a footnote that this information would have been out a day earlier except that NPR could not ethically agree to the FDA's terms, which were blah, blah, blah.


Then, for the next story, the FDA wouldn't even tell NPR about the possibility of the briefing, so NPR would start researching its story only when other news organizations published theirs. NPR would release its story several days after the outlets that had the scoop, and, fairly likely, its article would sink and vanish.


Handling errors over a REST API is something I've struggled with. What's the best way to handle them? Data validation errors are different from system/server errors, and it's tough to establish a universally applicable error response structure.

I used to be in favor of sending 200 responses with error codes, but I'm now gravitating back towards relaying the most relevant HTTP error and letting clients handle it.
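For what it's worth, the shape I keep coming back to is a small fixed envelope alongside whatever HTTP status fits best; the field names and codes here are purely illustrative, not any standard:

    # Hypothetical error envelope returned alongside the HTTP status code.
    # "code" is a machine-readable application error; "details" carries
    # per-field validation messages. Names are illustrative only.
    error_response = {
        "error": {
            "code": "VALIDATION_FAILED",
            "message": "The request body failed validation.",
            "details": [
                {"field": "email", "issue": "not a valid address"},
            ],
        }
    }
    # Sent with 422 (or 400) for validation errors, and a 5xx with a
    # generic code for unexpected server errors.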


Any app of decent size will probably end up outgrowing HTTP's error codes. 200 + error is OK, except it can mess with caching. A 5xx with details in the response body is fine.

You might be tempted to map some errors, like "item not found" to 404, and so on. But you still need to provide the real error code. So you're not gaining much.

Honestly, I don't get the obsession with using HTTP features to represent part of an API. It never saves work; you're writing a custom client each time anyway. From a pure code perspective, you're going to deserialize either the body or a context object from a header. Moving that data into multiple headers can only require more code, not less. Same for verbs: I've never gotten any benefit beyond GET-for-read, POST-for-write.

Elasticsearch is a good example. The URL space is overloaded with special things, it allows you to create objects you can't reference, and so on. They use verbs, except you still have extra parameters tacked on. There's zero benefit to me, the user, in them making it REST-like.

Maybe if someone creates a machine-usable spec for REST like WSDL (just "simpler"), then all these HTTP headers could be put to use.


The advantage is that there is some level of standardization.

404? That means the entity doesn't exist. 302? I should look somewhere else. 401? The server doesn't know who I am.

Accept? I can specify the format. ETag? I can get a faster response if I include the token in the next request.
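To make the ETag point concrete, here's a tiny conditional-GET sketch using Python's standard library; the URL, ETag value, and cached body are made-up placeholders:

    import urllib.request
    from urllib.error import HTTPError

    url = "https://api.example.com/widgets/42"      # placeholder URL
    cached_etag = '"abc123"'                        # validator from last fetch
    cached_body = b"...previously fetched representation..."

    # Ask the server to skip the body if our copy is still current.
    req = urllib.request.Request(url, headers={"If-None-Match": cached_etag})
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()                      # fresh copy
            cached_etag = resp.headers.get("ETag")  # remember the new validator
    except HTTPError as e:
        if e.code == 304:
            body = cached_body                      # server says: reuse the cache
        else:
            raise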

This stuff is really, really common, and people can learn your API very quickly. A transparent caching server can improve performance.

Sure, with a custom protocol you can get a tighter system. Hell, write your own transport layer for even more control. But it will take longer to learn and be harder to interoperate with.


The time spent figuring out that a 404 in this case means "the object ID wasn't found" versus "this path doesn't exist" pretty much negates any benefit - you still have to include sub-codes. Same for "access denied because the token expired" vs "invalid token". Not to mention all the stuff that'll get crammed into 400/500/503.

If your app is simple enough that all errors map 1:1 to HTTP, great. Or if it doesn't need that level of error management. Otherwise HTTP just confuses the issue.
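Concretely, you end up with something like this anyway: the HTTP status is just a coarse bucket, and the sub-code in the body is what the client actually branches on (all names here are made up):

    # Both of these go out as 404 at the HTTP layer; the client still has
    # to read the body to know which situation it's actually in.
    OBJECT_NOT_FOUND = {"status": 404, "code": "OBJECT_NOT_FOUND",
                        "message": "No widget with id 42"}
    ROUTE_NOT_FOUND  = {"status": 404, "code": "ROUTE_NOT_FOUND",
                        "message": "No such endpoint: /widgetz/42"}

    # Same story for auth failures: one 401, several distinct causes.
    TOKEN_EXPIRED = {"status": 401, "code": "TOKEN_EXPIRED"}
    TOKEN_INVALID = {"status": 401, "code": "TOKEN_INVALID"}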


So, you just want to explain the error further? Wonderful. RFC 2616:

> the server SHOULD include an entity containing an explanation of the error situation

---

The 3-digit status code tells consumers (1) the status category (success, redirect, client error, server error) and (2) a more specific status within that category. It does that in a way that doesn't require me turning to your API docs every 3 seconds.
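Which also means a completely generic client can do a first-pass triage on the status class alone; a sketch, assuming nothing about any particular API:

    # Coarse handling driven purely by the status class; no API docs needed.
    def triage(status: int) -> str:
        bucket = status // 100
        if bucket == 2:
            return "success"
        if bucket == 3:
            return "follow the redirect / use the cache"
        if bucket == 4:
            return "our request was wrong; don't blindly retry"
        if bucket == 5:
            return "server-side problem; retry with backoff"
        return "unexpected"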


WSDL-like descriptors for REST-like APIs:

[1] Swagger: http://swagger.io/

[2] RAML: http://raml.org/

[3] WADL: https://www.w3.org/Submission/wadl/


A big reason for using HTTP error codes and methods is transparency—you can easily see what's happening from looking at the server log.


Send the most appropriate HTTP status code, along with an error resource the client claims to understand (via Accept-header content negotiation).

If the client doesn't declare that it accepts a media type you produce, you send your fallback error format, which could be anything you choose: text/plain, some custom format you design, or some generic hypermedia type that defines an error field.
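A rough, framework-agnostic sketch of that negotiation in Python; the helper name, the supported media types, and the fallback are all just examples:

    import json

    # Error representations this (hypothetical) API knows how to produce.
    SUPPORTED_ERROR_TYPES = ("application/problem+json", "application/json")

    def render_error(accept_header, status, detail):
        accepted = [part.split(";")[0].strip()
                    for part in (accept_header or "").split(",")]
        for media_type in SUPPORTED_ERROR_TYPES:
            if media_type in accepted or "*/*" in accepted:
                body = json.dumps({"status": status, "detail": detail})
                return status, media_type, body
        # Fallback for clients that accept none of the above.
        return status, "text/plain", f"{status}: {detail}"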


Perhaps look at HTTP Problem Details? https://tools.ietf.org/html/rfc7807
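For anyone who hasn't seen it: the body it defines (served as application/problem+json) looks roughly like this. The type/title/status/detail/instance members come from the RFC; the last member is an example of an extension member you'd define yourself:

    # An RFC 7807 "problem details" body, shown here as a Python dict.
    problem = {
        "type": "https://example.com/probs/validation-error",  # example URI
        "title": "Your request parameters didn't validate.",
        "status": 400,
        "detail": "The 'age' parameter must be a positive integer.",
        "instance": "/users/12345",
        "invalid_params": [{"name": "age", "reason": "must be positive"}],
    }
    # Served with: Content-Type: application/problem+json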


You could build a video conversion website where one uploads the original high-resolution video and it spits out 1080p/720p/320p and other formats suitable for delivery to different devices and bandwidths. This could be an alternative for people hosting video on YouTube and getting slapped with ads. An effort like this would use a lot of CPU, but once a video is converted it's just the storage cost. The common challenge is copyright issues, but I can see ways to promote it as a professional service rather than a collection of random videos. $100K would cover the cost of offering it for free for a limited period.


As strange as it sounds, that much bandwidth would probably suck up $100k faster than you'd imagine. There's a reason a lot of large companies that deal in video have their own data centres and crazy bandwidth deals that make it cheap. I don't think $100k at Amazon's prices would last more than a few months if the service became popular.


That sounds pretty much identical to Zencoder, encoding.com, etc.


This is a difficult problem a lot of us face. I personally have a habit of glancing through the hundreds of articles that pop up in my RSS reader every day. Often, after reading a handful of them, each providing certain bits and pieces of knowledge, I feel it would have been better to focus on a single topic and learn more about it. That's where I miss books. Although I have many of them sitting on the shelf right next to me, it has become more difficult for me in recent years to pick one up and start reading. I think it's time for a change. Thanks OP for sharing this.


One of the great tests for me personally is whether I can even remember the last 10 headlines or articles I read. The dopamine hit is in the discovery, not necessarily the comprehension.


L-Theanine. I recently went through an interview spree, and it helps me get rid of nervousness and focus on solving problems. It's naturally found in tea leaves, but obviously in small doses. I can't confirm side effects, but it sometimes gives me a headache if I drink beer on the same day, particularly IPAs!


If you get good quality sencha, especially the (somewhat pricey) Gyokuro, you get a high dose of L-theanine and an absolutely delicious taste. One of my favorite pleasures in life!


For a second I tried to imagine a satellite view of the earth with many of these flying. It'd look mesmerizing. I really applaud Google for funding research on affordable renewable energy. It's unfortunate that the scrolling doesn't work, but hey, the prototype does!


You mean an airplane view? From the top down I imagine they wouldn't look very interesting.


Really cool exercise in CSS. My first thought is that text displayed using this font can't be selected/copied and won't be screen-reader friendly. It might make for a great way to display your email or phone number on a site without worrying that it'll get scraped by marketers.


Great idea. You could even generate a random mapping of the letters -> CSS classes on each load, making it very difficult for bots to decipher.
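Something along these lines, as a rough sketch; the class-name scheme, character set, and markup are all arbitrary choices:

    import random
    import string

    # Hypothetical per-request obfuscation: assign each character a random,
    # meaningless class name, then mark up the text with those classes
    # instead of literal characters.
    def build_mapping(chars=string.ascii_lowercase + string.digits + "@."):
        names = [f"c{random.getrandbits(32):08x}" for _ in chars]
        return dict(zip(chars, names))

    def to_markup(text, mapping):
        return "".join(f'<i class="{mapping[ch]}"></i>' for ch in text)

    mapping = build_mapping()
    html = to_markup("me@example.com", mapping)
    # The CSS (not shown) would give each generated class the glyph shape
    # for its character, so a scraper has to interpret the CSS on every load.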


Would it really though? Bots can beat CAPTCHA.


Obviously it won't make it impossible to decipher, just harder (and perhaps not worth the effort). If the order of the declarations is also randomized and useless attributes are added, they'd pretty much have to compile the CSS and compare the attributes.

The alternative way to beat all email obfuscation techniques is of course taking a snapshot and running OCR on it, but certainly not worth the effort/CPU time.


Not as much effort as you might think. Command-line rendering of HTML (including CSS and JS) and taking a screenshot is not very difficult. If a site used this method and someone were highly motivated to break that captcha, the effort put in would be nothing compared to the benefit of bypassing the "security".


They could just make a map of the class attributes. It would be easy.


I thought this too at first, but then I realized that an image would do the job just as well.


Seriously, this is the stupidest thing I've ever seen. Why are people pretending this would be useful in any way at all?

I see how it's a fun exercise in CSS and everything, but actually providing it as a download, as if people will use it productively, is stupid.


>this is the stupidest thing I've ever seen

Lol obligatory XKCD!

http://xkcd.com/1497/


And if you want an infinitely scaling image, you could even use an SVG.


Actually, you could put a character inside the div and use the old "text-indent: -9999px;" trick: copy and screen readers fixed.


Screen readers fixed, but how are you going to select to copy?


If you can select to copy, it's no different than a screen reader. There's zero usefulness to this, and it's embarrassing that it's on the front page of Hacker News right now.


I have been trying to find out whether the single-stroke text produced by the 'Hershey Text' extension in Inkscape is visible to screen readers/scrapers/bots. This type of font is used in blueprints.


Nice idea. Even just preventing the information from getting sucked up during search engine indexing and cached in the page description could be a valuable side effect.


It made me think that it's already wearing a space suit; no wonder it survives in space.

