
Navigation satellites are inherently geo-limited. Can the Chinese system be used outside the Chinese mainland?


It depends on the orbit. AFAIK GPS, GLONASS, and Galileo have global coverage (my phone can see all of them in the US).


The Chinese system is BeiDou. If Wikipedia is to be believed, they will only achieve global coverage in 2020.

https://en.wikipedia.org/wiki/BeiDou


That is not my point. It can be geo-limited by the OS.

If !(in China) _disable_

Although even that would cause unnecessary headaches for hardware and OS vendors, and might still lead to new security issues and backdoors.
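
To make that pseudocode a bit more concrete, here is a purely hypothetical sketch of what such an OS-level check could look like (the function names and the crude bounding box are invented for illustration; no real OS exposes this API), which is also why the reply below jokes about putting location checks inside the location-determining code:

    # Hypothetical sketch only: a crude, illustrative geofence, not a real OS API.
    def roughly_in_china(lat, lon):
        # Very rough bounding box around mainland China, for illustration only.
        return 18.0 <= lat <= 54.0 and 73.0 <= lon <= 135.0

    def allowed_constellations(lat, lon):
        allowed = {"GPS", "GLONASS", "Galileo"}
        if roughly_in_china(lat, lon):  # needs a position fix to gate the positioning system
            allowed.add("BeiDou")
        return allowed

    print(allowed_constellations(39.9, 116.4))   # Beijing -> BeiDou enabled
    print(allowed_constellations(37.8, -122.4))  # San Francisco -> BeiDou disabled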


These headaches are presumably from the dizziness induced by introducing location checks inside the location-determining code.


Problem is, you (the chip-maker) pay license fees to the satellite owners; if your chip does not work outside China, no money. The Chinese 'GPS' is probably just for the military, or for independence from other systems.


Yes, but hardware vendors need to decide whether to put the chip in all devices and reduce manufacturing/design/testing costs, or put it only in a geo-specific model and incur higher manufacturing/design/testing costs.

It's always a tradeoff and the cost of the chip in itself is not the sole decision driver.


I like Unsplash, but I don't understand their business model. What makes it sustainable?


Unsplash is just an image hosting place. The photos are contributed by users. Unfortunately, they no longer have a CC0 license.


What about pixabay?


I don't have much visibility into that but the Squarespace ad on their homepage is absolutely a clue :)


Any idea where I can get data on POSIX utilities ranked by use/importance?


You could start with the list of what's included in BusyBox:

https://busybox.net/downloads/BusyBox.html

The goal of BusyBox is to provide a stripped-down POSIX userspace for use in embedded devices -- basically, to provide enough POSIX utils to be able to do some useful shell scripting.


Here’s a standard list of Unix commands: https://en.wikipedia.org/wiki/List_of_Unix_commands

It is probably more comprehensive than what you were after though...


Start with file management utilities such as ls, cp, mv and rm.


Heh totally want to install this on a PineBook and give it to elementary school kids as their first personal computer.


Agreed. I've been trying to figure out what to give my son as a first computer. This would be a neat option.

If he wants to watch videos on youtube, he is going to have to code himself a browser first.


If you can run youtube-dl (hint: written in python), you only need a video decoder and player. If pygame doesn't give you that, though, you're in for trouble big time, because video codecs are hard. :)
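
For what it's worth, youtube-dl can also be driven as a Python library, so the download half really is only a few lines. A hedged sketch (the URL is a placeholder, and the options shown are just the basics):

    # Minimal sketch of using youtube-dl as a library; the URL is a placeholder.
    # Downloading is the easy part -- playback/decoding is where it gets hard.
    import youtube_dl  # pip install youtube_dl

    options = {
        "format": "mp4",         # ask for an mp4 the player is more likely to handle
        "outtmpl": "video.mp4",  # output filename template
    }
    with youtube_dl.YoutubeDL(options) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])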


Also, he will need to type the whole program on the keyboard, like we did when we were kids copying tens of pages of BASIC from magazines.


Any other ideas so far? I’ve mulled over this question of best first computer for children. The best I’ve come across so far is a plain Linux installation with no desktop environment and a text-based typing program.


Looks useful, but it depends on PipelineDB, a PostgreSQL extension for streaming data. Unfortunately PipelineDB hasn't been updated since May 2019 [0] when they were acquired by Confluent [1]. The former PipelineDB team appears to be focused on Confluent's KSQL product [2]. There's an open source "ksqlDB" but it appears to depend on Kafka, so it's not a 1:1 replacement for PipelineDB[3].

[0] https://github.com/pipelinedb/pipelinedb

[1] https://www.confluent.io/blog/pipelinedb-team-joins-confluen...

[2] https://www.confluent.io/blog/confluent-cloud-ksql-as-a-serv...

[3] https://ksqldb.io/quickstart.html


Yes, I'm relying on the "continuous views" feature of PipelineDB, which is like auto-refreshing materialized views. I'm planning to swap PipelineDB for TimescaleDB in the near future.

Most of the heavy lifting is done by Postgres/PipelineDB, with Node.js as a simple wrapper, so it's performant and consumes fewer resources.

http://docs.pipelinedb.com/continuous-views.html

https://docs.timescale.com/latest/using-timescaledb/continuo...


Looks good, but needs additional specificity / testability.

> No hidden costs

What is a "hidden cost"? Sometimes pricing is complicated. For example, shipping and sales taxes vary based on the customer's location.

> No making it difficult to cancel/unsubscribe from a plan

Maybe: "An authenticated user must be able to review their past and upcoming charges within 2 clicks from the default view. This page must provide immediate options for cancelling/unsubscribing (2 additional clicks to allow for confirmation)."

Or weaker: "Users must be able to cancel/unsubscribe by any mechanism that they can use to sign up / subscribe. For example, if users can purchase a subscription on the website, they cannot be required to make a phone call to cancel that subscription."

> Automated emails to not self generated mailing lists/social platforms

I don't understand what this means. Are you trying to prevent the companies from using third-party advertising targeting? That seems like an unreasonable ask. It would prevent using Google/Facebook/Twitter for basic marketing tasks.

> No spammy follow up emails

This is not testable. It would be more valuable to identify quantitative best practices and publish those, e.g. "When a user cancels a subscription, do not send them more than 1 marketing email per month asking them to re-subscribe."

> Allow recipients to easily unsubscribe from mailing-list emails

This should be covered by the "no making it difficult to cancel" clause.


Can you provide a more detailed critique of Django’s “Fat Models” recommendation? How would you prefer to manage this logic?


TL;DR Django models are the database, which makes them the wrong choice for presenting a service-layer interface to the persistence layer. They are inherently unable to hide, encapsulate, and protect implementation details from consumers that don't care or shouldn't be allowed access.

The Django model is a representation of the database state. It's an infrastructure-layer object. It is _very_ tightly coupled to the database.

Your business needs should not be so coupled to the database! While it is very helpful for an RDB to accurately model your data, a database is not an application. They have different jobs.

(The TL;DR of the following paragraph is "encapsulation and interfaces".) Your business logic belongs in the "service layer" or "use case layer". The service layer presents a consistent interface to the rest of the application - whether that is a Kafka producer, the HTTP API views, another service, whatever. Your service layer has sensible, human-understandable methods like "register user", "associate device with user", whatever. These methods are going to contain business logic that often needs to be applied _before_ a database model ever exists, or apply a bunch of business logic after existing models are retrieved in order to present a nice, usable, uncluttered return value. Your service layer hides ugly or unnecessary details of the database state from the rest of the application. Consumers shouldn't care about these details, they shouldn't rely on them (so you can fix or change things without breaking the interface), and they very probably should not be given direct access to edit whatever they want.

If you do not do this and instead choose the fat-models method, all of the following will happen:

1. You will repeatedly write that business logic everywhere you use the models. You'll write it in your serializers, your API views, your queue consumers/producers, etc. You'll never write it the same way twice and you damn sure won't test it everywhere.

2. You'll get tired of writing the same thing and you will add properties or methods on the model. This is the Fat Model! This might be appropriate for a convenience property or two that calculates something or decides a flag from the state of the model, but that's it. As soon as you start reaching across domains and implementing something like "register device for user" on the user model, or the device model, you are just reinventing a service layer in a crappy way that will eventually make your model definition 4000 lines long (not even remotely an exaggeration).

3. Every corner of your application will be updating the database - via the model - however it wants. They will rely on it! Whole features will be built on it! Now when it's time to deprecate that database field or implement a new approach, too bad. 20 different parts of your app are built on the assumption that any arbitrary database update allowed by the model is valid and a-ok.

Preferred approach:

1. Each domain gets a service layer, which contains business logic, but also presents a nice, reliable interface to anything else that might consume that domain. This interface includes raising business logic errors that mean something related to our business logic. It does not expose "Django.models.DoesNotExist" or "MultipleObjectsReturned". It returns an error that tells the service consumer what went wrong or what they did wrong.

2. The service layer is the only thing that accesses or sees the Django models, a.k.a. the database state. It completely hides the Django models for its domain from the rest of the application. It returns dataclasses or attrs, or whatever you want to use. The models are no longer running rampant all over the application getting updated and saved willy-nilly. The service layer controls what the consumers in the rest of the application can know and do. (A rough sketch of this shape follows.)
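
A minimal sketch of that shape, with every name (myapp.models.User, UserData, the error class) invented for illustration rather than taken from a real codebase:

    # Hedged sketch only: consumers get dataclasses and domain errors,
    # never the Django model or django.db exceptions. All names are invented.
    from dataclasses import dataclass

    from myapp.models import User  # hypothetical model; only the service layer imports it

    class UserAlreadyRegistered(Exception):
        """Domain-level error that means something to the caller."""

    @dataclass(frozen=True)
    class UserData:
        id: int
        email: str

    def register_user(email: str, password: str) -> UserData:
        if User.objects.filter(email=email).exists():
            raise UserAlreadyRegistered(email)
        user = User.objects.create_user(email=email, password=password)
        return UserData(id=user.id, email=user.email)  # the model never leaves the service

The view (or Kafka consumer, or whatever) only ever sees register_user, UserData, and UserAlreadyRegistered.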

You will write more boilerplate. It will be boring. You will write more tests. It will be boring. But it will be reliable and modular and easier to reason about, and you can deliver features and changes faster and with much less fear of breakage.

Your business logic will live in one place, completely decoupled, and it can be tested alone with everything else mocked.

How your consumers (like API views) turn service responses and errors into external (like HTTP) responses and errors lives in one place, completely decoupled, and can be tested alone with everything else mocked.

Your models will not need to be tested because they are just Django models. They don't do anything that's not already 100% tested and promised by the Django framework.


We started moving off "fat models" at my job and onto DDD (service methods, entities, etc.), and I have to say after a year I'm not a fan. Here are my beefs:

1. If you're not using models, it's a lot of work to stay fast.

If you've got a Customer instance, and you want to get customer.orders, you've got a problem if it's not lazy. If it's a queryset, you get laziness for free; if it isn't, you have to build it yourself. God help you if you have anything even remotely complicated. You also need trapdoors everywhere if you want to use any Django feature like auth, or Django libraries.

2. You have to build auth/auth yourself

Django provides really nice auth middleware and methods (user_passes_test).

3. Service methods only do things something else should be doing.

You might be doing deserialization, auth/auth checks, database interactions, etc. All of that stuff belongs at a different layer (preferably abstracted away like @user_passes_test or serializers).

4. The model exposed by Django and DRF is actually pretty good, and you'll probably reimplement it (not as well)

The core request lifecycle is:

request -> auth -> deserialize -> auth -> db (or other persistence stuff) -> business stuff -> db (or other persistence stuff) -> serialize -> response

We've reimplemented all of those layers, and since we built multiple domains we reimplemented some of them multiple times. It probably would've been better to just admit "get_queryset" and the like are good ideas.

5. Entities are a poor substitute for regular objects and interfaces.

We've mostly ended up wrapping our existing models in entities, but just not implementing most of the properties/fields/attributes/methods. But again, we have to trapdoor a lot, we have trouble with laziness and relationships in general, and we have a lot of duplicate code in our different domains.

6. We have way too many unit tests.

Changing very small things requires changing 5-10 tests, each of which uses mocks and is around a dozen lines at least. Coupled with the level of duplication, this has really slowed us down. They also take _forever_ to run.

FWIW I think you're right about jamming too much into models; I think that works at a small scale but really breaks down quickly. I think at this point, my preferences are:

1. Ideally, your business logic should be an entirely separate package. It shouldn't know about HTML, JSON, SQL, transactions, etc. This means all that stuff (serialization, persistence) is handled in a different layer. Interfaces are your friend here, i.e. you may be passing around something backed by models, but it implements an interface your business logic package defines (see the sketch after this list).

2. The API of your business logic package is the set of interfaces you expose and document. The API of your application is your REST/GraphQL/whatever API--which you also document.

3. Models should be solely database-specific. If you're not dealing with the database and joins and whatever, it doesn't go in models and it doesn't go in managers.

4. Don't make a custom user model [1].

5. Serialization, auth, and persistence should be as declarative and DRY as possible. That means class-level configuration and decorators.

6. Bias strongly against unit tests, and rely more strongly on integration tests. Also consider using unit tests during development/debugging, and removing them when you're done.
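
As a sketch of point 1, here's a typing.Protocol (an abc.ABC works just as well) defined by the business-logic package; all the names are illustrative, not from any real project:

    # Hedged sketch: the business-logic package owns the interface; a Django-model-backed
    # repository (or an in-memory fake in tests) can satisfy it without this code knowing.
    from dataclasses import dataclass
    from typing import Optional, Protocol

    @dataclass(frozen=True)
    class Order:
        id: int
        total_cents: int

    class OrderRepository(Protocol):
        def get(self, order_id: int) -> Optional[Order]: ...
        def save(self, order: Order) -> None: ...

    def apply_discount(repo: OrderRepository, order_id: int, percent: int) -> Order:
        order = repo.get(order_id)
        if order is None:
            raise LookupError(f"order {order_id} not found")
        discounted = Order(id=order.id,
                           total_cents=order.total_cents * (100 - percent) // 100)
        repo.save(discounted)
        return discounted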

Does that seem reasonable to you? I spend a lot of time thinking about this stuff, and I would like my life to be less about it (haha) so, any insight you can give would be super appreciated.

[1]: https://docs.djangoproject.com/en/3.0/topics/auth/customizin...


I think we're agreeing on the majority of this. We have not chucked DRF or Django auth or anything. We've just created service layers to take the business logic out of the API views, API serializers, and DB models.

Each action looks like this (a rough sketch of the resulting view follows the list):

1. Request arrives in the app; auth happens using DRF on the API view. This is all using Django & DRF built-ins.

2. In the API view: request data gets deserialized using DRF serializers, but no calculated fields or model serializers or other BS. JSON -> dict only. The dict does not have models in it, only IDs: profile_id, reservation_id, whatever. Letting the "model Serializers" turn a JSON location ID into a Location model is how you get 10 database queries before you've done _anything_. At this point we don't care if the location_id is valid. We are just deserializing.

3. Still in the API view: the dict from the serializer gets shoved into whatever format you're going to send to the service layer. For us this is often an attrs/dataclass. If we're calling the "Reservations Service" method "create reservation", we pass in location_id, start time, end time, and the User model. The User model in this case is breaking our policy of not passing models through the service boundary, but it's the one exception for the entire code base, because it's too useful not to take advantage of getting it for free from DRF's user auth. We would basically be throwing it away and then re-fetching it in the service layer, which is dumb.

4. Call the Reservations Service layer. The service layer is going to do n things to try to create the reservation. If it needs to insert related records, like in a transaction, cool. Its job is to provide a sane interface for creating a Reservation, and whatever related side effects, not to only ever touch the Reservation model/table and nothing else. The base of our Domain is Reservation, creating a ReservationReceipt and a ReservationPayment are entirely within scope. Use the Payment model directly to do this if there's zero extra logic to encapsulate, or create a Payment service if you have a ton of Payment-creation logic you need to extract/hide from the Reservation service. You can still manage it all in a transaction if you want. The point is that the caller (the API layer) doesn't see this. It only sees that it's calling the Reservation Service.

5. The Reservation service will either return a dataclass/attrs object representing the successfully created Reservation, or raise a nice business error like ReservationLocationNotFound (remember when you passed a bad location id to the API, but we didn't want to check it in the API layer?).

6. API View takes the service response & serializes it back, or takes the business error and decides which HTTP error it should be.
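
A rough sketch of steps 2-6 in DRF terms; every name here (ReservationCreateSerializer, reservation_service, ReservationLocationNotFound) is invented for illustration, not taken from the codebase being described:

    # Hedged sketch of the flow above; the service module and error names are hypothetical.
    from rest_framework import serializers, status
    from rest_framework.response import Response
    from rest_framework.views import APIView

    from myapp import reservation_service  # hypothetical service layer
    from myapp.reservation_service import ReservationLocationNotFound

    class ReservationCreateSerializer(serializers.Serializer):
        # JSON -> dict only: IDs, not models, so no queries happen here.
        location_id = serializers.IntegerField()
        start = serializers.DateTimeField()
        end = serializers.DateTimeField()

    class ReservationView(APIView):
        def post(self, request):
            ser = ReservationCreateSerializer(data=request.data)
            ser.is_valid(raise_exception=True)
            try:
                reservation = reservation_service.create_reservation(
                    user=request.user, **ser.validated_data
                )
            except ReservationLocationNotFound:
                # Business error -> HTTP error mapping lives here, in one place.
                return Response({"detail": "location not found"},
                                status=status.HTTP_404_NOT_FOUND)
            return Response(
                {"id": reservation.id, "start": str(reservation.start), "end": str(reservation.end)},
                status=status.HTTP_201_CREATED,
            )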


Got it, yeah that makes sense. At a previous job, we invested pretty heavily in model serializers, but yeah, they’re bonkers slow. Thanks for weighing in, really nice to talk about this stuff with someone with a lot of similar experience.


I’m just getting started with Nix, with the explicit goal of having my entire system defined in a private, remote git repo. I want to be able to rapidly re-provision my entire user environment on a new machine, including applications, preferences, etc.

For the moment I’m doing this on MacOS with the nix package manager. I’ll eventually move to NixOS. I tried to run NixOS in VirtualBox, but couldn’t get screen resizing to work despite using the official ISO which is supposed to have the appropriate extensions installed.

My current hurdle is exactly the topic of this thread: non-binary configs. For example, what am I supposed to do with .zprofile? I think I’m supposed to write a custom Nix derivation for ZSH that includes any and all customizations. I’m concerned that might cause problems with MacOS system ZSH. I can probably fix that with a custom login shell?

Anyway it’s fun, but complicated and diversely documented. Gonna take a while to sort it all out.


> My current hurdle is exactly the topic of this thread: non-binary configs. For example, what am I supposed to do with .zprofile? I think I’m supposed to write a custom Nix derivation for ZSH that includes any and all customizations. I’m concerned that might cause problems with MacOS system ZSH. I can probably fix that with a custom login shell?

I use both NixOS and macOS. You can take two routes:

1. You can continue using Apple's /bin/zsh and just use a .zprofile generated using Nix (e.g. home-manager). Generally, the differences between zsh versions are not that large and it just works. This is what I have been doing with my Mac.

2. You could change your shell, either system-wide or just for Terminal.app, to ~/.nix-profile/bin/zsh.

> I’ll eventually move to NixOS. I tried to run NixOS in VirtualBox, but couldn’t get screen resizing to work despite using the official ISO which is supposed to have the appropriate extensions installed.

If you have some leftover hardware, try it! NixOS is a different experience altogether and cannot be paralleled by Nix on macOS or a Linux distribution. Being able to declaratively define your whole system is insanely cool and powerful. Fully reproducible machines. Also, you can try out stuff without any harm. Just switch back to the previous working generation (or try the configuration in a VM with nixos-rebuild build-vm) if you are not happy.


> I think I’m supposed to write a custom Nix derivation for ZSH that includes any and all customizations.

Nix supports lots of approaches, with a varying degree of "buy in". I wouldn't say you're "supposed" to do one thing or another, although some things would definitely be non-Pareto-optimal (i.e. you could achieve all the same benefits with fewer downsides).

In the case of .zprofile, I would consider any of the following to be reasonable:

- A normal config file sitting on your machine, edited as needed, not version controlled.

- A symlink to a git repo of configs/dotfiles (this is what I do)

- Directly version-controlling your home dir in some way

- Writing the config content as a string in a Nix file, and having Nix put it in place upon rebuild (I do this with files in /etc)

- Having a Nix 'activation script' which copies/symlinks config files into place (this is what I do, to automate symlinking things to my dotfiles repo)

- Wrapping/overriding the package such that it always loads the desired config (e.g. by replacing the binary with a wrapper which prepends a "use this config" flag).

--

The following has nothing to do with your question, but I felt like ranting about a tangentially-related topic; it's not directed at you ;)

I often see "extremism" when Nix is brought up; e.g. if someone wants help managing non-Python dependencies of their Python app, and someone recommends trying Nix, it's often dismissed along the lines of "I don't have time to throw away my whole setup and start over with the Nix way of doing things, even if it were better". The thing is, using Nix can actually be as simple as:

    (import <nixpkgs> {}).runCommand "my-app" {} ''
      Put any arbitrary bash commands here
    ''
I often treat Nix like "Make, but with snapshots". Nix 2.0 turned on 'sandboxing' by default, but if you turn that off you can do what you like: add '/usr/bin' to the PATH, 'wget' some arbitrary binaries into '/var', etc. You won't get the benefits of deterministic builds, rollbacks, concurrent versions, etc. but those don't matter if the prior method didn't have them either. Projects like Nixpkgs, and the experimental things people write about on their blogs, aren't the only way to do things; you don't have to throw the baby out with the bathwater.


> I just wish I could buy a Zen 2 chip in a laptop that doesn't look like it was made for a fourteen year old (no offense to any fourteen year olds). I heard someone say the lack of 4k and more professional style laptops could be Intel back-channel fuckery,

Lenovo just released a couple of ThinkPads powered by AMD Ryzen 7 Pro CPUs: the T495, T495s and X395. They don’t have 4K screens though.

https://arstechnica.com/gadgets/2019/05/lenovo-adds-amd-ryze...


The T495 / X395 series is based on Zen+ (Ryzen 3000 mobile) chips and not Zen 2 (Ryzen 4000 mobile) - I know the naming is confusing given that the 3000 series desktop chips are Zen 2. However, Lenovo has announced ThinkPads based on Zen 2, such as models in the rebranded T14 lineup, but they have not been released yet.

https://www.anandtech.com/show/15772/new-lenovo-thinkpad-ran...


4K screens on laptops don't make sense to me. They are too small to get maximum value out of. 1440p is more than enough.


Hmm. You might want to get an eye test. Not being snarky: when I was 45 or so, having had "perfect" eyesight, someone suggested I get mine tested. Turns out most people lose the ability to focus at short distances with age, but the brain doesn't clue you in. Not being able to tell the difference between 1440p and 4k would be consistent with this. For me, even at 13" screen size, 4k is very obviously better for coding.


With a 55" screen you'd have to be within 3.5 feet to be able to see a difference with 4k. So no eye test needed. Your brain is either lying to you or the 4k screen you tested with is better than the non-4k you used.

http://carltonbale.com/does-4k-resolution-matter/
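
For anyone curious where numbers like that come from: calculators like that one assume 20/20 vision resolves roughly one arcminute, and a quick back-of-the-envelope check under that assumption (16:9 panel, pixel pitch vs. viewing distance) lands right around 3.5 feet for a 55" 4k screen:

    # Back-of-the-envelope check of the "~3.5 feet for a 55-inch 4k screen" figure,
    # assuming a 16:9 panel and the common 1-arcminute (20/20) acuity rule of thumb.
    import math

    def max_useful_distance_feet(diagonal_in, horizontal_px, arcminutes=1.0):
        width_in = diagonal_in * 16 / math.hypot(16, 9)   # width of a 16:9 panel
        pixel_pitch_in = width_in / horizontal_px         # size of one pixel
        angle_rad = math.radians(arcminutes / 60)         # 1 arcminute in radians
        return pixel_pitch_in / math.tan(angle_rad) / 12  # inches -> feet

    print(round(max_useful_distance_feet(55, 3840), 1))    # 55" 4k TV       -> ~3.6 ft
    print(round(max_useful_distance_feet(15.6, 3840), 1))  # 15.6" 4k laptop -> ~1.0 ft

Whether one arcminute is the right figure for acuity is exactly what the replies below take issue with.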


There are many things incorrect about that blog's approach. As others have pointed out, this isn't like HD Audio, where people literally cannot tell the difference between normal CD quality and HD Audio in any realistic test. I can absolutely see the difference on a 4K screen, and I can tell whether even a smallish laptop screen is 4K or not from much greater distances than normal usage. I remember the first time I saw an Apple Retina display on a tiny laptop from across the room and said "holy shit!" out loud and walked over because I could see how sharp it was from meters away.

First of all, 20/20 vision is the average, not the best. Many people have substantially better than 20/20 vision. I remember laughing that I could read the super fine print "copyright notice" at the bottom of the eye test that is about 1/3rd the size of the smallest font in the test itself.

Secondly, the eye is complex, and has surprising capabilities that defy simple tests, which are designed as a medical diagnostic tool, not as a test of ultimate resolving capability. For example, Vernier acuity (1) means that much higher-resolution printing (and screens) are required than one might think based on naive models.

1) https://en.wikipedia.org/wiki/Vernier_acuity


Actually, I don't have sources on hand, but I believe average human sight is better than 20/20. It's just that 20/20 was decided on as a standard for "good enough", I believe stemming from a military standard set long ago.


I don't know what the methodology of that site is, but it's certainly flawed when it comes to computer monitors. Based on their calculator, I shouldn't notice the difference between 1080p and 2160p when sitting 2 feet from my 16" laptop monitor, but the difference is night and day. I don't want to get into a philosophical debate, but if I can see the difference that your equation says I shouldn't be able to see, the equation is wrong, not reality.


This reminds me of the pervasive "your eyes can only see 24fps" myth, I guess people crave evidence that what they have is "good enough" and others are just being elitist?


Well, you will definitely see more than 24 fps, but that might or might not translate to a better experience. If you want the cinematic effect for a movie, it will be 24 fps; otherwise you will get the soap opera effect.

For other uses, like fluid animations or games, you want as high as possible.


I wonder if the "Cinematic" look at ~24fps seeming less tacky than the "Soap Opera" look at ~60fps has just been trained into us via familiarity though.

If we lived in an alternate universe where cinema was all 60fps and soap operas were 24, would we think that 24fps looked tacky instead?

On the other hand, I think there are definitely some objective effects in play here too - CGI is a lot easier to make convincing at a lower framerate with added motion blur.


Peter Jackson thinks so. He pushed for 48fps in his Hobbit movies even though people complained. His theory is that once people acclimate, they will get a better experience.


4k is more than resolution too; it has more colors that can't be shown in regular RGB formats.


You're conflating two separate things - HDR is the specification for > 8 bit dynamic range, 4k just specifies 3840 x 2160 resolution. You can have displays that are HDR but not 4k, and vice versa.


I absolutely benefit from a 4k screen even in a small form factor. 768p is "enough" in the sense that we all got stuff done on such screens for many years, but the increase in text rendering quality with higher PPI screens is tremendously worth it to me for the reduced eyestrain. 4k is still noticeably better than 1440p. I wouldn't be surprised if 8k is noticeably better still (although with swiftly diminishing returns of course).


The reverse is also kinda true... Many people, when they say "I like high resolution", mean "I like to fit lots of stuff on the screen at once".

If you're in the latter crowd, you can configure X or Wayland to render to a 4k screen buffer, and then downscale to fit the actual screen. Yes, downscaling no longer means 1 pixel=1 pixel, which introduces some blur, but unless you're a 20/20 vision kind of person, I doubt you'd be able to tell without your nose touching the screen...


These technologies could be used to create a valuable network of low-distraction reference resources. They might dodge the Eternal September problem simply due to the high barrier to entry, and win mindshare among sophisticated users by delivering real value.

This would require a community focus on tooling and content.

First, common tools must support Gopher/Gemini output:

* Pandoc

* Template engines like Jinja and Mustache

* Static site generators like Hugo, Gatsby & Jekyll

* Webservers like Apache and Caddy

Then those tools can be used to mirror content like:

* Wikipedia

* ReadTheDocs

* Docsets (eg for Dash)

* HackerNews, Lobste.rs, Reddit?

* RFC archives

* Github/Gitlab repo READMEs

Suddenly you could spend your whole day in pure-text mode and never open a browser that does enough twitter to kill your flow.


There is currently a partial lobste.rs mirror hosted at typed-hole.org over gemini (as well as gopher). Gopher also has gopherpedia, which mirrors Wikipedia (many clients for gemini also support gopher). While I don't think mirroring should be the end goal, I agree it is a good stepping stone toward original content to have, as you put it, low-distraction access to these resources.

Recent work has gone on to allow git cloning over gemini and there is syntax support for the, admittedly minimal, text/gemini format in vim and emacs.

I would love to see Pandoc support (and I believe someone recently mentioned working on it on the mailing list). Great ideas! Pick one and build!

