
If I may ask, where did he buy the app from? Some online marketplace? I'd love to browse these kinds of deals.


Sorry for the late reply: he's been in the maritime industry his whole career. Met the founder years ago, stayed in touch, then made the ask.


Could this be used to compile Python code ahead of time? I'm more interested in getting to a native executable or library.


How far away are unikernels from being adopted in production? Does it make sense to consider them for a project starting today?


I understand your frustration, but I think pre-orders are important for them to understand how many units to manufacture. What if they produce too many and people don't buy them? What if too few and they can't cope with the demand?


Another way of looking at it is that the delivery horizon is too long. In six months there will be other similar products announced and probably released. I feel like 2015 is so far out that the value of the pre-order as a forecasting tool is minimal.


Acceptable and calculable business risks. How did new products launch before the internet?


I can think of an example of an industry that has been doing pre-sales to fund production for a long time, and that's property development.


That's what market research is really good for. And frustrating potential users to minimize your business risks isn't acceptable, in my opinion. There's always risk involved. That's the nature of business.


"That's what market research really is good for."

Pre-orders can be an excellent form of market research.


I don't think you can consider it research if you are billing people; it's just business/sales.


It seems like getting people to commit to pre-orders is possibly the best form of market research. You've done enough R&D to get a solid outline of what the product will be, and you're testing whether or not people are actually willing to put money on the line to get ahold of it.


The biggest issue is that we don't own our data. It's stored on Google's, Facebook's, Twitter's, LinkedIn's, etc. servers. It should work the other way around: every individual should keep their own data and grant permissions to external services and other people to access it. Is there any project looking in this direction? How do we reverse this situation?


It's not your data just because it's about you.

In this case the article in question was published by a newspaper, in fact due to a legal requirement. So the law here is totally messed up. It says the data must be published, but must not be findable. A farce.


> It's not your data just because it's about you.

Sure, I agree.

In this case I think the mistake was made by the Ministry of Labour and Social Affairs, which didn't understand the consequences of publishing one person's sensitive information on the internet. In my utopia, though, the newspaper would have asked Mr Gonzalez for permission to access his sensitive information (and, in this case, forcefully obtained it thanks to the Ministry's authority), only for the amount of time required by the bidding process.


It is your data - Facebook, Google, etc. are just custodians of your data, and you have the right (under EU law) to have that data removed or anonymized.

The legal requirement is for the information to be published (once) and then, in the normal course of events "forgotten" after a period of time.

It is not the publication that is a concern; it is the permanence of it, and the ease of access, that the ruling deals with.


Umm, we used to call those "personal websites". They were the norm during much of the 1990s.

They did take some time, effort, skill and money to set up, but they did give far more control over what information was shared, and who had access to it.


That's not what I meant. For example, when using a mobile app you can give access to your contact list, your location, your pictures, etc. All the data is stored on your phone, and each application must get permission to access your data. What I would like is to extend this model to the web, so that any service that requires some information from me has to ask permission to get it, rather than keeping a copy of it in its database.


If you are asking whether anyone is working on a truly distributed social network, then the short answer is that a number of projects are trying, but no one has been very successful so far.

It does seem like this is a matter of time, though. We managed to build the entire Internet as a system of systems that work together remarkably effectively despite no one authority having universal control of everything.

The main thing keeping sites like Facebook safe today is the critical mass of customers they have, and the fact that current sentiments (and the occasional multi-million dollar marketing effort) encourage newcomers to put their data "in the cloud" in return for not paying any money to store it and not having to worry about the technical details. There are certainly technically viable alternatives.


A distributed social network would probably be part of it, but it still doesn't fully describe my idea (utopia might be a better term...). I would like any service to ask permission for any piece of data that I control. My bank wants to see my last payslip? Sure, ask for the proper permission and say how long you need it. The dentist requires proof of my address? No problem. It would allow me to change my address or phone number, for example, without having to notify all services (electricity company, bank, doctors, Amazon, etc.). I'm actually in the process of moving house again and this is so annoying. Why do I have to notify everybody to update the data they have about me? Wouldn't it be better if they could just ask me for my data at the time they need it?
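
To make that concrete, here is a toy sketch of such a permission model; everything in it (DataVault, the grant mechanism, the service names) is invented purely for illustration, not a reference to any existing project:

    import time

    class DataVault:
        """Toy personal data store: services read data only via time-limited grants."""

        def __init__(self):
            self._data = {}    # field name -> current value
            self._grants = {}  # (service, field) -> expiry timestamp

        def set(self, field, value):
            # The owner updates a field once; every service sees the latest value.
            self._data[field] = value

        def grant(self, service, field, seconds):
            # The owner approves a service's request to read one field for a while.
            self._grants[(service, field)] = time.time() + seconds

        def read(self, service, field):
            # A service fetches the data the moment it needs it,
            # instead of keeping its own (stale) copy.
            expiry = self._grants.get((service, field))
            if expiry is None or time.time() > expiry:
                raise PermissionError(f"{service} has no valid grant for {field}")
            return self._data[field]

    vault = DataVault()
    vault.set("address", "1 Old Street")
    vault.grant("electricity_company", "address", seconds=3600)
    print(vault.read("electricity_company", "address"))  # 1 Old Street

    vault.set("address", "2 New Street")  # moving house: update once
    print(vault.read("electricity_company", "address"))  # 2 New Street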


I copy and paste URLs all day long too. I use CMD+L to select the whole URL and CMD+C to copy it. No big deal.


Great for you with your 'keyboard', perhaps not so good for other input devices.

I do actually have one irritation: when I'm in Chrome and I do a Ctrl+L and Ctrl+C, and then go to paste what I think I've copied, there's actually an unexpected http:// at the front of it. Sure, that's what most people want, but it isn't what-you-see-is-what-you-get.


I think it's dangerous to rely solely on a unit test suite to guarantee the correctness of your program. Each unit-tested component may pass its own tests, but that doesn't guarantee it won't exhibit any bugs when interacting with other (present and future) components. I guess immutability would go a long way toward realizing this guarantee, but I've never had the pleasure of working on software where each component was truly immutable and independent of the others.


Very true. Here's an example of a bug that has bitten me more than once, has everything to do with immutability, and isn't reasonably within the scope of traditional testing tools:

A crucial class writes files to a certain folder. The test sandbox provides such a folder, and the class knows to use that folder when it's running in test mode. Obviously, the production environment is also supposed to provide that folder. But oops, it doesn't!

The tests exercised writing a file to disk, the tests were all green, and the system crashed anyway.
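
Sketched in code (class, paths, and test names invented for illustration): the unit test goes green because the pytest sandbox fixture creates the folder, while the same code crashes in production because nothing ever provisioned it.

    import os

    class ReportWriter:
        def __init__(self, output_dir):
            self.output_dir = output_dir

        def write(self, name, contents):
            # Silently assumes output_dir exists -- the hidden environmental dependency.
            with open(os.path.join(self.output_dir, name), "w") as f:
                f.write(contents)

    # In the test suite: pytest's tmp_path fixture creates the folder, so this passes.
    def test_write(tmp_path):
        writer = ReportWriter(str(tmp_path))
        writer.write("report.txt", "ok")
        assert (tmp_path / "report.txt").read_text() == "ok"

    # In production: /var/app/reports was never created, so the same green-tested
    # code raises FileNotFoundError on its first write.
    # ReportWriter("/var/app/reports").write("report.txt", "ok")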


And this is - IMO - where the magic happens. It's somewhat doable to "waterproof" your unit tests; but getting every (possible? required? anticipated?) interaction tested?

Magic.

I'm not saying tests are evil; I actually love my test suites. But, as always, as soon as someone uses ALWAYS and/or NEVER, the tiny hairs on the back of my neck do a sort of wave and I catch myself tuning out their message, which is a shame, since (in this saga) both Uncle Bob and Nephew David have good advice to offer.

As for immutability: add in referential transparency and static-ish typing and you've got yourself a nice development setup ;)


I always progressively test at different levels before insta-deploy, and run a set of "is this still OK" tests against production post-deploy.

Unit tests are at the heart of the test gauntlet, but absolutely can't be trusted by themselves.


> otherwise what exactly are you testing?

It depends: what do you want to test?

If you are testing a single function, why would you care whether the data comes from the database, the network, or a mock? All you need is to verify that the given function produces the expected output given the right (and wrong) inputs. Being able to test this way also makes it easy to keep the components in your software decoupled.

My philosophy is to use mocks for unit tests and the real thing for integration tests.
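
As a minimal sketch of that split (the function and clients here are invented for illustration), the same logic is unit-tested against a mock and integration-tested against the real dependency:

    from unittest.mock import Mock

    def top_user(client):
        # The logic under test doesn't care where the data comes from:
        # database, network, or mock -- it just needs fetch_users().
        users = client.fetch_users()
        return max(users, key=lambda u: u["score"])["name"]

    # Unit test: mock the dependency, verify the logic with known inputs.
    def test_top_user_with_mock():
        client = Mock()
        client.fetch_users.return_value = [
            {"name": "alice", "score": 10},
            {"name": "bob", "score": 7},
        ]
        assert top_user(client) == "alice"

    # Integration test: point the same function at the real client and a real
    # (test) database to check that the pieces actually work together.
    # def test_top_user_for_real():
    #     assert top_user(RealDbClient("postgres://test")) is not None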


So what kind of tool would you recommend instead to deal with 100TB-1PB per day? I'm genuinely interested.


There are commercial database solutions focused on IoT that have no problems with these workloads (e.g. my company, SpaceCurve, or Pixia) but nothing open source. A single rack of servers arranged as a parallel system with a 10GbE switch fabric can support it if you design the software correctly.

If you look at every company that is working in this space, one of the first things you will notice is that they all use custom storage engines that do a full operating system bypass i.e. they manage all the system resources in userspace. If you do not do this, you cannot reliably get the necessary throughput out of the system for IoT. As far as I know, no scalable storage+execution engine in open source is designed like this yet. It requires much more computer science sophistication and lines of code to implement compared to traditional storage engines, so not the kind of thing you hack together over the weekend.


Cassandra is open source and used in a large number of IoT applications, the most famous one being Nest.

Also, as far as throughput goes, Netflix is doing 1.5 trillion (yes, trillion) transactions per day in production on Cassandra.


This is a great example of not understanding the scaling problem of IoT. Not only is the above (quasi-)transaction rate modest by IoT system standards today, but Cassandra is not doing real-time analysis or ad hoc querying of complex relationships across those transactions at the same time, which is usually a requirement.

I know of a production IoT system in the private sector that does 1.5 trillion (quasi-)transactions every 10 minutes, so almost three orders of magnitude higher throughput. Cassandra is an okay choice for storing IoT data but it isn't real-time in the sense that you can do immediate, fast queries about the relationships across those records as they are hitting the system.


"Cassandra [...] isn't real-time in the sense that you can do immediate, fast queries about the relationships across those records as they are hitting the system."

That depends on how you are using Cassandra. Typically, you are expected to know your query patterns up front, and so you will lay your data out accordingly when ingesting. When done properly, this allows for ~1ms queries that return completely up-to-date results.
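
For example, here is a sketch of that query-first layout using the DataStax cassandra-driver package (the keyspace, table, and host names are invented for illustration): partitioning by device and clustering by time makes "latest readings for a device" a single-partition read.

    from datetime import datetime, timezone
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("iot")

    # Lay the table out for the one query we know we'll run:
    # "latest N readings for a given device". Partitioning by device_id
    # and clustering by ts DESC makes that a single-partition read.
    session.execute("""
        CREATE TABLE IF NOT EXISTS readings (
            device_id text,
            ts        timestamp,
            value     double,
            PRIMARY KEY (device_id, ts)
        ) WITH CLUSTERING ORDER BY (ts DESC)
    """)

    # Writes land in the right partition and order at ingest time...
    session.execute(
        "INSERT INTO readings (device_id, ts, value) VALUES (%s, %s, %s)",
        ("sensor-42", datetime.now(timezone.utc), 21.5),
    )

    # ...so the anticipated query is a fast read of completely up-to-date data.
    for row in session.execute(
        "SELECT ts, value FROM readings WHERE device_id = %s LIMIT 10",
        ("sensor-42",),
    ):
        print(row.ts, row.value)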


Do you have a source on the Netflix number? Not doubting, just curious...

Also, to nitpick, these are not transactions; these are "operations" or some other word that doesn't imply what the word "transactions" implies.


>> I am baffled as to why you'd build your castle atop a crumbling foundation

I think you're using the wrong metaphor. Facebook's foundations can't be crumbling just because they're made of PHP; otherwise there wouldn't be a Facebook as we know it today. You might say they used a "low quality" material to build them. I see Hack more as a better material, one that can also bind with the previous one and make it stronger.

