Hacker News | rankam's comments

Yes - experiencing issues with the site and GitHub Actions for the past ~10 minutes.


To me, it sounds like you need professional experience on your resume, so that should be your goal. However, professional experience != a full-time software engineer role. Can you find something really small that pays from a freelance site? Maybe it's just a Python script that takes 4 hours and pays $10 - but with that, you are a professional software engineer. Do you know anyone who owns a website for a business? Ask them if you can do some really basic work for $1 - because if you do that, you're a professional software engineer.

Once you have some professional experience on your resume, it should get a little easier - it's still going to take some time and grit, but it should work out.


I have access to the model via the web client and it does show the thought process along the way. It shows a little icon that says things like "Examining parser logic", "Understanding data structures"...

However, once the answer is complete, the chain of thought is lost.


It's still there.

Where it says "Thought for 20 seconds" - you can click the chevron to expand it and see what I guess is the entire chain of thought.


Per OpenAI, it's a summary of the chain of thought, not the actual chain of thought.


I do - I now have a "More models" option where I can select o1-preview


I can see it too, I am on the Plus plan and don't think I have any special developer privileges. Selecting that option for me changes the URL to https://chatgpt.com/?model=o1-preview

I tried a fake Monty Hall problem, where the presenter opens a door before the participant picks and the participant is then offered the chance to switch doors, so the probability remains 50% for each remaining door. Previous models have consistently gotten this wrong, because of how many times they've seen the Monty Hall problem written where switching doors improves the chance of winning the prize. The chain-of-thought reasoning figured out this modification and, after analyzing the conditional probabilities, confidently stated: "Answer: It doesn't matter; switching or staying yields the same chance—the participant need not switch doors." Good job.
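If anyone wants to sanity-check the variant, here's a quick Monte Carlo sketch (C++, with the host opening a non-prize door before the pick; names are just illustrative):

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng{42};
        std::uniform_int_distribution<int> door(0, 2);
        const int trials = 1'000'000;
        int stay_wins = 0, switch_wins = 0;
        for (int i = 0; i < trials; ++i) {
            int prize = door(rng);
            int opened;                     // host opens a non-prize door FIRST
            do { opened = door(rng); } while (opened == prize);
            int pick;                       // participant picks one of the two left
            do { pick = door(rng); } while (pick == opened);
            int other = 3 - opened - pick;  // the only remaining door
            stay_wins   += (pick  == prize);
            switch_wins += (other == prize);
        }
        std::printf("stay: %.3f  switch: %.3f\n",
                    stay_wins / double(trials), switch_wins / double(trials));
        // Both print ~0.500, unlike the classic version's 1/3 vs 2/3.
    }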


Does this mean that, theoretically, this could lead to the ability to build macOS apps in higher-level languages that interoperate well with C, such as Python? I know you can build macOS apps with Python now, but does this potentially improve the experience?


You can already do this in the traditional way by building an ObjC shim which exposes a C API. The solution shown here just skips ObjC and talks directly to the ObjC runtime (which has a C API, but one that is not as convenient to use as doing the same thing in ObjC or Swift).

In a highly simplified way, you can think of Objective-C as a preprocessor which replaces the ObjC syntax sugar with C function calls into the ObjC runtime (that's not how it works in reality, but it's how it could work).
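To make that concrete, here is roughly the kind of call the compiler would emit for something like [NSString string]. Just a sketch using the runtime's public C API (callable from C or C++; link with -lobjc and the Foundation framework):

    #include <objc/message.h>
    #include <objc/runtime.h>

    int main() {
        // What "[NSString string]" boils down to: look up the class and
        // selector, then send the message through objc_msgSend.
        Class cls = objc_getClass("NSString");
        SEL sel = sel_registerName("string");
        // objc_msgSend must be cast to the correct function type first.
        id (*send)(id, SEL) = (id (*)(id, SEL))objc_msgSend;
        id str = send((id)cls, sel);
        return str != 0 ? 0 : 1;
    }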


That’s essentially what this project does. It creates the C code that the ObjC compiler would generate to “implement methods” or “send messages”.

It’s somewhat doable by hand because ObjC is a thin layer.

Over 15 years ago I did stuff similar to this project to call some ObjC code from a C++ app. Most of it was exposed through normal C APIs, but one feature was only available in AppKit. It was much simpler to do it this way than to figure out how to make GCC or ObjC like our C++, or to mess with bridging headers.

I think the move to Swift has made that harder in some ways.

But then again I don’t want to write C or C++ these days if I can avoid it.


In fact, early Objective-C was a preprocessor, according to Wikipedia!


That is what the ObjC Scripting Bridge is for.

https://developer.apple.com/documentation/scriptingbridge


I believe RubyMotion does basically this:

http://www.rubymotion.com

It was fun building an app in this a few years ago, but it was difficult to keep up with updates to macOS breaking my code.


There’s already PyCocoa, and I'm pretty sure *Cocoa exists for a variety of languages.


You should add screenshots/gifs so that people can see what it does


good point! will do now


What is the use case for a C++ web framework? Would higher level languages/frameworks make use of this to do some of the heavy lifting? Would a company migrate to something like this once at scale? Or, are there just some people/companies who prefer to use C++?


I currently have an architecture where I have a whole bunch of C++ code doing computational stuff. Later (years after writing the core computational library in C++) I was asked to expose the library via a REST API. At the time I was pretty young (I didn't and still don't have the necessary experience to write a web framework myself) and none of the existing C++ web frameworks seemed to work very well. So I now have a complex architecture with a Java Spring front end that communicates with a C++ backend. It would have been a whole lot simpler all around if I could have easily used C++ for the whole thing.


Writing wrappers to interface C++ libraries with the most common scripting languages is (usually) pretty easy. Probably much more so than trying to roll an entire HTTP server in C++.
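For example, a minimal pybind11 binding (one common way to do this; the module and function names here are made up for illustration) looks like:

    // Exposes an existing C++ function to Python via pybind11.
    // Build (per pybind11 docs):
    //   c++ -O3 -shared -fPIC $(python3 -m pybind11 --includes) \
    //       wrap.cpp -o mylib$(python3-config --extension-suffix)
    #include <pybind11/pybind11.h>

    double heavy_computation(double x) {  // stand-in for the real library call
        return x * x;
    }

    PYBIND11_MODULE(mylib, m) {
        m.def("heavy_computation", &heavy_computation,
              "Run the heavy computation on one value");
    }

After that it's just `import mylib; mylib.heavy_computation(3.0)` from Python.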


If you have to roll your own, of course it is, hence why I didn't try... but if there was a mature, easy to integrate HTTP server library I would prefer to keep everything in C++ (or better yet rewrite it all in Rust, but that's a whole other can of worms).


At a previous company, most of the business is run on C++ applications that expose web APIs (SOAP/REST) - is that web enough for you?

C++ is used because there's a lot of heavy lifting in message handling and transformation, and latency has a big impact on the business. Any "glue" layer between a computing backend and a presentation backend adds expensive milliseconds to every response. If you just want to handle concurrency without caring about much else, your usual dynamically typed language will do, and you solve your problems by scaling horizontally.

But latency is something that can't be solved scaling horizontally.

Java or Python are used for less critical stuff (admin/management panels and such), but anything transactional is 99% of the time C++.

That said, the framework had terrible interfaces and it meant linking to a multi-GiB .so library, but in the end relatively few servers could handle 50k (relatively complex) tps with less than 20ms latency. I can barely get 20ms with a hello-world Django app on my laptop.


> Java or Python are used for less critical stuff (admin/management panels and such), but anything transactional is 99% of the time C++

No.


I take it you are familiar in detail with how Uberphallus' previous employer operates and can give us the correct details instead?


I took it as a general statement, not just about his employer. Maybe I misread.


This is going to become more and more common for media-intensive applications.

https://vo.codes uses a Rust HTTP web framework. It's not C++, but it's the exact same use case: high-performance compute.

It's not unheard of for ordinary CRUD websites, either. IIRC, the first version of OkCupid was written in C++.


How much more performant is C++ or Rust?

Didn’t Instagram manage to scale Python?

For small teams, wouldn’t the performance of Rust be offset by the productivity lost by not using Python, for example?

Assuming expert programmers in each language.


A lot of Python is coordinating other libraries written in native code, so that part is probably about the same. However, take parsing JSON, which is often used for web RPC: Python is orders of magnitude slower in many cases. In one benchmark, parsing a 100MB array of objects with floats (measuring only the time within the program after loading the string) took around 12s in Python and 0.9s in C++. That is probably extreme, but it negated startup/warmup time and most aspects other than parsing/reifying the data.
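For reference, the C++ side of a benchmark like that can be as simple as the following sketch (using nlohmann/json as the parser is my assumption; the benchmark didn't say which library it used):

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <nlohmann/json.hpp>

    int main() {
        // Load the whole file into a string first, so only parsing is timed.
        std::ifstream file("data.json");  // ~100MB array of objects with floats
        std::stringstream buf;
        buf << file.rdbuf();
        const std::string text = buf.str();

        auto start = std::chrono::steady_clock::now();
        auto parsed = nlohmann::json::parse(text);  // parse/reify only
        auto stop = std::chrono::steady_clock::now();

        std::cout << parsed.size() << " elements parsed in "
                  << std::chrono::duration<double>(stop - start).count()
                  << "s\n";
    }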

But then when you look at memory overhead and scale-out, you need both more servers/instances and you need them for longer. This has a cost.

The difference is, getting developers proficient in Python is much easier, and if you cannot get people to build your things, that's kind of a problem. If the majority of the program's time is spent inside the native library, the performance difference starts to go away in many cases.


I mainly develop servers in C++, but I've used Python. It is my private opinion, but when we are talking about a reasonably sized project, I do not feel I am losing any productivity by using C++. Rather the other way around.


I work in a domain where we do tons of computational work in C++. With everything being webified now, a C++ backend for a web app would actually be exactly what I need to expose data via REST for a modern frontend.


If you're writing a C++ application, it can be quite useful. I've shared this before:

I worked at a company once that had a really decent HTTP server library... That they put in every program.

You'd launch an app, and to debug it, you'd access http://localhost:9001. From there, you could go to URLs for different libraries in the app. Like, if you had a compression library, you could go to http://localhost:9001/compression. It would show stats about the recent work it had done: how long it took, how much CPU, RAM, and disk it used, the current state of variables, etc. You could click a button to get it to dump its cache, and so on.

If you were running a service on a remote machine, accessing it over HTTP to control it was just awesome. http://r2d2:9001/restart. http://r2d2:9001/quit. http://r2d2:9001/logfile.

Oh, and the services running on that remote machine would register with a system-level monitor. So, if you went to http://r2d2/services, you could see a list of links to connect to all of the running services.

...and every service registered with a global monitor for that service. So, if you knew a Potato process was running somewhere, but you weren't sure which machine it was on, you could find it by going to http://globalmonitor/Potato, and you'd see a list of machines it was running on.

Just all kinds of awesomeness were possible. Cannot recommend it enough.

And, I mean like, programs with a GUI. Like, picture a game. Except on my second monitor, I had Chrome open, talking to the game's engine. I could use things like WebSockets to stream data to the browser. Like, every time the game engine rendered a shot, I could update it (VNC-style) in the browser window. Except annotated with stats, etc. It was just the most useful way to organize different debug information.

And what was great was that, when writing a library and wanting to output information, you wouldn't write it to stdout... you'd make HTML content and write to that. Want to update it? Clear the buffer and write to it again. As a user, if you ever want to read the buffer, you just browse to it. Want to update it? Refresh the window. Or better yet, stream it over a WebSocket. Like stdout on steroids. If you need to combine the output from a few libraries in a new window, you just write a bit more HTML in your code, and you're doin' it.

It's just another example, in my mind, of the power of libraries. We all get used to thinking of frameworks (IIS, Apache) as the only way to solve a problem, that we forget to even think about putting things together in new and unexpected ways. HTTP as a library - HELL YES.
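If you want to try the pattern yourself, the skeleton is tiny. Here's a sketch using cpp-httplib (my pick for illustration; the company's library was its own in-house thing, and the endpoints here are made up):

    #include <atomic>
    #include <string>
    #include <thread>
    #include "httplib.h"  // single-header cpp-httplib

    std::atomic<long> g_work_items{0};  // whatever stats a library tracks

    int main() {
        httplib::Server svr;

        // Per-library stats page, e.g. http://localhost:9001/compression
        svr.Get("/compression", [](const httplib::Request&, httplib::Response& res) {
            std::string html = "<h1>compression</h1><p>work items: "
                             + std::to_string(g_work_items.load()) + "</p>";
            res.set_content(html, "text/html");
        });

        // Control endpoint, e.g. http://localhost:9001/quit
        svr.Get("/quit", [&](const httplib::Request&, httplib::Response& res) {
            res.set_content("bye", "text/plain");
            svr.stop();
        });

        // The debug server runs on its own thread next to the real application.
        std::thread server([&] { svr.listen("0.0.0.0", 9001); });
        // ... the actual application work goes here ...
        ++g_work_items;
        server.join();
    }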

Using HTML to debug programs, live, is highly under-utilized.


I would say that debugging a program live is highly underrated. But there is no inherent reason why building a webserver into everything and using HTML for visualization is the best way to go. Shared memory can be used instead and can be 100-1000 times faster, which can make a huge difference if you want to visualize large amounts of data like point clouds, pixels, etc. as they change.

In your scenario it sounds like an easy way to make a big step forward, though I don't think that takes it far enough.
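For anyone unfamiliar with the approach, here's a minimal POSIX sketch of the writer side (the struct layout and the /myapp_debug name are purely illustrative; a separate viewer process maps the same name and renders at its own rate):

    // Link with -lrt on older Linux systems.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct DebugStats {        // fixed layout, shared with the viewer process
        long frames_rendered;
        float last_frame_ms;
    };

    int main() {
        int fd = shm_open("/myapp_debug", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(DebugStats)) != 0) return 1;
        void* mem = mmap(nullptr, sizeof(DebugStats),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) return 1;
        auto* stats = static_cast<DebugStats*>(mem);

        // The app just writes; the viewer polls and redraws whenever it likes.
        for (long frame = 0; frame < 1000; ++frame) {
            stats->frames_rendered = frame;
            stats->last_frame_ms = 14.0f;
            usleep(14'000);  // pretend to render at ~70fps
        }

        munmap(mem, sizeof(DebugStats));
        close(fd);
        shm_unlink("/myapp_debug");
    }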


Well, for one thing, you're presuming the person who wants to inspect something a) has a debugger, b) knows how to use it, c) that the machine running the application allows access through those ports.


I didn't presume any of those things and I'm not sure where any of those ideas come from. A program can write to shared memory that is read by a separate program. That doesn't take a debugger and it doesn't take 'ports'. I'm actually not sure what you mean by ports.


Sorry, I feel like we're really mis-communicating.

I proposed an idea, and you proceeded to tell me "There is no inherent reason why [my idea] is the best way to go." I think that's a pretty rough way to respond to someone suggesting an idea that might be useful under some circumstances. It's a possibly new tool in your tool-belt. I didn't mean to suggest that a hammer "is the best way to go" for any job. But sometimes the problem is a nail, and a hammer is the best tool.

You highlighted the value of "debugging a program live," which is true, but I thought you meant using a sophisticated debugger (gdb / Visual Studio, etc). For example, Microsoft Visual Studio has (or used to have) the ability to load plug-ins that help you visualize complex structures in the memory of the running process, for instance showing a Bitmap as an actual image on your screen. I thought that's what you meant, or just using gdb or some other sophisticated debugger. But yes, that was my misunderstanding.

Shared memory is nice, and I've used it before, but it required me to store all of my interesting objects in shared memory structures, which meant changing how my application was written. Yes, sometimes I could see that being amazing. Or, I suppose, you could make a large copy of them.

I've also done the trick where, if I hit a URL on my app, then it took the point clouds, pixels, etc, and provided a rendered PNG image of them. But it could do it without requiring me to restructure my program's memory layout (other than a few multi-threading locks, when needed.) And I've done that over WebSockets, with interactive rendering rates, basically like a VNC. And what was nice about that, again, was that I could run a process on one machine, and debug it (through HTML) from another. And grant that ability to other users, without having to tell them how to install a debugger app, and giving that app access to connect over the network, and etc.

The ports thing was in reference to the ability of Microsoft Visual Studio (and presumably other debuggers) to do remote debugging, attaching themselves to a process running on another computer. So, you have to be able to connect to that process. Whereas my proposal could work on normal HTTP / HTTPS ports, and with authentication, could enable a user to access debug views.

Anyway, yes, I very much agree with you that debugging a program live is highly underrated, and I think it's good if people share different ways to do that.


Very neat! So every "public" API had a corresponding URL which could be accessed over HTTP? Sounds like an HTTP-based overlay network within the domain of your application(s).


I use C++ to write business servers that, along with managing data, do some heavy calculations and reporting. I see no problems at all using C++ (in combination with some libraries) for this purpose. They're blindingly fast and memory-efficient. Using modern C++, they're not any more difficult to develop than using "traditional" means.


Pretty much the same as every other language - if you're working in X, and you have a team of X developers, you'll want to use a framework that's also in X so that your toolset doesn't spiral out of control.

In C++ you could link in a Go webserver with cgo, or use Spring/Vert.x via JNI, but you probably won't.


Provide a web API for existing C++ applications and libraries.


I'm not sure there is much of a use-case. Most enterprises want to focus on developer productivity. Most modern web frameworks are more than fast enough; whether you use Spring, Kestrel, or ASP.NET doesn't really matter anymore, and even Flask is more than good enough for 99% of use-cases.

Given how capable even cheap cloud hardware is now, optimizing for developer productivity is better than choosing the "very fast" C or Rust framework which no one understands how to program for. Also, C, Rust, etc., being harder, lower-level languages to learn, tend to be both harder to hire for and more likely to end up with more bugs than the higher-level languages.


> Most enterprises want to focus on developer productivity

And that is something that I generally am OK with, to a point. If your application isn’t performance sensitive, have at ‘er and use whatever you feel is going to be the most productive, even if it potentially has higher opex.

Bugs are an interesting thing. There are basically two flavours of bugs: requirement errors and implementation errors. High- vs low-level languages don't do much to alleviate the first category at all (e.g. not thinking through how "random shuffle" should work in a music player). High-level languages can help reduce the number of coding errors compared to lower-level languages, but they do have their own quirks (e.g. using an empty list as a default argument in Python).

Ultimately, though, it’s all about working within the constraints you’re given, whether time-to-market, development costs, opex costs, system performance, etc. I’m intrigued by this framework because I have a high performance image processing system written in C++17 and there’s been some discussion recently about an HTTP API for it. Why is it in C++ to begin with? It’s on a somewhat resource constrained board (Jetson Nano) and has to run at at least 70fps for what it’s doing. 14 milliseconds to grab the image from the camera, run it through TensorRT, and do something useful with the output. I would have considered Rust as well, but a good chunk of this is leveraging existing C++ libraries (OpenCV, FLIR Spinnaker, TensorRT, etc) and fighting with wrappers and weird impedance mismatches is not my idea of a good time.


>Most enterprises want to focus on developer productivity

While I absolutely agree, it all depends on scale. If you're using, for example, 10 cloud instances, a 2x performance improvement will not justify extra person-months spent developing a more performant solution. On the other hand, if you run 1,000,000 servers, even a 1% performance boost might be worth that effort.


There's an additional point: on 4 cloud instances, a 5x performance improvement means you have only one machine to manage instead of several, and then your ops costs are way smaller.

There's a big number of companies who have between 4 and 20 machines serving Python, Node, or PHP apps and spend a lot of effort managing them with overkill solutions like Kubernetes, when it could be a single machine or two using more frugal languages.


OK, I feel like an idiot, but where is the code in the repo that produces the GUI - i.e. where is the Suitcase code?


EDIT: okay, I'm the idiot for not actually opening the source archive and verifying that there's anything in there worth looking at.

You aren't the idiot in this equation. Inexplicably, the source can be found in zip or tar format on the download page:

https://github.com/Impedimenta/Suitcase/releases/tag/1.0.0-a...


The "source code" zip has the same files as in the repo. No source.


Holy hell, the source archive is huge.

I just saw that the demo is slow to boot and wanted to see the horror of the code.

Ok no, there is no code in the archive. Only images.


I guess it isn't open source ¯\_(ツ)_/¯ Strange how GitHub has apparently become the default distribution platform


Yes, you're an idiot for thinking a project on GitHub is open source.

What a lame use of GitHub. Can I get the minute back I wasted reading that readme?


Has the R core team publicly stated that they disagree with the direction Wickham's Tidyverse is going? Genuinely asking, as I love the Tidyverse but would be interested to hear arguments against it.


I am not sure why the original commenter said that the tidyverse is more backwards-compatible than R core. It used to introduce breaking changes every 6 months or so.

Also they like it this way and promote it: https://twitter.com/hadleywickham/status/1175388442802479104


They still do.

I like the tidyverse, but Hadley's struggles with lazy evaluation and arguments have cost me lots and lots of time updating internal code at various workplaces.

Don't get me wrong, the tidyverse is great, but if I was writing R code that I expected to run without supervision for a long time, I'd avoid it as much as possible.


There are more than enough ways to keep it going. This has been addressed by several tools.

Personally, I have some tidyverse code that is 8 years old and it still works.


Yes, and I have plenty of it which does not.


If you are looking for a faster, more concise alternative I highly recommend data.table.


Perl is also very concise.


Moving up to a major version (1.0) implies there could have been breaking changes, if you follow semver. And including something like `unnest_legacy()` is helpful for people making the transition.

Just as the `stringsAsFactors = FALSE` default change happened on the transition from 3.x to 4.x because it was breaking, dplyr 1.0 had breaking changes.


Hadley is a member of the R Foundation, for what it's worth: https://www.r-project.org/foundation/members.html


R core and the R Foundation are different things: https://www.r-project.org/contributors.html


Or those places aren't testing? The UK has ~62,000 confirmed cases and ~7,100 deaths - that implies a >10% mortality rate (7,100 / 62,000 ≈ 11.5%), which is definitely not true. Why is the apparent mortality rate so high? Because they only test those with severe symptoms - the actual number of cases is much higher than 62,000.


To be fair, the mortality rate could be a lot higher than estimated. That has huge error bars too. We all look on the bright side (why it might be lower). There's also a dark side.


Based on what? Nothing points to a higher mortality rate - but some countries are admittedly not testing a lot. Even if you quadruple the estimated mortality rate, the UK's confirmed-case numbers would still be an undercount.


The virus takes a while to run its course. We don't know how many people die in the end. There's a huge number of people who have neither recovered nor died yet. A lot of people focus on the Diamond Princess, which was at <1% fatality in late March. It's already up to 1.6% (the 12th passenger just died). 82 are still sick, of whom 9 are in severe condition. That's one of the first sets of cases, so that's a lower bound with perfect medical care. Those 9 who are still in severe condition will almost certainly have permanent lung scarring. I'm not sure of the status of the 82, but they've been sick a long, long time -- we're now something like two months in.

Lower bound: Deaths/(total cases)

Upper bound: Deaths/(deaths + recovered)

Some media in France tend to focus on the upper bound and report numbers in the 10-30% range, so the country is very open to very severe measures. The US tends to be optimistic and everyone plays it down, so the virus is spreading like wildfire.


On the other hand, I don't think "perfect medical care" is an accurate description of being quarantined on a boat with limited diagnostics and an illness your doctors have never seen before, and I also think the cruisegoing cohort skews much, much older than a general population.


Of course. As I said, you can try to paint a pretty picture or an ugly picture. We don't know the mortality rate very well yet (or much of anything else). Every estimate has huge error bars in both directions, and the scientific estimates are sort of a median.

People in the US tend to look for every silver lining, preliminary study showing a potential breakthrough, and any reason it might not be so bad. That's a bias which has led to this not being taken seriously enough at any point.

I kind of treat the WHO estimates (1% with ventilator, 6% without, and 3.5% median) as just that: best-available estimates. It might be much, much worse. It might be much, much better.

