Hacker News | Dwedit's comments

7zip.com has never been the official website of the project. It's been 7-zip.org.

How can the average 7zip user know which one it is?

Search results can be gamed by SEO, and there have also been cases of malware developers buying ads so that links to the malware download show up above legitimate ones. Wikipedia works only for projects prominent enough to have a Wikipedia page.

What other mechanisms are there for finding the official website of a piece of software?


There is normally a Wikipedia page for every popular program, and it usually contains the official site URL. That's how I remember where to actually get PuTTY. Wikipedia can potentially be abused if it's lesser-known software, but in general it's a good indicator of legitimacy.

So Wikipedia is now (informally) part of the supply chain, which means there's yet another set of people who will try to hijack Wikipedia, as if we didn't have enough already. Just great.

You can corroborate multiple trusted sources, especially those with histories. You can check the edit history of the Wikipedia article. Also, if you search "7zip" on HN, the second result, with loads of votes and comments, is 7-zip.org. Another is searching the Arch Linux package repos; you can check the git history of the package build files to see where it's gotten the source from.
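For the Arch route, the idea is to look at the package's build file and see which upstream URL it fetches from. A minimal sketch; the PKGBUILD fragment below is invented for illustration (the real file lives in the distro's packaging repo):

```python
import re

# Illustrative PKGBUILD contents -- in practice you would fetch the real
# file from the distro's packaging repo and inspect its git history.
pkgbuild = '''
pkgname=7zip
pkgver=24.09
source=("https://7-zip.org/a/7z${pkgver//./}-src.tar.xz")
'''

# Pull out every upstream URL the package build downloads from.
urls = re.findall(r'https?://[^"\s)]+', pkgbuild)
print(urls)
```

If the domain in `source=` matches what other trusted sources say, that is one more point of corroboration.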

And are we really going to go through all that brouhaha for a single download of an alternative compressor? And then multiply that work as a best practice for every single interaction on the Internet? No, we're not.

My point was more along the lines of "there's no need to complain about Wikipedia being hijackable, there are other options", and now you're complaining about having too many options...

You don't need to do everything. They're options. Use your own judgment.


The downloads for some programs are often on some subdomain page with like 2 lines of text and 10 download links for binaries, even for official programs. It's so hard to know whether they are legit or not.

Not exactly news; Wikipedia has been used for misinformation quite extensively, from what I recall. You can't always be 100% sure with any online source of information, but at least you know there is an extensive community that will notice if something's fishy sooner rather than later.

I was always impressed by how fast wikipedia editors revert that kind of stuff, so I think it's great advice actually!

What's your solution? If you search google for 7-zip the official website is the first hit.

Avoid downloading stuff off the internet and avoid search engines.

In a post-AI world, asking how not to be scammed is hard, because now everything can be faked.

Trust what you definitely know but still verify.

Especially in the next 5-10 years that's going to become the reality, so I guess sit tight and prepare for the waves and tsunamis of scams.


How would you ensure that the "average user" actually gets to the page he expects to get to?

There are risks in everything you do. If the average user doesn't know where the application he wants to download _actually_ comes from, then maybe the average user shouldn't use the internet at all?


> How would you ensure that the "average user" actually gets to the page he expects to get to?

I think you practically can't and that's the problem.

TLS doesn't help with figuring out which page is the real one, EV certs never really caught on, and most financial incentives make such mechanisms unviable. The same goes for additional sources of information like Wikipedia, since that just shifts the burden of combating misinformation onto the editors there, and not every project matters enough to have a page. You could use an OS with a package manager, but not all software is packaged like that, and that doesn't immediately make it immune to takeovers or bad actors.

An unreasonable take would be:

> A set of government-run repositories and mirrors under a new TLD which is not allowed to be used for anything other than hosting software packages, similar to how .gov ones already work, be it through package manager repositories or websites. Only source code can be submitted by developers, who also need their ID verified and need to sign every release; it then gets reviewed by the employees and is only published after automated checks as well. Anyone who tries funny business goes to jail. The unfortunate side effect is that you now live in a dystopia and go to jail anyway.

A more reasonable take would be that it's not something you can solve easily.

> If the average user doesn't know where the application he wants to download _actually_ comes from then maybe the average user shouldn't use the internet at all?

People die in car crashes. We can't eliminate those altogether, but at least we can take steps towards making things better, instead of telling them that maybe they should just not drive. Tough problems regardless.


Open source software will have a code repo with active development happening on it. That repo will usually link to the official web page and download locations.

Not universally true. Open source just means that the code is available, not that development happens in the open. (But 7-Zip does have a GitHub repo.)

1. Go to the wikipedia article on 7-Zip

2. Go to the listed homepage


> How can the average 7zip user know which one it is?

I dunno, if you type "download 7zip" into Google, the top result is the official website.

Also, 7zip.com is nowhere on the first page, and the most common browsers show you explicitly it's a phishing website.

This is actually a pretty good case of the regular user being pretty safe from downloading malware.


I feel I need to clarify my earlier comment. I was asking how a user can tell, in general, what the legitimate website of a piece of software is, not just how to know that 7zip.com is malicious.

Are the search removals and phishing warnings reactive or proactive? Because if it's the former, then we don't really know how many users were affected before security researchers got notified and took action.

Also, 7zip is not the only software to be affected by similar domain-squatting "attacks." If you search for PuTTY, the unofficial putty.org website will be very high on the list (top place when I googled "download putty"). While it is not serving malware, yet, the fact that the more legitimate-sounding domain is not controlled by the original author does leave the door open for future attacks.


One way is to consult the same source(s) where the user learned about the software in the first place.

> I dunno, if you type "download 7zip" into Google, the top result is the official website.

Until someone puts an ad above it.


Sure, but the answer to "How can the average 7zip user know which one it is?" would then be "do a Google search and use uBlock Origin".

How does the user know they are using the official uBlock Origin?

The Mozilla extension store doesn't have ads, so it's the top item. It has clear download counts and a "recommended" icon.

So the advice is to install it from the extension store.


> Also, 7zip.com is nowhere on the first page

In an incognito window, for me, it's the 3rd result.


It's possible, although I can't replicate this result anymore.

In a Google search I don't see it on the first page, and the only sketchy link on page 2 is https://7zip.dev/en/download/.

Bing is worse, since it shows 7zip.com on the 2nd page, but the site refuses to load.

But I am using Thorium with Manifest V2 uBlock, and Edge with the medium setting for tracker/ad blocking.


open About in the app?

Does anyone know the history of auto-tiling in games? I know that Dragon Quest II (1987) had this feature for water tiles on the overworld, before it got backported to the North American version of Dragon Warrior 1.

It would be fun to know what the oldest autotiled game is. Dig Dug from 1982 had them, I think. Digger from 1980 might have used them.

There has to be some ancient ascii game that uses them. I'm sure they go back further than 1980.

Edit: now that I look at Digger & Dig Dug, I'm not sure either of those used autotiles. But I do think you'll find some games that used them in the very early '80s.


Ok, I'm pretty sure

3D Monster Maze (1981)

used an autotile system (albeit a very, very different sort) that made 3D-looking tiles appear in the correct place. Is that a proper autotiler? I don't know, but the principle is pretty much the same.

Anyway, I'll bet there are still old games out there with auto-tiles.


I remember making custom Warcraft II levels, and you could change the construction time for buildings. If you picked a construction time of zero, the building would be built very quickly, but be damaged. There's something hilarious about asking a peasant to build a farm, then seeing a burning farm and hearing the "Job's Done!"

Zig got so deep into avoiding "hidden behavior" that destructors and operator overloading were banned. Operator overloading is indeed a mess, but destructors are too useful. The only compromise for destructors was adding the "defer" feature. (Was there ever a corresponding "error if you don't defer" feature?)

No, defer is always optional, which makes it highly error prone.

There's errdefer, which only defers if there was an error, but presumably you meant what you wrote, and not that.

BTW, D was the first language to have defer, invented by Andrei Alexandrescu, who urged Walter Bright to add it to D 2.0 ... in D it's spelled scope(exit) = defer, scope(failure) = errdefer, and scope(success), which only runs if there is no error.
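Python has no defer, but its try/except/else/finally clauses line up roughly with the scope guards described above, which may make the mapping easier to see. A sketch, where write_atomically is a made-up helper:

```python
import os

def write_atomically(path: str, data: str) -> None:
    # Rough Python analogues of the scope guards:
    #   finally ~ scope(exit)    / Zig's defer    (always runs)
    #   except  ~ scope(failure) / Zig's errdefer (runs only on error)
    #   else    ~ scope(success)                  (runs only on success)
    tmp = path + ".tmp"
    f = open(tmp, "w")
    try:
        f.write(data)
    except Exception:
        os.remove(tmp)          # errdefer: discard the partial file
        raise
    else:
        os.replace(tmp, path)   # scope(success): commit the temp file
    finally:
        f.close()               # defer: always release the handle
```

The difference the thread is pointing at: in Zig you must remember to write the defer at all, whereas a destructor-based language runs the cleanup whether you remembered or not.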


There were the TI calculators produced through the 2000s, still powered by Z80, just not a "computer".

That's the first thing mentioned in the video.

If you combine "GBA Mus Ripper" and "SoundFont MIDI Player", you can get some seriously excellent sound for listening to GBA music.

"GBA Mus Ripper" detects the so-called "Sappy" music driver and extracts and converts the songs to MIDI files, and generates a SF2 soundbank file. Available at https://www.romhacking.net/utilities/881/

"SoundFont MIDI Player" plays back MIDI files. You can configure it to automatically load a SF2 soundbank file in the directory. When you load a converted GBA MIDI file, you get the high music quality of a modern feature-packed MIDI playback engine. Available at https://falcosoft.hu/softwares.html#midiplayer

It's not perfect though, as GBA games do not use true standard MIDIs. Some MIDI controller commands (like the modulation wheel) don't translate correctly.


Thanks for this, I was not aware that a good portion of GBA songs can be exported as MIDI. But I'm guessing that with good soundfonts you can get pretty reasonable quality for many of them!

They don't use General MIDI standard instruments. You need the extracted soundfont because the instrument numbers are unique to each game. To "improve" the sound, you need to edit that soundfont to have higher-quality instruments; you can't just swap out the whole soundfont for a different one.

What's the quality of the generated code like? Does it use explicit stack frames and all local variables live there? Does it move loop-invariant operations out of a loop? Does it store variables in registers?

I haven't actually tested this, but aren't the input and output handles exposed on /proc/? What's stopping another process from seeing everything?

Not a Linux expert, but I believe that at the very least it's time-sensitive: after the consumer process reads it, it's gone from the pipe, unlike env vars and CLI arguments, which stay there.

Yes, pipes are exposed at /proc/$pid/fd/$thePipeFd with user permissions [0].

Additionally, command-line parameters are always readable at /proc/$YOUR_PROCESS_PID/cmdline [1].

There are workarounds, but it's fragile. You may accept the risks, and in that case it can work for you, but I wouldn't recommend it for "general security". It seems it wouldn't be considered secure if everyone did it this way, so is it security through obscurity?

[0] https://unix.stackexchange.com/questions/156859/is-the-data-...

[1] https://stackoverflow.com/questions/3830823/hiding-secret-fr...
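The cmdline exposure is easy to demonstrate. A minimal Linux-only sketch; the sleeping child process and the "secret" argument are invented for illustration:

```python
import subprocess
import sys
import time

# Demonstrates why secrets should not be passed as argv: on Linux, any
# local process can read another process's command line from /proc.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(10)", "hunter2-not-a-secret"]
)
try:
    time.sleep(0.2)  # give the child a moment to finish exec'ing
    # /proc/<pid>/cmdline holds the argv strings separated by NUL bytes.
    with open(f"/proc/{child.pid}/cmdline", "rb") as f:
        argv = [a.decode() for a in f.read().split(b"\0") if a]
    print(argv[-1])  # the "secret" argument is plainly visible
finally:
    child.kill()
    child.wait()
```

Reading the same process's open pipes via /proc/<pid>/fd, by contrast, requires matching user permissions, and data already consumed from the pipe is gone.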


I guess the kernel is stopping that. I don't think permission wise you'd have the privileges to read someone else's stdin/out.

At least it's not a total bizarro unit like "Floppy Disk Megabyte", equal to 1024000 bytes.
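For reference, the arithmetic behind that unit, using the standard 3.5-inch high-density floppy geometry:

```python
# A 3.5" HD floppy: 2 sides x 80 tracks x 18 sectors x 512 bytes.
total_bytes = 2 * 80 * 18 * 512
print(total_bytes)                  # 1474560
print(total_bytes / 1_000_000)      # 1.47456  decimal megabytes
print(total_bytes / (1024 * 1024))  # 1.40625  binary mebibytes
print(total_bytes / 1_024_000)      # 1.44     the "floppy disk megabyte"
```

Only the mixed 1024 x 1000 unit yields the familiar "1.44 MB" marketing figure.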

Can someone explain why systemd was controversial in the first place? I just saw faster boot times when distros adopted it.

Scope creep, violation of the Unix philosophy, large attack surface area, second system syndrome, interoperability concerns. It didn’t help that the creator’s other well-known project, PulseAudio, was also controversial at the time, for similar reasons.

PulseAudio, when it came out, was utterly broken. It was clearly written by someone with little experience in low-latency audio, and it was as if the only use case was bluetooth music streaming and nothing else. Systemd being from the same author made me heavily averse to it.

However, unlike PulseAudio, I've encountered few problems with systemd technically. I certainly dislike the scope creep and appreciate there are ideological differences and portability problems, but at least it works.


Wrote about present-day reasons to dislike systemd a few days ago on HN; it covers most arguments of actual substance[0] (tl;dr: Unix philosophy, it homogenizes distros, and it may be too heavy for some low-resource environments).

Historic reasons mostly come down to systemd's developers being abrasive jerks to people. Systemd has some weird behavior choices that only really make sense from the perspective where every computer is a desktop; i.e., it terminates all processes spawned by a user when they log out, unless the processes were started in a specific way with systemd-run. This makes sense on a desktop: users log out, and you want everything they did to be cleaned up. On a server it makes less sense, since you probably want a tmux/screen session to keep running when you sign out of your SSH session (either by choice, as a monitoring tool, or because you have an unstable connection and need a persistent shell).

Every downstream distro got surprised by this change[1] and nowadays just ships a default configuration that turns it off, because upstream systemd developers weren't interested in hearing the complaints.
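For reference, the knob in question is `KillUserProcesses` in logind.conf (assuming systemd v230 or later, where the upstream default flipped to on); distros that turned the behavior back off ship something like:

```ini
# /etc/systemd/logind.conf
[Login]
# "yes" kills all of a user's processes when their last session ends;
# "no" keeps tmux/screen sessions alive after logout.
KillUserProcesses=no
```

Alternatively, individual users can opt their long-running processes out with `loginctl enable-linger` or by launching them under `systemd-run --scope --user`.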

Most of these footguns have been ironed out over the years though.

There's also some really dumb arguments to dislike systemd, most of which just can be summarized as "people have an axe to grind with Lennart Poettering for some reason".

[0]: https://news.ycombinator.com/item?id=46794562

[1]: It was always available, but suddenly turned on by default in an update.


To over-simplify, it's about conflicting philosophical alignment:

- systemd: integration and features at the cost of complexity and scope

- traditional: simplicity and composability at the cost of boot speed and convenience

systemd conflicts with the more traditional Unix philosophies as well as minimalism.


systemd also replaces some pre-existing services with its own reimplementations that are worse. The systemd developers aren't DNS or NTP experts, for example, but they act like they are, and the results reflect that.

You are correct. My comment was deliberately oversimplified. I felt integration and scope accounted for the details in a TL;DR one-slide answer.
