przmk's comments | Hacker News

I'm biased, but when I see Discord as the only means of communication, it doesn't make for a serious project. I wish more projects would rely on IRC/Matrix + forums.


For better or worse, plenty of serious projects are using Discord for communication. It's not great, but IRC and Matrix have their own problems (IMO Zulip is the best of the bunch, but doesn't seem to be particularly widely adopted).


You're right. Just look at their branding. Such poor taste. I'm sure it's a scam.


I would venture to say that there is little overlap between X11 users and people with high-DPI screens.


We’re commenting under an article which tells you that you can have problems with Wayland and HiDPI screens. And I’m one of those people: I use X11 because Wayland failed on many levels, like buggy video playback, crashing while changing monitors, or simply waking up my laptop with an external monitor. I didn’t give it more than a few days to fix these (cheers to the author for trying this long), so I went back to X11, which is still buggy, but at a “you can live with it” level of buggy.

Btw, everybody I know, myself included, changes the font size and leaves the DPI scaling at 100%, or maybe 200%, on X11.


> Btw, everybody I know, myself included, changes the font size and leaves the DPI scaling at 100%, or maybe 200%, on X11.

Doesn't work if your screens are too different (e.g. 4k laptop screen and 32" desktop monitor).


You can scale down from a higher resolution to make the UI perceptibly the same size. You can do this with xrandr --scale, or for example via the GUI in Cinnamon on Mint after you check "fractional scaling" (under X, mind you).
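
For example, a minimal sketch with xrandr (the output names are assumptions; check what xrandr lists on your machine):

    # With the desktop DPI scale set to 200% for the HiDPI panel, render the
    # 1080p external monitor at a 2x virtual resolution and downsample it, so
    # UI elements end up roughly the same physical size on both screens.
    xrandr --output eDP-1 --auto \
           --output HDMI-1 --auto --scale 2x2 --right-of eDP-1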


It does work for Qt and KDE at least.


I have a setup with a high-DPI monitor mixed with a normal-DPI monitor, and KDE over Wayland just works fine. The only issues I found are LibreOffice doing weird over-scaling and Chrome/Chromium resizing its window into oblivion.


I’ve been using X11 with high-DPI screens since 2013, but with integer scaling (200% or 300%), never fractional scaling.


Nobody's going to buy monitors where they need fractional scaling or multiple monitors with mixed DPI if they know it's broken.


Everyone’s so excited about the wave of Windows users coming to Linux. Those people already have monitors.

I switched in 2018 and was surprised I couldn’t use fractional scaling on one monitor like I’d been doing for years on Windows.


Not to mention that fractional scaling is practically required in order to use the majority of higher-DPI monitors on the market today. Manufacturers have settled on 4K at 27" or 32" as the new standard, which lends itself to running at around 150% scale, so to avoid fractional scaling you either need to give up on high DPI or pay at least twice as much for a niche 5K monitor which only does 60 Hz.


Fractional scaling is a really bad solution. The correct way to fix this is to have DPI-aware applications and toolkits. This does in fact work: I have run Xfce under Xorg for years now on hi-DPI screens just by setting a custom DPI and using a hi-DPI-aware theme. When the goal is to have perfect output, why do people suddenly want to jump to stretching images?
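
The custom-DPI bit, as a minimal sketch (144 is just an example value; pick whatever matches your panel):

    # Tell Xft-aware toolkits to render at 144 DPI instead of the default 96,
    # e.g. from ~/.xinitrc or ~/.xprofile.
    echo "Xft.dpi: 144" | xrdb -merge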


The overwhelming majority of low-DPI external displays at this point are 24-27" 1080p.

Most high-DPI displays are simply the same thing with exactly twice the density.

We settled on putting exactly twice the pixel density into the same panel sizes because it facilitates integer scaling.


That doesn't gel with my experience, 1080p was the de-facto resolution for 24" monitors but 27" monitors were nearly always 1440p, and switching from 27" 1440p to 27" 4K requires a fractional 150% scale to maintain the same effective area.
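
For reference, the arithmetic behind that 150% figure (same 27" panel, so the scale factor is just the ratio of horizontal resolutions):

    3840 / 2560 = 1.5  ->  150% scale, i.e. the same effective 2560x1440 workspace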

To maintain a clean 200% scale you need a 27" 5K panel instead, which do exist but are vastly more expensive than 4K ones and perform worse in aspects other than pixel density, so they're not very popular.


Why not give up on high DPI?

Save money on the monitor, save money on the gpu (because it's pushing fewer pixels, you don't need as much oomph), save frustration with software.


4K monitors aren't a significant expense at this point, and text rendering is a lot nicer at 150% scale. The GPU load can be a concern if you're gaming but most newer games have upscalers which decouple the render resolution from the display resolution anyway.


I used to be like this. I actually ran a 14" FHD laptop with a 24" 4K monitor, both at 100%. Using i3 and not caring about most interface chrome was great; it was enough for me to zoom the text on the 4K one. But then we got 27" 5K screens at work, and that had me move to Wayland, since 100% on that was ridiculously small.


Why not 200% and increase font size slightly in all 3 cases?


Because although I don't care much about the chrome, I sometimes have to use it. For example, the address bar in Firefox is ridiculously small. Also, some apps, like Firefox (again), have a weird way of adapting scrolling to the zoom level. So if you zoom at 300%, it will scroll by a lot at a time, whereas 200% is still usable.

Also, 200% on an FHD 14" laptop means 960x540 px equivalent. That's too big to the point of rendering the laptop unusable. Also, X11 doesn't support switching DPI on the fly AFAIK, and I don't want to restart my session whenever I plug or unplug the external monitor, which happens multiple times a day when I'm at the office.


14" FHD is 157 ppi; 24" 4K is 184 ppi.
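
(Those figures are just the diagonal pixel count divided by the diagonal size in inches:)

    sqrt(1920^2 + 1080^2) / 14 ≈ 2203 / 14 ≈ 157 ppi
    sqrt(3840^2 + 2160^2) / 24 ≈ 4406 / 24 ≈ 184 ppi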

That really isn't that far off. If we imagined the screens overlaid semi-transparently, a 16-pixel letter would be over a 14-pixel one.

If one imagines an ideal font size for a given user's preferred physical letterform height, one could imagine an idealized size of 12 on one screen and 14 on the other, and setting it to 13 would be extremely close to ideal on both.

>So if you zoom at 300%, it will scroll by a lot at a time, whereas 200% is still usable.

This is because it's scrolling a fixed number of lines, which occupy more space at 300% zoom. Notably, this applies pretty much only to people running high-DPI screens at 100%, because if one zoomed to 300% otherwise, the letter T would be the size of the last joint on your thumb and legally blind folks could read it. It doesn't apply to setting the scale factor to 200%, nor to Firefox's internal scale factor, which is independent of the desktop, supports fractional scaling in 0.05 steps, and can be configured in about:config:

layout.css.devPixelsPerPx


Right, and 27" 5K is 218 ppi, which isn't that much more than the 24". But don't forget that viewing distance plays a big role in this, and my 14" laptop is much closer than a 27" monitor. Bonus points for our specific model having an absolutely ridiculous viewing angle, so if it's too close the outer borders are noticeably dark.


Why have a home if you can sleep in a cardboard box?


That's a really odd thing to say.

I don't really care about this but here's an example:

I have 2 27" screens, usually connected to a windows box, but while working they're connected to a MBP.

Before the MBP they were connected to several ThinkPads; I don't remember what screen size or scaling they had, and I don't even remember if I used X11 or Wayland. But the next ThinkPad that will be connected will probably be HiDPI and with Wayland. What will happen without buying a monitor? No one knows.


There is no particular reason for this theory to be true. X supports high DPI screens well and has for ages.


Fractional scaling is very common with high-DPI screens. I don't think I'd be able to have 175% scaling on my 14" 3K screen with X11.


Maybe it supports it, sure; the problem is that it doesn't work at all.


It does work and has worked for over a decade. You can configure scaling under settings in Cinnamon or Plasma, for instance, or via environment variables in a simple environment like i3wm.
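
For example, a minimal sketch of the environment-variable route (the values assume a 2x screen; adjust to taste):

    # e.g. in ~/.profile or ~/.xsessionrc, read before the WM starts
    export GDK_SCALE=2          # GTK integer scale factor
    export GDK_DPI_SCALE=0.5    # compensate font sizes for GDK_SCALE
    export QT_SCALE_FACTOR=2    # Qt scale factor (fractional values also work)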

The post is from the dev of i3wm, an X11 window manager, pointing out among other things how well his 8K monitor works under X11 and how poorly it works under Wayland.

You can also consult the Arch wiki article on HiDPI, which is broadly applicable beyond Arch.


Yes, I know all that. Except it doesn't work. At all.

Ten years ago there were cursor clipping issues, cursor coordinates issues and crashes and I've been home-baking patches for that.

Also it was impossible for one X session to span across two GPUs. Dunno if that was improved.

Now it's a bit better, but for sure your amdgpu will entertain you with nice little crashes when you run something heavy on a scaled display.

I'm not even talking about VRR, HDR and all that stuff.


In that time I've had HiDPI work perfectly, first on Nvidia and more recently on AMD GPUs, across several different distros and desktops, all running on X. They all worked out of the box and were able to scale correctly once configured.

The totality of my education on the topic was reading the Arch wiki on HiDPI once.

AFAIK one cannot span one X session across multiple GPUs, although AMD had something that it once referred to as "Eyefinity" for achieving this.

It is rarely needed; discrete GPUs often support 3 or even 4 outputs.

One may wonder if you tried this a very long time ago, back when AMD sucked and Nvidia worked well (2005-2015).


My ISP refuses to give you a static IPv6 prefix unless you're a business customer, despite having an "unlimited" amount of them. This results in me not bothering to set it up properly and focusing on IPv4 still.


Do you have a static IPv4, presumably a single IP?

I find it useful, mine does change periodically, but I just have a script that updates DNS when it changes:

   nsupdate -v -y "${KEY_ALGO}:${KEY_NAME}:${KEY_SECRET}" <<EOF
   server $DNS_SERVER
   zone $ZONE 
   update delete $RECORD AAAA
   update add $RECORD 300 AAAA $CURRENT_IP
   show
   send
   EOF
Sure some services might notice for a bit, but it's plenty good for me.
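
A sketch of how the CURRENT_IP variable above might be populated (the interface name and lookup method are assumptions, not part of the original script):

    # grab the first global IPv6 address on eth0
    CURRENT_IP="$(ip -6 addr show dev eth0 scope global \
        | awk '/inet6/ {print $2}' | cut -d/ -f1 | head -n1)"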


I don't have a static IPv4 address, so I have to use the DDNS support built into the Caddy plugin on my OPNsense router. From what I understand, you can't get a static "local" (I know, IPv6 has no direct equivalent) address to use for a reverse proxy — at least not in an easy manner. I might be completely wrong but that's why I don't bother with IPv6.


You’re looking for a Unique Local Address there. It’s a non-externally-routable address that you can use for internal connections.

https://en.wikipedia.org/wiki/Unique_local_address


Yep. ULA addresses are the equivalent of the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 space. [0] And you can use them to do NAT, just like with IPv4.

The huge difference from the IPv4 world is that the procedure for generating your /48 ULA prefix ensures that it's very, very unlikely that you will get the same prefix as anyone else. So, if everyone follows the procedure, pretty much no one has to worry about colliding with anyone else's network.
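
For illustration, a minimal sketch of picking such a prefix (this just takes 40 random bits rather than following the exact RFC 4193 hash procedure):

    # fd00::/8 + a random 40-bit Global ID = your /48 ULA prefix
    printf 'fd%s:%s:%s::/48\n' \
        "$(od -An -N1 -tx1 /dev/urandom | tr -d ' ')" \
        "$(od -An -N2 -tx2 /dev/urandom | tr -d ' ')" \
        "$(od -An -N2 -tx2 /dev/urandom | tr -d ' ')"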

Following the procedure has benefits. For example, VPN providers who want to use IPv6 NAT can do that without interfering with the LAN addressing of the host they're deployed to... companies that merge their networking infrastructure together can spend far less (or even zero) time on internal network renumbering... [1] etc, etc, etc.

[0] And link-local addresses are the equivalent of 169.254.0.0/16 space.

[1] Seriously, like a year after one BigCo merger I was subject to, IT had still not fully merged together the two companies' networks, and was still in the process of relocating or decommissioning internal systems in order to deal with IPv4 address space constraints. Had they both used ULA everywhere it was possible to do so, they could have immediately gotten into the infosec compliance and cost-cutting part of the network merging, rather than still being mired in the technical and political headaches forced upon them by grossly insufficient address space.


Problem with ULA is that it's functionally useless on a dual-stack network, because clients will attempt to use IPv4 before they attempt to use ULA.

https://blog.apnic.net/2022/05/16/ula-is-broken-in-dual-stac...


> Problem with ULA is that it's functionally useless on a dual-stack network.

Nope, it works just fine. I use it for stable local addressing and LAN host AAAA records and let my ISP-delegated global prefix drift as my ISP wishes it to.

And -as it happens- the prose in that article about source address selection is incorrect.

On Linux, source address preference appears to be application-specific. For example, curl prefers IPv6 addresses, and falls back to IPv4 if the v6 connection fails. I checked just now by removing my globally-assigned IPv6 address and capturing the traffic created by executing 'curl https://www.google.com'.

I know for a fact that BIND 9 prefers non-link-local IPv6 source addresses over IPv4 addresses, because until I set up my home-built router to reject Internet-bound traffic coming from my ULA, a sufficiently-long failure of the DHCPv6 server run by my ISP would cause name resolution to get very, very, very slow: when the global prefix expired, BIND started using its host's ULA as a source address, and my router dutifully relayed that traffic into my ISP's black hole.

I'm certain that very many applications unconditionally prefer non-link-local IPv6 addresses over IPv4 ones. You might also care to pay attention to this comment and its publication date: [0]

OTOH, Firefox prefers IPv4 connections in that scenario and doesn't even attempt a v6 connection. I assume Chrome is the same way.
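
If you want to check which family a given client picks first, a rough sketch (assuming curl and tcpdump are installed; any hostname with both A and AAAA records works):

    # watch what the client actually connects to...
    sudo tcpdump -ni any host www.google.com &
    curl -sv https://www.google.com >/dev/null
    # ...or force each address family explicitly for comparison
    curl -6 -sv https://www.google.com >/dev/null
    curl -4 -sv https://www.google.com >/dev/null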

And, that article suggests GUA space as a replacement for ULA space:

> All of these are serious pitfalls that arise when attempting to use ULA. The simple and more elegant answer is to simply leverage GUAs.

Which... uh... no. I'd have to go through my local RIR to get an allocation, and then negotiate with my ISP to get it routed. Given that I'd have to go through ARIN because I'm in the US, and I have a boring residential account with my ISP, neither of those things will ever happen. The entire point of ULA is that no coordination with external entities is required to do network-local addressing.

Also, the documentation that that article links to in order to discourage people from deploying NAT66 is almost literally "It's exactly as complicated as NAT44. Why do it when you can get global IPv6 addresses?!?", which isn't a useful complaint when your intent is to exactly replicate what you get from IPv4 NAT in an IPv6 world. I agree that globally-routable addresses are better, but if your site admin demands (for whatever reason) that you not have them, then -because of the collision-avoidance property of the ULA prefix generation procedure- you're better off than with IPv4 NAT.

[0] <https://blog.apnic.net/2022/05/16/ula-is-broken-in-dual-stac...>


Note that although the policy is that you choose a random prefix, nothing actually enforces this, and nothing stops you from using fd00::1, fd00::2, etc., just like 10.0.0.1 etc.


I technically have a dynamic IPv4 address from my ISP. I've had the same for five years now, across multiple power outages.

I also have a dynamic IPv6 prefix. That one changes at least once a week, regardless.


My ISP is xfinity. They say the same thing but my IPv6 address hasn't changed any more frequently than my IPv4. In my experience it changing isn't any more annoying than my v4 changing so I'm not sure why people still get up in arms about it.


In about a year of treating my comcast-assigned ipv6 address as static, it changed once.

Sadly, this happened despite me specifically requesting the same address as always. That caused me some grief. But it's not common.


On the other end of the connection, there are physical servers and routers. Every once in a while they change how things are connected/deployed for maintenance, upgrades, etc.


Pretty much. I have my cable modem on continuous power and it will keep the same address pretty much forever. It changed twice: once when I had a 48-hour power outage and shut everything down, and once during maintenance on the cable company's side when they rebooted their equipment.


My xfinity ipv4 changes once every few years, if that. I treat it as static and update things if or when it changes, which fortunately isn’t too much work. I never requested anything special regarding it, and I have a normal/non-business account. I wonder why some change often and others don’t?


I had Xfinity for 4 years and my IP changed once in that time! Now I have fiber from CenturyLink, and it changes anytime I need to reboot the fiber modem or my firewall. Different companies, same metro area though. That too makes me wonder about how both manage their allocations, given the difference in IP assignments.


This should be illegal. Yes, in this case, I'm not saying that as a figure of speech. ISPs are a utility, and building that kind of artificial scarcity into something that is really damned near infinite is highly anti-consumer.


Get a virtual server and do the things on it that you'd want a static address for. Use a VPN connection back to your home to merge it with your network. This is a great way to deal with CGNAT.


My ISP (naming no names...erum...Spectrum) refuses to even admit they know what IPv6 is. It's like asking the NSA what Menwith Hill is for...


https://www.spectrum.net/support/internet/ipv6

https://www.spectrum.net/support/internet/ipv6-faq

> IPv6 is available today with an IPv6 capable modem in the majority of Spectrum’s footprint.


I've had v6 on spectrum for 5 years


I recently moved house and looked at an offer from a new ISP with a long-term lock-in but a cheap price. They used CG-NAT. I instead chose one which gives me as many IPv4s or IPv6s as I can reasonably use, doesn't oversubscribe its upstream connectivity, etc.

For home internet service I would prefer to pay extra for a better service, it's too important to try to penny-pinch 0.1% of my income on it.

But then, I live in a capitalist country where there's competition; I believe in some countries you don't get a choice.


FYI it's practically impossible for an ISP not to oversubscribe its upstream connectivity unless it either spends way too much money or offers very slow service to users. Consider ten thousand users with 1G connections - should they have 10 terabit upstream?

The more practical thing to look for is that they aim to upgrade it based on need, instead of arbitrarily throttling the users.


100G interconnects are very cheap, but I'm more talking about oversubscription in the ISP network -- as they have multiple peering and transit arrangements, it's clear that if you have 10,000 Gbit worth of customers, you don't need 10 Tbit of connectivity for each transit provider.


Where I live the cable system is fine, and the cellular system is fine... until one goes down, then the other gets flooded with traffic and stops working leaving no internet at all.


For those in the UK who want a static IPv4 or IPv6 block, AAISP offer an L2TP service for £2/month. It's limited to 3 megabit/s but might be enough for some use cases.


Same here, I had a working IPv6 setup previously with my DSL provider, but now that I moved to a fibre connection, the new one refuses to support it.


But do they give you prefix delegation (PD)?

My prefix is tied to the MAC address of the device that's connected to the PON.


It's crazy how, after all this time, OpenOffice is still mentioned so often despite being kind of dead for a while.


I'd switched my father from OpenOffice to LibreOffice, but the upgrade story was painful, & years ago I switched him back to OpenOffice after it was given to Apache. Figured the projects would consolidate under Apache, but no.

& wow, checking up on it now, things have not been going well at Apache for the last decade.


What exactly do you mean by upgrade story? The LibreOffice UI hasn't changed for a long time (although there are new UI options like the ribbon, etc.), so I'd think the switch would be pretty straightforward.


I realized when posting this reply that these experiences are from over a decade ago. This was on Windows; the self-upgrade option just didn't work well (large download, wouldn't complete or something, and I would end up reinstalling instead).


It would have helped if you told us what you think is broken in Iced.


You're right. Unfortunately, I don't have a good iced-specific list, so I left it out, but here are a few things: startup is blazingly slow (one of the big issues of all those awful Electron apps which the native frameworks can supposedly avoid), https://github.com/iced-rs/iced/issues/2455 https://github.com/iced-rs/iced/issues/615.

Or high memory use, another fatal flaw of all those Electron apps: https://github.com/iced-rs/iced/issues/820

Or window can't be centered (while this is basic, it's in a bit of a blame-the-broken-Wayland niche https://github.com/iced-rs/iced/issues/1287)

Or lack of accessibility https://github.com/iced-rs/iced/issues/552

Or poor keybinding support (though in the case of iced I see it's somewhat improved in this release with some IME and non-Latin support, https://github.com/iced-rs/iced/pull/3134, so I don't know exactly what's left broken)

Then there are non-runtime issues, like the lack of documentation.


My iced apps start up much faster than the Qt (PySide) and Electron apps they replaced, and run faster too. Similarly, memory usage is much lower, and shipping a single 25 MB binary to customers is preferable to bundling some multi-GB runtime environment. So in practice I'm glad development focuses more on completing the framework before playing performance-metric golf, even if minimizing startup time and memory usage would be nice.

The 'lack' of documentation was only a problem for me when I was doing a cursory comparison of UI frameworks. Once I seriously started learning iced, the resources available, including the community on Discord, were plenty to start being productive making real apps within a week or two (and I had very limited Rust experience before that).


What are the exact timings? And what OS? For a Windows example, a proper native framework would be fractions of a second, Electron would be many seconds (especially on first run), and iced would be a few seconds (as also noted in the linked issue), but that's still way closer to the Electron range, not the "instant" native feel.

Runtime speed is something different; even Electron is not that bad, since the UIs are mostly too primitive to cause much of a visible slowdown. Here the styling/animation capabilities are usually more prominent.

> including the community on Discord

that's not a good baseline to cover the basics due to variable latency and predictability.


200-300ms on Windows 11, with ~70MB of idle RAM usage.

> that's not a good baseline to cover the basics

I wasn't suggesting that everyone should join the discord and ask how to write 'Hello World'. There's an official (but WIP) book, unofficial guides, and many examples spanning basic to advanced usage for reference. An active discord is complementary to those. Having more learning resources is important for wide adoption, but that's not the priority in pre-1.0 development.

> start up much faster than the Qt (PySide) and Electron apps they replaced, and run faster too

I forgot to mention, the iced apps have taken less time to develop more features with fewer bugs (and no segfaults).


Strange. I checked the latest version of a counter app: same slow startup (~2 s) as before and ~70 MB of RAM, vs a simple native window app that takes 0.2 s and 1 MB.


When I switch to compiling with just the tiny-skia software renderer, the counter app is reduced to a 4 MB binary that starts in 60 ms to first paint (16 ms for window creation) and takes 7 MB of RAM. So if you don't need GPU acceleration that may be more to your liking.


The RAM is from the wgpu buffer; it's not 70 MB of RAM for the counter itself. If you add 10,000 counters, it won't increase the RAM.

It doesn't take 2 secs to load here, so maybe switch to a better OS.


You had the chance to provide real feedback but instead chose to link to closed GH issues from 10 versions ago.

I encourage you to try and do better next time.


First, the issue is not a closed = solved issue; if you bothered to actually read it, you'd realize there is another, currently still open, issue about this. Second, there are no "issues" (plural); there is only one such link.

I encourage you to read better next time and not close your eyes so tight as to miss the feedback.


It doesn't boot into the desktop by default — it uses its own session with the Gamescope compositor. The desktop is easily accessible through the power menu though.


Gamescope is really nice. I am running Steam headlessly with that on my home server.


Does it work well with the Google TV remote, for example? Last time I used NewPipe on the TV, the UI was completely unsuited for remotes. I can't imagine using streaming services in the browser would be any better.


As someone who lives in the terminal, I can't say I've ever had the need to do that. It's only by reading the comments that I've realised that there's no search in Ghostty.


Who in this thread is saying it's all good?


And that makes them "idiots"? Is it really necessary to resort to insults because they don't immediately support your edge case?


It certainly makes them idiots, from the perspective of artificially restricting their audience; AFAIK it's very simple to publish on OpenVSX, and there are no extension changes needed. Most devs I know use VSCodium, not VSCode.

If I were their investor I would not be happy.


> Most devs I know use VSCodium, not VSCode.

And how many devs do you know? What percentage of all the devs in the world does that represent?

If you know a million devs who mostly use VSCodium instead of VSCode, that might be significant. But I'm guessing you just jump to conclusions about almost everything based on your own bubble and lack of critical thinking skills. Then you can just call other people idiots while patting yourself on the back for being so smart.


I think that, counter to your experience, I don’t know a single developer who uses Codium. While I get the frustration, it may come soon, but not immediately.

