Hacker News | mkozlows's comments

I agree with that for programming, but not for writing. The stylistic tics are obtrusive and annoying, and make for bad writing. I think I'm sympathetic to the argument this piece is making, but I couldn't make myself slog through the LinkedIn-bot prose.

No. The Apple TV _service_ does, and you can configure that service to be some kind of weird god service if you want. But you can also treat it like any other normal service, one that only comes up if you launch it. In that case, the home screen is just a straight icon grid with no kerfuffle.

I think he's really getting at something there. I've been thinking about this a lot (in the context of trying to understand the persistent-on-HN skepticism about LLMs), and the framing I came up with[1] is top-down vs. bottom-up dev styles, aka architecting code and then filling in implementations, vs. writing code and having architecture evolve.

[1] https://www.klio.org/theory-of-llm-dev-skepticism/


I like this framing. Nice typography btw, a pleasure to read.


I mean, your employer will pay it. $1K/month is cheap for your employer.

But there is an interesting point about what it does to hobby dev. If it takes real money just to screw around for fun on your own, it's kinda like going back to the old days when you needed to have an account on a big university system to do anything with Unix.


Open source software and small bootstrapped startups are more what I had in mind. Of course an established company can pay it. I don't like the idea of a world where all software is backed by big companies.


I'm not too worried about startups: We used to have startups when they had to buy expensive physical servers and pay for business-class T1 connections and rent offices and all that. The idea that you can start a company with $20 and a dream is relatively new, and honestly a little bit of friction might be good.

But yeah, I share your concern about open source and hobby projects. My hope would be that you get free tiers that are aimed at hobby/non-profit/etc stuff, but who knows.


Nobody pushed you to use git when you were comfortable with svn? Nobody pushed you to use Docker when you were comfortable running bare metal? Nobody pushed you to write unit tests when you were comfortable not? Nobody pushed you to use CSS for layout when you were happy using tables?

Some of those are before your time, but: The only time you don't get pushed to use new technologies is when a) nothing is changing and the industry is stagnant, or b) you're ahead of the curve and already exploring the new technology on your own.

Otherwise, everyone always gets pushed into using the new thing when it's good.


The engineers using svn were the ones who were pushing for git - I was the one saying "we can't, because none of the conversion tools competently preserve branch history, and it's even worse on repos that started in CVS". No one responsible for repos was pushing for git; it was end-users pulling for it (and shutting up when they learned how much work it would cause :-) That looked nothing like the drug-dealer-esque LLM push I've been seeing for the last 3 years.

(Likewise with CVS to svn: "you can rename files now? and branches aren't horrible? Great, how fast can we switch?" - no "pushing" because literally everyone could see how much better it was in very concrete cases, it was mostly just a matter of resource allocation.)

In the context of this discussion, it feels more like IPv6 :-)


Git had obvious benefits over svn.

Docker has obvious benefits over bare metal.

Etc.

My own experiences with LLMs have shown them to be entertaining, and often entertainingly wrong. I haven't been working on a project I've felt comfortable handing over to Microsoft for them to train Copilot on, and the testimonials I've seen from people who've used it are mixed enough that I don't feel like it's worth the drawbacks to take that risk.

And...sure, some people have to be pushed into using new things. Some people still like using vim to write C code, and that's fine for them. But I never saw this level of resistance to git, Docker, unit tests, or CSS.


The resistance to those things was less angry, but it was there. giveupandusetables dot com no longer exists, but you can find plenty of chin-stroking blog posts about it. It was a big argument in the late aughts!

> Nobody pushed you to use git when you were comfortable with svn?

Generally, sensible companies held off on this sort of transition until the tool was mature and stable.

Way back in the day, I was involved in an abortive move from CVS to SVN. It went great for a week, then the SVN db corrupted itself irretrievably, taking a week's work with it... I think we finally moved for real about a year later, once SVN had abandoned the extremely unreliable BDB backend its early versions used.

Forcing adoption of [AI tool of the month] now feels a bit more like, say, adopting Darcs back during the DVCS wars than adopting git after it had won.


> Otherwise, everyone always gets pushed into using the new thing when it's good.

and then there is AS/400 and all the COBOL still in use which AI doesn't want to touch.


Some people don’t like to be pushed. They want their own rhythm.

But when you stop trying new stuff (“because you don’t want to”), it is a sign that your inner child got lost. (Or you have depression or burnout.)


Lol. Surely this depends on what the new stuff is? Looks like all nuance goes out of the window when agents are involved.

There are no reputable studies that show EVs having anything like the harms of legacy cars. The worst you can get is that if you're on a carbon-intensive grid, a Hummer EV might be as bad as a compact gas car.


I mean, welcome to literally every tech startup valuation 2021 vs now. 2021 was so amazing for stock valuations.


The part where it favorably mentioned namespaces also blew my mind. Namespaces were a constant pain point!


Namespaces are a cool idea that didn't really seem to pan out in practice.


We use MuleSoft where I work, and XML namespaces are a constant issue. We never managed to define an API spec in such a way that the RAML compiler and the APIkit validator would both accept the same payload. In the end we just had to turn off validations in APIkit.


Namespaces were fun! But mostly used for over-engineering formats and interacted with by idiots who do not give a toss. Shout out to every service that would break as soon as elementtree got involved. And my idiot colleagues who work on EDI.
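
For the curious, this is roughly the failure mode, as a minimal sketch with Python's xml.etree.ElementTree (the invoice vocabulary here is invented): the round-trip is perfectly legal per the namespace spec, but any consumer that matches on the literal prefix string falls over.

  # Sketch (hypothetical invoice payload) of ElementTree rewriting prefixes.
  import xml.etree.ElementTree as ET

  original = (
      '<inv:Invoice xmlns:inv="urn:example:invoice">'
      '<inv:Total>42</inv:Total>'
      '</inv:Invoice>'
  )

  root = ET.fromstring(original)
  # The namespace URI survives, but the prefix comes back as ns0:
  print(ET.tostring(root, encoding="unicode"))
  # <ns0:Invoice xmlns:ns0="urn:example:invoice"><ns0:Total>42</ns0:Total></ns0:Invoice>

  # Unless you pin the prefix globally before serializing:
  ET.register_namespace("inv", "urn:example:invoice")
  print(ET.tostring(root, encoding="unicode"))
  # <inv:Invoice xmlns:inv="urn:example:invoice"><inv:Total>42</inv:Total></inv:Invoice>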


Namespaces give you human-readable GUIDs as element names. This is important. I agree their implementation and integration are a bit inconvenient.
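
To make that concrete, here's a small sketch (the second namespace URI is invented; the first is Dublin Core): the name that actually matters is the (namespace URI, local name) pair, so two vocabularies can both call an element "title" without colliding.

  # The "real" element name is the expanded (namespace URI, local name) pair.
  import xml.etree.ElementTree as ET

  doc = ET.fromstring(
      '<book xmlns:dc="http://purl.org/dc/elements/1.1/"'
      '      xmlns:shop="urn:example:catalog">'
      '  <dc:title>Moby-Dick</dc:title>'
      '  <shop:title>Summer sale edition</shop:title>'
      '</book>'
  )

  # ElementTree reports expanded names in Clark notation: {uri}localname
  for el in doc:
      print(el.tag, "=", el.text)
  # {http://purl.org/dc/elements/1.1/}title = Moby-Dick
  # {urn:example:catalog}title = Summer sale edition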


Nope, they were great.

Our AOLServer-like clone in 2000 used them to great effect in our widget component library.


I took an XML class as it neared its heyday, and even the teacher was rolling his eyes at the inclusion of namespaces.

Amateur hour.


XML was designed as a document format, not a data structure serialization format. You're supposed to parse it into a DOM or similar format, not a bunch of strongly-typed objects. You definitely need some extra tooling if you're trying to do the latter, and yes, that's one of XSD's purposes.
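
A tiny sketch of what I mean by the document-oriented view (the markup is made up): mixed content, text interleaved with elements, maps naturally onto a node tree and very awkwardly onto strongly-typed objects.

  # Walking mixed content with the stdlib DOM.
  from xml.dom.minidom import parseString

  dom = parseString("<p>XML was a <em>document</em> format first.</p>")

  for node in dom.documentElement.childNodes:
      if node.nodeType == node.TEXT_NODE:
          print("text:   ", repr(node.data))
      elif node.nodeType == node.ELEMENT_NODE:
          print("element:", node.tagName, "->", node.firstChild.data)
  # text:    'XML was a '
  # element: em -> document
  # text:    ' format first.'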


That's underselling XML. XML is explicitly meant for data serialization and exchange, XSD reflects that, and it's the reason for JAXB, the Java XML binding tooling.

Don't get me wrong: JSON is superior in many respects, and XML is utterly overengineered.

But XML absolutely was _meant_ for data exchange, machine to machine.


No. That use case was grafted onto it later. You can look at the original 1998 XML 1.0 spec first edition to see what people were saying at the time: https://www.w3.org/TR/1998/REC-xml-19980210#sec-origin-goals

Here's the bullet list from that, verbatim:

  The design goals for XML are:

    XML shall be straightforwardly usable over the Internet.
    XML shall support a wide variety of applications.
    XML shall be compatible with SGML.
    It shall be easy to write programs which process XML documents.
    The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
    XML documents should be human-legible and reasonably clear.
    The XML design should be prepared quickly.
    The design of XML shall be formal and concise.
    XML documents shall be easy to create.
    Terseness in XML markup is of minimal importance.

Or heck, even more concisely from the abstract: "The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML."

It's always talking about documents. It was a way to serve up marked-up documents that didn't depend on using the specific HTML tag vocabulary. Everything else happened to it later, and was a bad idea.


Please bear with me...

Data exchange was baked into XML from the get-go. The following predate the 1.0 release and come from people involved in writing the standard:

"XML, Java, and the future of the Web", Jon Bosak, *Sun Microsystems*, last revised *1997.03.10*; see the section "Database interchange: the universal hub":

https://www.ibiblio.org/bosak/xml/why/xmlapps.htm

"Guidelines for using XML for Electronic Data Interchange", Version 0.04, *23rd December 1997*:

https://xml.coverpages.org/xml-ediGuide971223.html

The origin of the latter, the EDI/XML WG, was as the successor to an EDI/SGML WG that had started in the early 1990s, born out of the desire for a "universal electronic data exchange" that would work cross-platform (VMS, mainframes, Unix, even DOS, hehe) and would leverage the successful SGML DocBook interoperability.

Was it niche? Yes. Was it starting in SGML already, and baked into XML/XSD/XSLT? I think so.


To be fair,

> XML shall be straightforwardly usable over the Internet.

is machine-to-machine communication.

To me, XML is an example of worse is better, or rather, better is worse. It would never have come out of Bell Labs in the early '70s. Neither would JSON, for that matter.


And as for JAXB, it was released in 2003, well into XML's decadent period. The original Java APIs for XML parsing were SAX and DOM, both of which are tag and document oriented.
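
If it helps, here's what the tag-and-event style looks like, sketched with Python's xml.sax rather than the Java API (the model is the same; the little order document is made up): you react to start/text/end events as the parser walks the document, and nothing hands you typed objects.

  # Event-oriented (SAX-style) parsing: callbacks per tag and text run.
  import xml.sax

  class Printer(xml.sax.ContentHandler):
      def startElement(self, name, attrs):
          print("start", name)

      def characters(self, content):
          if content.strip():
              print("text ", repr(content))

      def endElement(self, name):
          print("end  ", name)

  xml.sax.parseString(b"<order><item>widget</item></order>", Printer())
  # start order
  # start item
  # text  'widget'
  # end   item
  # end   order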


This is performance art, right? The very first bullet point it starts with is extolling the merits of XSD. Even back in the day when XML was huge, XSD was widely recognized as a monstrosity and a boondoggle -- the real XMLheads were trying to make RELAX NG happen, but XSD got jammed through because it was needed for all those monstrous WS-* specs.

XML did some good things for its day, but no, we abandoned it for very good reasons.


XSD was (is) not so easy to adopt, but I don't agree that it's a monstrosity.

Schemas are complicated. XSD is a response to that reality.

The XML ecosystem is messy. But people don't need to adopt everything. Ignore RELAX NG, ignore DTD, use namespaces sparingly, adopt conventions around NOT using attributes. It generally works quite well.

It's a challenge to get comfortable with XSD but once that happens, it's not a monstrosity. Similarly, XSLT. It requires a different way of thinking, and once you get that, you're productive.
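
For anyone curious what that different way of thinking looks like, here's a minimal sketch (run through Python's lxml, a third-party library, since the stdlib has no XSLT engine; the people/person vocabulary is invented): you declare what each kind of node becomes and let the processor walk the tree, rather than writing the traversal yourself.

  # Declarative, template-driven transformation with XSLT 1.0 via lxml.
  from lxml import etree

  stylesheet = etree.XML(b"""
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" omit-xml-declaration="yes"/>
    <xsl:template match="/people">
      <ul><xsl:apply-templates select="person"/></ul>
    </xsl:template>
    <xsl:template match="person">
      <li><xsl:value-of select="@name"/></li>
    </xsl:template>
  </xsl:stylesheet>
  """)

  transform = etree.XSLT(stylesheet)
  doc = etree.XML(b"<people><person name='Ada'/><person name='Grace'/></people>")
  print(str(transform(doc)))
  # <ul><li>Ada</li><li>Grace</li></ul>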


Also, as someone else pointed out, the same complaint about JSON Schema ("it isn't in the standard, it's a separate standard") applies to XSD. It is still a separate standard, even though during the height of XML mania it sometimes seemed like XSD was inseparable from XML. XML did have DTD baked in, and maybe the author meant DTD in that section, but that was even worse than XSD (and again, both are why RELAX NG happened).


XSLT was a stripped-down DSSSL in XML syntax.

DSSSL was the Scheme-based domain-specific "Document Style Semantics and Specification Language".

The syntax change came in the era of general Lisp-syntax bashing.

But to XML syntax? Really? That was so surreal to me.

