This is why Language Integrated Query (LINQ) is so powerful in .NET. You never need to learn an ORM-specific query language or syntax, since you've probably already learned to use LINQ with objects, and you just continue using it with a database.
True in the sense that you don't need to learn anything extra to build queries with an ORM. But it's not the complete picture: whenever you use an ORM, you should really know how the ORM works and what it does. For example, even if you use LINQ, you have to realize that the expressions you write in C# cannot always be translated to SQL. So you need to know the ORM and its limitations to be aware of what is and isn't possible. An ORM cannot magically translate the code behind a derived property into SQL.
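To illustrate the derived-property point, here is a minimal sketch (assuming EF Core; `db` is an assumed `DbContext` and `Customer` is a made-up entity):

```csharp
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";

    // Derived property: computed in C#, with no column behind it,
    // so the SQL translator knows nothing about it.
    public string FullName => FirstName + " " + LastName;
}

// Translates fine: concatenating mapped columns is something
// EF Core knows how to turn into SQL.
var ok = db.Customers
    .Where(c => c.FirstName + " " + c.LastName == "Ada Lovelace")
    .ToList();

// Fails at runtime with InvalidOperationException ("could not be
// translated"): FullName is not mapped, so the expression cannot
// become SQL.
var fails = db.Customers
    .Where(c => c.FullName == "Ada Lovelace")
    .ToList();
```

Both queries compile, which is exactly the trap: the error only surfaces when the query runs.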
An ORM should definitely not be regarded as a technology that lets you use SQL and an RDBMS without knowing those technologies. It should be regarded as a technology that can make you much more efficient with SQL and an RDBMS when you already have good knowledge of them. An ORM should help you with all the 'routine' SQL stuff, and a good ORM will let you drop to straight SQL whenever you want to do something the ORM does not support well.
I love LINQ, but I've noticed that there are people who think they know LINQ, and then think they can use an ORM because of that. In the end, they do not really know LINQ, they do not really know how an ORM works, and their code generates SQL queries with really poor performance. And then the obvious conclusion is that ORMs are bad. Well... no, they are not; they are very powerful tools, but it takes knowledge to use them well.
Yes, and LINQPad lets you write very powerful ad-hoc queries with results that are significantly easier to navigate. It's much better than using SQL Server Management Studio to view your database.
FWIW: I shipped a C# product with a very simple SQLite database. When we started the application, LINQ to SQL wasn't available (we used Mono), so we just wrote our own queries.
Best decision ever. We skipped all the futzing and the learning curve that come with an ORM. Granted, it only worked because we had a handful of tables, very basic CRUD, and infrequent schema changes.
Still, if you fall into either the "I must use an ORM" or the "I must hand-write SQL" camp, you're probably limiting yourself due to your biases.
Please elaborate on the limitations and paths! I'm doing technology selection research and EF Core is currently on the table, would love to hear what you encountered.
One of the most significant for our use case is that composite primary keys do not work with inheritance, which makes it impossible to set up constrained polymorphic relationships.
In general, if you use your database domain (classes) in your business logic, you will hit pitfalls when working with your database. This leads either to the N+1 problem (when you use lazy loading) or to data structures where a relationship is null when you try to traverse it.
(Basically, if you use lazy loading, your code will be slow and may require major refactors late in the project's life. If you use eager loading, your code may have bugs when expected relationships aren't populated.)
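The trade-off above can be sketched in a few lines (assuming EF Core; `db`, `Blog`, and `Post` are made-up names, and lazy loading is assumed to be enabled via proxies in the first snippet):

```csharp
// N+1 with lazy loading: 1 query for the blogs, then 1 extra query
// per blog the first time .Posts is touched -- N+1 round trips total.
foreach (var blog in db.Blogs.ToList())
    Console.WriteLine($"{blog.Name}: {blog.Posts.Count} posts");

// Eager loading: one query with a JOIN, no per-row round trips.
foreach (var blog in db.Blogs.Include(b => b.Posts).ToList())
    Console.WriteLine($"{blog.Name}: {blog.Posts.Count} posts");

// The flip side: with lazy loading disabled, forgetting the Include
// leaves blog.Posts empty (or null) -- the "expected relationship
// isn't populated" bug described above, and it fails silently.
```

The N+1 version often looks identical to the eager version at the call site, which is why it tends to slip through review until the table grows.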
I should point out that this isn't purely an EF Core limitation! The "correct" approach is to always copy objects from the database domain (classes) into business-logic domain (classes), but this is often more effort than it's worth.
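For what it's worth, a minimal sketch of that copy-out approach (assuming EF Core; all names here are made up):

```csharp
public class CustomerEntity          // persistence model, shaped for EF
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public record CustomerDto(int Id, string Name);  // business-logic model

// Project inside the query: only the needed columns are fetched, and
// no live entity (with its lazy-loading pitfalls) escapes the data layer.
var customers = db.Customers
    .Select(e => new CustomerDto(e.Id, e.Name))
    .ToList();
```

Projecting in the `Select` keeps the mapping in one place, but it is exactly the per-type boilerplate that makes people give up on separate models.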
I'd be really interested to hear some recommended ways of keeping separate db and domain classes. I'm actually in the early phases of a project where I have been using the Memento pattern to save and restore domain entities. I'm about ready to rip it out and switch back to persisting the domain classes directly in the db because it's so tedious. (Using EF Core 7 and C#.)
OS/2 ran its own native programs, usually developed in C/C++ and compiled against its OS APIs, which were very nicely designed for the time, much cleaner and better organized than most of the competition's.
These are not easily portable, as they rely on very specific and unique concepts. For example, the UI elements (a window, an icon, but also folders and files in the filesystem views) were object-oriented concepts in the API and in the UI system (made up of WPS, the Workplace Shell, and PM, the Presentation Manager). If you wanted, you could subclass the file object and define a new view for it, and all of the user interface would behave accordingly. Similarly, you could subclass the folder view and, for example, add a new status bar or a box with a shell at the bottom. This was unique at the time and I believe in part still is today.

The system was also constantly tracking UI elements and file system content in the background. So if you created an icon for a file (similar to a Windows shortcut) and then renamed the file, the icon would automatically pick up the new name, because the file system would fire a "rename event" and the UI would listen for such events and take appropriate action. You never had a broken link; if you deleted a file from the command prompt, the UI shortcut would be gone as well.
It was also programmable in Object Rexx, a scripting language that was compiled into bytecode on the first execution of a script. The file system API allowed any program to attach custom attributes to files, and the Rexx runtime would attach the bytecode as a custom attribute to the file, so future invocations would skip compilation unless the script had been modified. Object Rexx had bindings for most if not all of the operating system APIs, so while Object Rexx interpreters exist for other operating systems, the scripts are hardly portable unless they are trivial.
OS/2 could run MS-DOS and Windows 3.1 programs in a compatibility virtual machine. In that case, however, you would lose multitasking for those programs, because DOS and Windows 3.1 did not have multitasking: those programs would run only when in the foreground.
OS/2 could run Java programs, up to Java 1.2.
And thanks to some volunteer effort there was a fairly complete POSIX compatibility layer and a port of GCC, which allowed many Unix programs (Apache HTTP Server, for example) to be compiled for OS/2. There was also a port of XFree86 available, but it ran full screen, so you either used the native UI or switched to an XFree86 program, which would first bring XFree86 to full screen and then let you work with the program window (alt-tabbing back to a native program would of course do the opposite).
It was very advanced in many ways; for example, it supported so-called "installable file systems": you could add support for a file system by loading its driver at boot time, and when a file system did not support custom attributes, they would be saved in a special hidden file in the root folder. It officially supported HPFS (its own), FAT, and JFS, and there were IFS implementations for ext2, NTFS, and more, mostly ports from early Linux.
The UI was super customizable: you could have hot corners and multiple virtual desktops, and you could individually customize the fonts and background of each folder (you simply dragged a font from the font palette onto the folder, and your preference was recorded in the file system's custom attributes together with the last size and position of the folder window). Similarly, to change the icon of a program, you opened its preferences and dragged the new icon over the previous one, so for many things it had approaches similar to those in macOS.
I remember using an old IBM IDE that was implemented by subclassing the UI views for files and folders and enabling those special views on the file system folders representing your projects. A whole IDE integrated into the desktop by subclassing standard desktop objects: for example, the standard file explorer window would show IDE-specific columns in details view and a toolbar with build/run/debug buttons when you were browsing a folder that corresponded to an IDE project. A cool idea, even if not perfectly executed.
Good memories.
Edit: fixed some typos and clarified last paragraph with an example.
You forgot about SOM, which was much nicer to use than COM, with support for implementation inheritance and metaclasses, for Smalltalk, C, C++, and eventually Java as well.
Is my recollection correct that the screen origin (for drawing stuff) is bottom left, instead of top left as it is in most operating systems? I seem to recall reading an article about how this single fact made porting software between OS/2 and other OSes quite tedious.
I am sorry I can't help you here.
The only UI programs I implemented were rather simple, built with IBM VisualAge, which had a nice drag&drop visual editor for screens and dialogs. http://www.os2ezine.com/v1n11/vacpp1.gif
> Is it not possible to cross-compile OS/2 software?
No idea. Back in the day the hardware requirements would have been prohibitive; nowadays, however, you could install a compiler such as VisualAge for C++ in an OS/2 VM and download technical documentation such as the famous IBM Redbooks.
I guess it can be argued that Finland joining NATO is not very surprising (it has been discussed in Finland for years) and therefore not very interesting. But Sweden joining NATO this quickly would be more of a surprise and therefore much more interesting.
Imagine all the time and energy wasted by all these popular programming languages re-inventing and re-implementing the same features over and over again, and by people discussing and arguing about them.