Hacker News

story

Learn x86-64 assembly by writing a GUI from scratch (2020) (gaultier.github.io)

638 points | thunderbong posted a year ago

152 comments

wudangmonk said a year ago:

Being self-taught, I decided: what better way to learn programming than starting with the basics? Assembly was my first language. I could read and program in it, so I considered that I knew the language.

It wasn't until I created a simple 8086 emulator, one that takes the raw machine-code instructions and not only translates them into assembly instructions but actually emulates what those instructions do, that I finally felt like I REALLY knew assembly.

My suggestion to others who want to learn assembly is to skip any assembly books. Using whatever language you want, first build a translator from machine code into assembly instructions, and then write an emulator. You only need to implement a small subset of the instructions; check out Godbolt and translate some simple programs to see which instructions you need to implement.

Other than that, all you really need is the 8086 manual; it has all the information there. I also found this site useful when implementing the flags: https://yassinebridi.github.io/asm-docs/8086_instruction_set.... This takes less time than finishing a book, and you learn a LOT more.

The goal is not to program in assembly at all, but to truly understand the cost of everything and what you can expect from your hardware.

nerpderp82 said a year ago:

Becoming skilled with GDB and knowing how to generate assembly listings from your tooling really helps with understanding.

I learned assembly by reading the assembly listings from the C compiler. It is extremely interesting to be able to internalize how high-level constructs are compiled and optimized.

BlackLotus89 said a year ago:

There was a reverse engineering guide that I quite liked which introduced you to assembly by first writing C examples, compiling them, and then analyzing the disassembled output.

It was quite a long guide, but I would recommend it to anyone starting out. I don't have it in my bookmarks it seems, but I will try to update my comment tomorrow when/if I find it.

Edit: damn, I guess it was https://beginners.re/ before it became paywalled. The Web Archive still has copies of the book, but if you like it you should consider buying it, even if it means signing up for Patreon m-( I still have a few versions of the book somewhere as well. I'll have to dive in again to see if it is as good as I remember.

circuit10 said a year ago:

https://godbolt.org/ is great for this

harry8 said a year ago:

Doing it locally is better, for mine.

eg

  https://gitlab.com/hal88/junkcode/-/blob/master/c_template.c
single file, chmod +x, compiles itself and executes the binary; you can easily give the -S flag to gcc or clang (uncomment one line), or better yet run objdump on the binary.

The great thing is it still works if you include a bunch of local headers that are a hassle to supply to godbolt. Latency locally is a win.

The #if 0 trick for the compiler lines and #else for the code is a good one. Got it from Rusty Russell of iptables fame iirc. Write your script in C, why not?

no_news_is said a year ago:

Do you have that set as private? Even logged in, I get:

  Page Not Found

  Make sure the address is correct and the page hasn't moved.

  Please contact your GitLab administrator if you think this is a mistake.
claytonaalves said a year ago:

This repo is private. Can you make it public?

harry8 said a year ago:

I'm sorry, I thought it was public. Can't work out how to change it. Nuts.

  #if 0 //instructions to build and run
  THIS_FILE=$0
  BIN_FILE=/tmp/$(basename $0)
  gcc -std=c11  -O0 -g -march=native $THIS_FILE -Wall -Wextra -o $BIN_FILE
  if [ $? -ne 0 ]; then
      echo "Bug in your C code or there is something wrong with your operating system's c compiler..."
      exit 1
  fi

  # run it
  $BIN_FILE "$@"
  retval=$?

  # uncomment below to examine the generated machine code
  #
  # objdump -DC $BIN_FILE | less -p '<main>'

  # uncomment below to examine the assembly language the compiler
  # thinks it is generating
  #
  # gcc -S -std=c11 -O0 -g -march=native $THIS_FILE -Wall -Wextra -o ${BIN_FILE}.s
  # vim ${BIN_FILE}.s
  
  # clean up
  rm $BIN_FILE
  exit $retval

  #else // c program

  #include <stdio.h>
  #include <stdint.h>
  
  
  
  int main(int argc, char **argv)
  {
      for(int i = 0; i != argc; ++i) {
          printf("argv[%d] = %s\n", i, argv[i]);
      }
  
  }
  
  #endif //end c program
ykonstant said a year ago:

Damn, I have to sign up to see the code?

dundarious said a year ago:

Casey Muratori (of Handmade Hero fame, works/worked at RAD Game Tools for a long time) just did that as part 1 of his Performance Aware Programming series he’s doing on his Substack: https://www.computerenhance.com/

The “homeworks” only require implementing basic data transfer, arithmetic, and logic instructions, but I enjoyed it so I implemented everything except the interrupt handler stuff (into, etc.), and the BCD stuff (aaa, etc.).

I agree that it’s a good way to learn, and Casey provides a reference implementation.

gyulai said a year ago:

Vaguely related, since a lot of people here are mentioning assembly as a first programming language for learners: Knuth's "The Art of Computer Programming" and its associated fictitious "MMIX" processor ("MIX" on the volumes that aren't yet on their new edition).

Knuth's reasoning seems to be that higher-level languages go in and out of fashion all the time, but hardware and its associated assembly is quite sticky, so it's more "timeless". It's also a smaller set of primitives, so less overwhelming for the learner.

MMIX assembly is easier to understand than x86, having been designed specifically with learners in mind, and GCC even has a backend for MMIX, so you can write C code and see how GCC would translate it to MMIX assembly.

JohnFen said a year ago:

I did something very similar. Assembly was not my first language (it was my 4th), but I decided to learn it by writing a compiler and linker in it.

In for a penny, in for a pound.

> The goal is not to program in assembly at all but to truly understand the cost of everything and what you can expect from your hardware.

Entirely this. Also, to help you understand more deeply how computers really work.

That said, being able to program in assembly is still of great use to me. I do it to this day, usually on ARM processors -- not entire programs anymore, but critical parts.

rajeevk said a year ago:

My approach to learning assembly was to let the C compiler generate assembly (gcc -S -c) from C code, and then read the assembly to see how the C code maps to it. I have written a detailed article on this here: https://www.avabodh.com/cin/cin.html

29athrowaway said a year ago:

I would suggest:

Step 1. Implement a simple calculator

Step 2. Create a file format that encodes sequences of instructions and operands for your calculator

Step 3. Create an interpreter for that file format that runs your file. Add an accumulator, flags that represent overflow and such, and an instruction pointer.

Step 4. Add comments support to your file format (optional)

Step 5. Add support for logical operators, comparisons to your file format and interpreter

Step 6. Add support for labels and jumps to your file format and interpreter

Step 7. Add support for a stack, memory and related operators to your interpreter

In the end you should end up with something like

https://yjdoc2.github.io/8086-emulator-web/compile

jart said a year ago:

Why not recommend the computer program whose docs you're linking? Emu8086 is one of the nicest tools I've ever used. It's probably unobtainable these days though since last time I checked it's no longer for sale. I have it though if anyone wants to do a midnight rendezvous.

wudangmonk said a year ago:

I never realized it was a program until just now, when you mentioned it. I only ever used that single page I linked, which I came upon one day while searching for how instructions affect the flags register.

moreice said a year ago:

I can see the value of writing an emulator, but what was the benefit of translating from machine code to assembly instructions?

shzhdbi09gv8ioi said a year ago:

> but what was the benefit of translating from machine code to assembly instructions

In order to read it, I suppose? There's no reason for memorizing binary encoding schemes. I mean, you will learn that 0x90 is NOP on x86 but that doesn't help you a whole lot.

snickerbockers said a year ago:

x86 performance is usually better with smaller instructions, which can be achieved by writing code that doesn't require prefix bytes, and also by using certain instructions that sometimes have alternative, shorter encodings for specific registers.

Sirenos said a year ago:

When you say emulation, do you mean at the logic gate level?

steppi said a year ago:

This is a really cool little example. I've been teaching myself assembly recently and have found Learn to Program with Assembly (2021) [0] by Jonathan Bartlett to be really valuable. I had initially looked through his freely available book Programming From the Ground Up (2003) [1], which covers x86 assembly, and ended up buying the updated book after finding the old one to be well written but out of date. I've been programming in C for a long time, and it's been very cool to dig a little deeper and understand better what's really going on under the hood.

[0] https://www.bartlettpublishing.com/site/books/learn-to-progr...

[1] https://download-mirror.savannah.gnu.org/releases/pgubook/Pr...

anta40 said a year ago:

Someone ported PGU code to MacOS: https://github.com/lmartinho/pgubook-macos-x86-64

Seems very handy, since most assembly tutorials nowadays are Windows/Linux-specific.

userbinator said a year ago:

Always use the standard function prologs and epilogs

This sounds like another one of those common "learn Asm by acting like a compiler" articles, which IMHO completely misses one of the best reasons to learn Asm: you can beat the compiler on size (relatively easy), speed (often harder), or both, precisely by not acting like one. I suspect the author, like so many others, also learned from only reading (some) compiler output. The complete lack of any use of static initialised data is shocking.

    mov rdi, rdi
    lea rsi, [rsp]
Please don't do this. Even a compiler can do better at O0.

Stripped and OMAGIC (--omagic linker flag, from the man page: Set the text and data sections to be readable and writable. Also, do not page-align the data segment): 1776 bytes (1 KiB)

Besides being a very notable date (was that deliberate?), 1776 is closer to 2k than 1k. I suspect if you wrote it in C with inline Asm for the syscalls, it wouldn't be much bigger (and may even be a little smaller.)

If you want to see what Asm can really do, the sub-1k categories in the demoscene are well worth looking at.

jart said a year ago:

Like the Lambda Calculus in 383 bytes. https://justine.lol/lambda/

freedomben said a year ago:

What sort of jobs are there these days that use assembly? Is anybody still using it directly?

These are pretty non-specific, but these are areas I know about already, for others who may have the same question as me:

1. Compiler development

2. Security research (malware analysis/reverse engineering) - although not much if any writing assembly, just reading

3. Kernel development - again, mostly reading assembly rather than writing it. The bulk of the code is written in C (or, as a very recent development, potentially Rust)

4. Driver development - mostly C but some devices can involve assembly

zxexz said a year ago:

You'd be surprised how often knowing assembly can come in useful - I certainly never expected it. I work in the healthcare sector, which is infamous for having tons of legacy software. At least a couple times a year I end up finding it useful to load some ancient binary into radare2 or Ghidra for debugging, extracting data, or just adding a jmp to avoid a problematic syscall. I'm by no means an assembly expert, but know enough to get the job done.

npsomaratna said a year ago:

Yup. I've not used assembly at work, but it's come in useful at home. A couple of times, I've had to insert or adjust JMP commands to get legacy software to function.

retrac said a year ago:

Small embedded systems. There are microcontrollers that cost like 3 cents in bulk. 8-bit machines with a few kilobytes of PROM and perhaps just 64 bytes of RAM. While such machines often do have C compilers (of a sort) for them, old-school optimization techniques sometimes come into play.

lost_tourist said a year ago:

I used to enjoy that stuff, but these days if it seems like a job requires any significant assembly, I just turn it down. I hate worrying about every single byte of memory; it takes all the fun out for me. But I do know people who love figuring out a tough problem and always having to be efficient with every bit and byte.

aidos said a year ago:

There are tough problems at every layer of the stack. Granted, the problems look very different, but they’re no less challenging. I think that is one of the great things about being a software developer - wherever you look, there are interesting things to explore. I studied assembly some 20+ years ago and have barely seen it since, though I’ve worked on a lot of complex technical problems since then.

mysterydip said a year ago:

That's me. I love figuring out more memory-efficient or less-cycle-use ways to do the same thing on my microcontrollers. Save x bytes here to add new feature y. Maybe it's nostalgia for an era of programming I missed out on (80-90s DOS game development)? Or it just scratches some itch in my brain.

JohnFen said a year ago:

Heh, different strokes for different folks. For me, the closer I am working to the metal, the more I enjoy the work. I especially enjoy squeezing every drop out of the system, so worrying about every byte, and minimizing cycles, is part of the fun.

sfink said a year ago:

Anything where you get crash reports back from the field. It is very valuable to be able to read assembly code and map registers to their purpose, and then perhaps back to the source code that generated the assembly. Debuginfo will sometimes give you some of that, but is unreliable, incomplete, and can be hard to match up to the stripped binary you're looking at. Recognizing values that are likely to be stack vs uninitialized or poisoned vs corrupted vs nullptr or offsets to nullptr... it can turn a crash report from absolutely cryptic into something that gives you the lead you need.

(Also, if you are dealing with something with mass deployment, it's good to recognize the single-bit flips that are hallmarks of bad RAM. But don't assume too much; bit flips are also the sign of bit flag manipulations.)

zerkten said a year ago:

According to friends, reading it is still fairly prevalent for Windows and other products at Microsoft. It's kind of a requirement to succeed in jobs with a C/C++ product where you might only have memory dumps to debug. It's also expected to some extent if you are a performance guru in some areas.

jcranmer said a year ago:

Any sort of performance engineering will likely require competence with assembly, although direct programming in assembly may be relatively rare in such roles.

steppi said a year ago:

Another example is writing hand optimized matrix and vector operation routines tailored to specific hardware for BLAS libraries [0].

[0] https://en.m.wikipedia.org/wiki/Basic_Linear_Algebra_Subprog...

KeplerBoy said a year ago:

Is this really still a thing?

Do people go further than using intrinsics for, let's say, AVX?

retrac said a year ago:

Sure. You'll see it very often in codec implementations. From rav1e, a fast AV1 encoder mostly written in Rust: https://github.com/xiph/rav1e/tree/master/src/x86

Portions of the algorithm have been translated into assembly for ARM and x86. Shaving even a couple percent off something like motion compensation search will add up to meaningful gains. See also the current reference implementation of JPEG: https://github.com/libjpeg-turbo/libjpeg-turbo/tree/main/sim...

riceart said a year ago:

> Is this really still a thing?

Why wouldn’t it be? Compilers haven’t advanced tremendously in the past two decades in terms of optimizations and don’t have much new to add to high performance SIMD numeric kernels.

steppi said a year ago:

Yeah. I'm going to be helping to work on expanding CI for OpenBLAS and have been diving into this stuff lately. See the discussion in this closed OpenBLAS issue gh-1968 [0] for instance. OpenBLAS’s Skylake kernels do rely heavily on intrinsics [1] for compilers that support them, but there's a wide range of architectures to support, and when hand-tuned assembly kernels work better, that's what are used. For example, [2].

[0] https://github.com/xianyi/OpenBLAS/issues/1968

[1] https://github.com/xianyi/OpenBLAS/blob/develop/kernel/x86_6...

[2] https://github.com/xianyi/OpenBLAS/blob/23693f09a26ffd8b60eb...

KeplerBoy said a year ago:

interesting stuff. thanks for the links

mikebenfield said a year ago:

FWIW I've found that compilers' code generation around intrinsics is often suboptimal in pretty obvious ways, moving data around needlessly, so I resort to assembly. For me this has just been for hobby side projects, but I'm sure people doing it for stuff that matters run into the same issue.

hu3 said a year ago:
sgt said a year ago:

Interestingly, I believe Go enforces the mnemonics to be UPPER CASE.

pjmlp said a year ago:

Not only that: the assembly isn't quite the real one. That is yet another thing they took from the Plan 9 compilers.

https://go.dev/doc/asm

CodeArtisan said a year ago:

Video encoders usually have a decent amount of assembly code.

eg:

x264 https://code.videolan.org/videolan/x264/-/tree/master/common...

ffmpeg https://git.ffmpeg.org/gitweb/ffmpeg.git/tree/HEAD:/libavcod...

if you go up, you will find folders for other architectures (ARM, MIPS, SuperH, ...)

z3t4 said a year ago:

For programming CPUs that cost less than $1, like sensors. For low power usage, or small form factor.

junon said a year ago:

> mostly just reading assembly, not writing it

Not always the case. You're not writing it all the time but you still have to write it. For example the trampoline I use to jump from the boot stage to the kernel entry point is common-mapped between the two memory spaces and performs the switch inside of it, and then calls the kernel. That's all in assembly.

fuzztester said a year ago:

Developing hardware diagnostic utilities can be another area.

The kinds of utilities that come built into ROM, or that you run from a CD or USB drive, where you test memory and disk by writing different bit patterns to them, reading them back, and checking if they match, probing the hardware, processor and peripherals, etc.

eschneider said a year ago:

Board bring up usually needs a bit of assembly. Certainly needs some reading knowledge of assembly.

__loam said a year ago:

Apparently there's been some renewed interest in the Game Boy Advance as a platform for indie development (as well as other retro platforms). Programming on those requires some assembly knowledge, though I'm not sure you could call that a job.

Hackbraten said a year ago:

Some software packages written in assembly during the 70s and 80s are still in production today, and may be difficult and expensive to replace. I did some contract work for a steel plant in 2018. The primary control system for the plant was written in assembly. They were in the middle of doing a full rewrite, but in the meantime, they had to do maintenance and bugfixing for the in-production system in assembly.

PartiallyTyped said a year ago:

Hypervisor work also involves assembly.

JohnFen said a year ago:

> What sort of jobs are there these days that use assembly? Is anybody still using it directly

I use assembly on the regular in my embedded systems work. It lets you get away with using a lower-spec (and therefore cheaper) microcontroller than you could otherwise.

jamesfinlayson said a year ago:

I've worked at a place where the core system was still 10% or so assembly (it was written in the 1970s). I'm not sure if it needed much modification but it was absolutely business critical.

duped said a year ago:

Writing it from scratch is not nearly as common as reading it and understanding it. I think pretty much every systems programmer will have to stare at disassembly output from time to time.

bryanlarsen said a year ago:

Also bootloader development will usually require some assembly.

boffinAudio said a year ago:

Pro Audio dev here .. Assembly is well and truly entrenched in this industry. DSP programmers still live by it in the ANC and plugins worlds ..

jandrese said a year ago:

I'd guess there are more jobs that use assembly than jobs where you write the X server protocol directly to the socket.

TheLoafOfBread said a year ago:

5. Emulators - You will be trying to understand every instruction as deeply as possible.

panxyh said a year ago:

High end malware development.

nvy said a year ago:

Could you elaborate, or provide a link as a jumping off point for someone who wants to learn more about this topic?

lost_tourist said a year ago:

It's the difference between being a script kiddie and an actual hacker/cracker. Any web search will turn up thousands of links on hardware hacking at all levels.

nvy said a year ago:

That's not really what I'm asking, though. Parent claimed "high-level malware development" happens in ASM, but as far as I know a good chunk of sophisticated malware (stuxnet, wannacry, etc.) are written in plain ol' C or C++, so I categorically disagree that the differentiator between "script kiddie" and "leet haxor" is in whether or not someone writes assembly.

But I'm interested in reading about malware written in assembly and was hoping for a diving board into that particular pool.

freedomben said a year ago:

> as far as I know a good chunk of sophisticated malware (stuxnet, wannacry, etc.) are written in plain ol' C or C++, so I categorically disagree that the differentiator between "script kiddie" and "leet haxor" is in whether or not someone writes assembly.

Indeed. It's also useful to differentiate between malware and exploits (although the former often includes the latter). For exploits, it's common to use assembly when finding and developing the exploit, but unless you're severely byte-constrained you're just going to use tools to generate your shellcode instead of hacking it out by hand. Even then, there are tons of pre-written shellcode snippets you can reuse from places like Metasploit. The number of jobs where you're paid to write an exploit is small, unless you can get onto an elite team in a government agency (or contractor). Malware, on the other hand, is mostly written in higher-level languages like C.

timacles said a year ago:

Check out the legendary Poc||GTFO articles: https://pocorgtfo.hacke.rs/ they are a treasure trove for this sort of information.

High-level hacking requires assembly because you're trying to reverse engineer opaque APIs that aren't meant to be interfaced with. What other way is there to do that than examining what things are being moved to which memory addresses?

slt2021 said a year ago:

game engine development

nerpderp82 said a year ago:

Reading stack traces and low level tracing logs.

bufo said a year ago:

Deep learning work when optimizing inference.

josephcsible said a year ago:

> Note that Linux has a ‘fun’ difference, which is that the fourth parameter of a system call is actually passed using the register r10.

Why is Linux singled out there? No OS can use rcx for that, since the syscall instruction itself overwrites rcx with the return address.

fsckboy said a year ago:

Are you saying "they couldn't use rcx, so they use r10, just like everybody else"? Because the quote says r10, and you brought up rcx.

in any case, there's a good discussion of registers and syscalls here

https://stackoverflow.com/questions/53290932/what-are-r10-r1...

josephcsible said a year ago:

Here's the full quote:

> Following the System V ABI, which is required on Linux and other Unices for system calls, invoking a system call requires us to put the system call code in the register rax, the parameters to the syscall (up to 6) in the registers rdi, rsi, rdx, rcx, r8, r9, and additional parameters, if any, on the stack (which will not happen in this program so we can forget about it). We then use the instruction syscall and check rax for the return value, 0 usually meaning: no error.

> Note that Linux has a ‘fun’ difference, which is that the fourth parameter of a system call is actually passed using the register r10.

That makes it sound like it's only Linux that uses r10 and every other OS uses rcx as the 4th parameter with syscall, but really no OS uses rcx with syscall.

laxd said a year ago:

Some use the stack for syscall params.

josephcsible said a year ago:

On amd64? Which ones?

jagged-chisel said a year ago:

Seeing the headline, one could be scared away thinking this is bare metal from scratch. It is not.

The app is an X11 client and will run under an OS, meaning you’ll learn to make system calls and other library calls to get things on the screen. Very educational, and not scary-deep.

tengwar2 said a year ago:

It's also not that hard to write a simple real bare-metal GUI in assembler. I had to build some lab apparatus under OS/2 1.0, which didn't have a GUI, and most of it was in assembler. It's a while back, but from memory the main bits I needed were: lines and circle segments (using Bresenham's algorithm); mouse pointer draw/refresh; writing text (I copied bitmaps for VGA text from MS-DOS); menus; text entry; and mouse drag as a way of selecting an area or marking a circle. Apart from the mouse pointer, it was single-threaded and pretty simplistic, but good enough to be in use for several years. It had to be: it wasn't easy to move to OS/2 1.1 and get the Presentation Manager GUI, because they changed the memory model, and for performance reasons some of the bit-bashing I needed for a non-GUI part of the system was done with a large amount of wire-wrap. Anyway, the bare-metal GUI bit was really pretty simple to implement.

zerkten said a year ago:

Writing Win32 programs in assembly was a niche in the late-90s. This post inspired me to do some googling for a project I was familiar with back then and discovered the author has brought it back to life at https://github.com/ThomasJaeger/VisualMASM.

FartyMcFarter said a year ago:

If I remember correctly there were websites with tutorials naming this style of programming "win32asm". This is the one I remember:

http://www.afturgurluk.net/documents/Info/Win32ASM/Iczelion%...

anta40 said a year ago:

The MASM SDK project is still alive and pretty active.

https://www.masm32.com/download.htm

Nowadays I mostly work on Linux and Mac, and wonder why no equivalent project exists. Perhaps those Unix coders are satisfied enough with C...

pjmlp said a year ago:

Not sure why I would use that instead of the Windows SDK proper.

https://learn.microsoft.com/en-us/cpp/assembler/masm/microso...

Regarding UNIX: due to the macro-assembler nature of K&R C, and the spread of UNIX source tapes, there was never an assembly culture like on the home computers, where we had an integrated experience of hardware and OS that defined the platform.

It is similar to how John Carmack describes the IDE culture from those of us that grew up with those platforms, versus UNIX.

anta40 said a year ago:

The MASM SDK itself uses various tools from VS (including the assembler "ml.exe"). What makes it more useful are the various macros/functions bundled in MASM32lib, which make programming Windows in assembly easier (this lib is still regularly updated).

If you don't like MS assembler, the good thing is you can use open source ML-compatible assemblers like:

- https://github.com/JWasm/JWasm

- https://github.com/nidud/asmc

- https://github.com/Terraspace/UASM

datpiff said a year ago:

> Nowadays I mostly work on Linux and Mac, and wonder why there's no equivalent project exist. Perhaps those Unix coders are satisfied enough with C...

I think a lot of the win32asm scene overlapped with defeating copy protection and making mods/cheats for games, not really Linux and Mac territory (at the time at least)

anta40 said a year ago:

And very likely for viruses/trojans etc., which unfortunately gives the project a bad reputation.

SeenNotHeard said a year ago:

One of the great video games of the late 90s, Rollercoaster Tycoon, was coded in assembly. Even back then, that was considered a feat.

matheusmoreira said a year ago:

> ‘porting’ this program to, say, FreeBSD, would only require to change those [system call numbers]

This is not correct. Only Linux has a stable kernel-userspace interface which allows you to depend on these numbers. In pretty much every other operating system, you are required to use the system libraries they provide. Compiling a program with these numbers hardcoded into them will cause them to break when the OS developers change the syscalls.

I wrote an article about this with more details and lots of citations:

https://www.matheusmoreira.com/linux/system-calls

toast0 said a year ago:

FreeBSD may not promise stable system calls, but it usually delivers.

The documented procedure for upgrades is: update the kernel, reboot, update userland, restart daemons as needed. That only works if the old userland can run on the new kernel, so usually it can. Certainly there have been exceptions, but in my experience they've all been fixed.

FreeBSD documentation includes a guide on assembly programming [1], which means libc isn't the only acceptable interface to the kernel.

Statically linked executables are supported by the included compiler, and those are expected to continue to work after kernel updates, which requires syscall stability.

Syscalls that take structures at risk of changing sizes generally include size as a parameter or within the structure, and the transitions to different sizes haven't always been easy (cpuset increases were rough in the past, but the next one will be better), but the syscall interface didn't change really.

[1] https://docs.freebsd.org/en/books/developers-handbook/x86/

matheusmoreira said a year ago:

That's just the thing... As far as I know, Linux is the only one which does make that promise. If user space programs break, it's a bug in the kernel.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

Other systems have the same "usually works but no promises" disposition which means they can't be relied upon. Things work fine until they don't. When breakage occurs they take no responsibility for them because they didn't promise anything to begin with.

Torvalds says today's Linux can run binaries from the 90s, that's how serious this guy is about ABIs. I've never seen other operating system developers claim anything of the sort. Not even Microsoft.

gigel82 said a year ago:

The title is confusing. What is a GUI from scratch? A bootloader / mini kernel with framebuffer? A win32 application?

Should probably be something like "Writing a Linux X11 application in assembly".

asveikau said a year ago:

> I will be using the Linux system call values, but ‘porting’ this program to, say, FreeBSD, would only require to change those values

Is that true? I remember ~20 years ago I was looking at the i386 syscall ABIs (since amd64 wasn't big then), and there, Linux syscalls passed arguments by register and FreeBSD passed them on the stack. Maybe for amd64, FreeBSD switched to pass by register on Intel, but I wouldn't assume a syscall ABI is such a quick and simple substitution.

bitshiffed said a year ago:

For amd64 they both use the same registers to pass arguments.

But, the BSD syscalls use the carry flag to indicate error, rather than the returned value of rax being negative. If your syscalls always succeed, and never return values within what would be a negative range as a signed value, then the code would run; but that's not exactly "portable".

troad said a year ago:

For anyone interested in x64 assembly, it's worth noting that a new edition of Jeff Duntemann's excellent and classic introductory book on assembly, now fully updated for x64, is sitting with his publishers and is likely to be out sometime around the summer.

Source: http://www.contrapositivediary.com/?m=20230222

samsquire said a year ago:

This is awesome, thanks for submitting and thanks to the author.

* I would like to understand the assembly used for exception handling. Does anybody know how exceptions work at an assembly level? (I am interested in algebraic effects)

* Need to create a closure in assembly.

* I have some assembly ported to GNU assembly based on a blog post whose website is down that executes coroutines.

Findecanor said a year ago:

Both of those topics are rabbit holes to fall down into and discover a whole lot. There is no one way to do either, and there are different conventions for different platforms, languages and compilers.

I'd suggest starting with the paper "Aspects of implementing CLU" from 1978, which covers both CLU's early type of exception handling and iterators, which are a form of closures. To find out how modern C++ - style exception handling is done, read the "Itanium C++ ABI" (yes, Itanium!), which most of the Unix world later used as the template for x86-64 and AArch64. Then look up "Zero overhead deterministic exceptions", a proposal for C++ that didn't get picked.

justinhj said a year ago:

Great to see a CLU mention here. There are a number of interesting papers and documents floating around, but it's rarely mentioned, presumably because it was always a research language and only used by a handful of people in industry. The parameterized type system has features only recently rediscovered in Rust and in C++23.

pjmlp said a year ago:

And that Go could have used all along, and they even acknowledged that.

> We would have been well-served to spend more time with CLU and C++ concepts earlier.

From https://go.googlesource.com/proposal/+/master/design/go2draf...

Another feature is the checked exceptions that everyone blames on Java, which were already present in Modula-2+, Modula-3 and C++, all of them inspired by CLU.

toast0 said a year ago:

> * I would like to understand the assembly used for exception handling. Does anybody know how exceptions work at an assembly level? (I am interested in algebraic effects)

Assembly doesn't really have a concept of exceptions. System-defined exceptions and handlers exist: if you're on x86 and running in protected mode, you can get a processor exception if you access a memory address that's not mapped for the type of access you're doing, and that functions more or less like an interrupt. If you're running under an operating system, the OS will handle it in some way, and maybe pass that information to your program in some way (or maybe just kill your program), but again, that's defined by the system you're on, so we can't say much in general. On some systems you can get an exception for math errors (divide by zero, overflow, etc.), on others you have to test for them; some systems will generate an exception for unaligned data access, some won't, etc.

> * Need to create a closure in assembly.

Again, this isn't really an assembly concept. You've got to define what a closure means to you, and then build that however you like. In my mind, a closure is more or less a function plus a list of variables; in assembly, I'd model that as the address of a function that takes several addresses as parameters. But passing parameters is up to you --- if you're calling your own functions, you don't need to follow any particular convention on parameter passing; it just needs to make sense to you, and be written in a way that does what you mean: the computer will do what you told it to, which isn't always what you meant.

sfink said a year ago:

"Awesome description of rocks. I would like to understand the rocks used for nuclear reactors."

samsquire said a year ago:

This is an amusing characterisation.

I would like to know how high level concepts map to assembly so I can understand how to compile to it.

I feel low level assembly gives so much freedom to decide how to do things.

I should probably get better at writing assembly so that I have inspiration on how to solve the high level things. But it's generations of technical ideas, solutions, implementation details and understanding I have to go through. I would like to understand exception handling to implement algebraic effects.

I also think structs are extremely useful and that it's amazing that sum types were invented.

saulpw said a year ago:

I would recommend writing a simple Forth "interpreter". Assembly is the easiest language to write a Forth interpreter/compiler in, it's not that difficult (on the order of 10 hours to get something working your first time, and 50-100 hours to implement some of the more subtle concepts), and it will blow your mind.

peterfirefly said a year ago:

> But it's generations of technical ideas, solutions, implementation details and understanding I have to go through. I would like to understand exception handling to implement algebraic effects.

and upstream:

> Need to create a closure in assembly.

"I would like to know how nuclear reactors work so I can build my own. I'd like to skip the Schrödinger Equation, differential equations, and linear algebra. And I want the nuclear reactor to run on thorium."

> I should probably get better at writing assembly

Yes.

Exceptions are not hard to understand once you know assembly language (any one of them). There are lots of blog posts you can look at. Algebraic effects are rather new and haven't been widely implemented yet ("thorium"). You most likely won't be able to find a pre-written document that spoon feeds you the implementation details.

A simple exceptions implementation uses a stack of "handlers". The code pushes and pops as it enters and leaves scopes. When an exception is raised, this stack is searched from top to bottom for a suitable handler (or maybe just the top handler is used). Precisely how depends on the implementation. The problem is that it's kinda slow to do all that work if exceptions are rare. Another implementation strategy is to have a table: the code address where the exception was raised is looked up in the table. That gives you the handler info. A bit more cumbersome if you have to handle linking. More so with dynamic linking. Even more so with run-time generated code.

Closures can be implemented in a billion and a half different ways. A common way is to allocate whatever local information ("captured variables") that the closure needs in a block on the heap. When the closure code is invoked, it gets a pointer to this block as a hidden parameter.

Of course, there are all sorts of code transformation tricks to complicate things...

Some you might run into often are transformations to and from SSA and CPS (+ the optimization transformations you can do on those):

https://en.wikipedia.org/wiki/Continuation-passing_style

https://en.wikipedia.org/wiki/Static_single-assignment_form

If you are interested in these things, you should really know some semantics and type theory:

https://en.wikipedia.org/wiki/Operational_semantics

https://en.wikipedia.org/wiki/Denotational_semantics

https://en.wikipedia.org/wiki/Type_theory#Technical_details

You do not have to know all the myriad variants, of course.

Learning a little assembly language is easy compared to semantics and type theory. Just learn your linear algebra and stop looking for shortcuts. It's like wanting to learn calculus while still being uneasy about fractions.

peterfirefly said a year ago:

Forgot to mention another kind of exception handling that is very common:

https://en.wikipedia.org/wiki/Microsoft-specific_exception_h...

https://learn.microsoft.com/en-us/cpp/cpp/structured-excepti...

Yes, Windows NT (and up) has a cross-language exception handling mechanism, separate from and in addition to whatever C++, Modula-3, etc have.

Croftengea said a year ago:

TL;DR: the article explains how to open a new window in X11 and print "Hello, world" in assembly. The asm code to achieve this is 618 lines long.

ripe said a year ago:

Thank you for summarizing! This adds a lot of color to the headline. I was imagining a framebuffer-based GUI.

pavlov said a year ago:

As a curiosity, it's worth mentioning there have been entire GUIs written in assembly. Probably the last commercially released one was GEOS a.k.a. GeoWorks Ensemble. It was a small and efficient GUI environment for x86 PCs, briefly somewhat popular as a Windows alternative around 1990.

Steve Yegge worked there and tells an interesting story. 15 million lines of hand-written x86 assembly!

http://steve-yegge.blogspot.com/2008/05/dynamic-languages-st...

"OK: I went to the University of Washington and [then] I got hired by this company called Geoworks, doing assembly-language programming, and I did it for five years. To us, the Geoworkers, we wrote a whole operating system, the libraries, drivers, apps, you know: a desktop operating system in assembly. 8086 assembly! It wasn't even good assembly! We had four registers! [Plus the] si [register] if you counted, you know, if you counted 386, right? It was horrible.

"I mean, actually we kind of liked it. It was Object-Oriented Assembly. It's amazing what you can talk yourself into liking, which is the real irony of all this. And to us, C++ was the ultimate in Roman decadence. I mean, it was equivalent to going and vomiting so you could eat more. They had IF! We had jump CX zero! Right? They had "Objects". Well we did too, but I mean they had syntax for it, right? I mean it was all just such weeniness. And we knew that we could outperform any compiler out there because at the time, we could!

"So what happened? Well, they went bankrupt. Why? Now I'm probably disagreeing – I know for a fact that I'm disagreeing with every Geoworker out there. I'm the only one that holds this belief. But it's because we wrote fifteen million lines of 8086 assembly language. We had really good tools, world class tools: trust me, you need 'em. But at some point, man...

"The problem is, picture an ant walking across your garage floor, trying to make a straight line of it. It ain't gonna make a straight line. And you know this because you have perspective. You can see the ant walking around, going hee hee hee, look at him locally optimize for that rock, and now he's going off this way, right?

"This is what we were, when we were writing this giant assembly-language system. Because what happened was, Microsoft eventually released a platform for mobile devices that was much faster than ours. OK? And I started going in with my debugger, going, what? What is up with this? This rendering is just really slow, it's like sluggish, you know. And I went in and found out that some title bar was getting rendered 140 times every time you refreshed the screen. It wasn't just the title bar. Everything was getting called multiple times.

"Because we couldn't see how the system worked anymore!"

...I have to say, the "140 redraws by accident" part sounds like an ordinary day in web UI development using 2023 frameworks. The problem of not seeing the entire picture of what's going on isn't limited to assembly programmers. You can start from the opposite end of the abstraction spectrum and end up with the same issues.

jaggederest said a year ago:

Roller Coaster Tycoon was almost entirely written in assembler by Chris Sawyer. Pretty amazing story, and it was released in 1999, well past the point when most people had stopped doing 100% assembler development.

https://en.wikipedia.org/wiki/RollerCoaster_Tycoon_(video_ga...

viler said a year ago:

A couple of GUIs written in assembly this century are MenuetOS and KolibriOS.

pjmlp said a year ago:

Not only GUIs, entire operating systems.

Besides the early computing days, the 8 and 16 bit home computers were mostly Assembly.

Even if Amiga, Atari and Archimedes had their share of BCPL, C and Modula-2, MS-DOS was fully implemented in Assembly.

cf100clunk said a year ago:

Early in the 1990s Photodex wrote their CompuPic photo management program in assembly. The shareware version of CompuPic was popular for creating, editing, and retouching low-end graphics as the web emerged.

masfuerte said a year ago:

I'm pretty sure the 90s painting app Xara Studio was also done in assembly.

mav88 said a year ago:

That wouldn't surprise me. The original Xara could render complex SVGs in under two seconds on a 486-66. The most optimized program I have ever used.

fuzztester said a year ago:

Two older assembly language programming books that I had checked out earlier, and thought were good, are ones by Randal Hyde and Paul Carter.

Both were for 32 bit assembly, not 64 bit, IIRC.

Paul Carter was a professor or lecturer at a US college.

I think his book was available online.

pacman128 said a year ago:

Paul Carter here. Yes, as someone already replied. It's online. I would have liked to update it to 64-bit, but I jumped to industry and don't have the time to do a decent job of it. I didn't realize that Randall had a 64-bit version out. I'm sure it's very good. We both used to hang out on comp.lang.asm.x86 back in the 90's.

fuzztester said a year ago:

Oh cool.

Thanks for making it available online.

fuzztester said a year ago:

>I think his book was available online.

http://pacman128.github.io/pcasm/#

Scroll down the page for the PDF book.

redsaz said a year ago:

Randal Hyde now has an Art of Assembly edition for 64-bit.

fuzztester said a year ago:

Cool, good to know.

herewulf said a year ago:

Excellent. Now how do I blit some pixels?

This takes me back about 30 years as a youngster discovering the magic ASM incantation to efficiently draw to the screen in DOS mode 0x13.

Havoc said a year ago:

I respect people doing this sort of thing but hard pass from me. Even my venturing into rust is plagued by doubts as to whether it’s too low level for productivity

bschwindHN said a year ago:

I would say Rust is quite high level. You can write entire, useful programs without once having to think about memory, pointers, allocation, etc.

Of course that won't be true for every program, but it's worlds away from asm.

Havoc said a year ago:

Agreed. Rust is already pretty comfortable.

Meant it more in the sense that asm to Rust is a jump, and then there is another jump from, say, Rust to Python. I'm sitting on the Rust rung and wondering whether that was the right choice, so I'm surprised people go one step further. (To each their own, ofc)

sylware said a year ago:

If you write a wayland compositor in x86_64 assembly... (vulkan+drm on elf/linux), without abusing a macro processor and without obscene code generators...

jiffygist said a year ago:

Some useful gui program examples for winapi

https://www.davidgrantham.com/

Solvency said a year ago:

In 2023, does anyone who writes a compiler inherently have to know assembly?

Or even less recently...whoever wrote the first Rust, Zig, or insert <new compiled language> here?

Because don't you ultimately have to know how to make your own syntax translate into efficient assembly code?

Or is there someway these days for programming language designers/creators to avoid it entirely?

gamache said a year ago:

Compiler writers can target high-level languages too; it's not uncommon to see e.g., a Blub-to-C compiler which leaves the asm parts to a different toolchain. (Lots of languages without the goal of producing native code target even higher-level languages, for example JS.)

Another popular way to _sort of_ avoid assembly is to target the LLVM IR (intermediate representation), in which case LLVM takes care of optimization and producing processor-specific machine code for a bunch of CPU types. But LLVM IR is basically a fancy assembly language.

dahfizz said a year ago:

Llvm abstracts the "backend" which generates the actual assembly for each target machine. You only have to write a "frontend" that generates an llvm intermediate representation.

But in general, yes. To generate assembly you need to know assembly.

Solvency said a year ago:

Is LLVM sufficiently "simpler" to learn and wield than assembly, or does it just make it easier to compile to different systems?

jcranmer said a year ago:

LLVM is definitely more complex than a toy assembly you might learn in an intro computer architecture course, but it's generally somewhat less complex than working with real assembly languages. Although the complexity in LLVM is a very different kind of complexity from assembly languages; LLVM is ultimately a higher-level abstraction than machine code, and the semantics of that abstraction can be complex in its own right.

mrlonglong said a year ago:

I just tried this out, it always dies after calling x11_send_handshake, at the point where it reads back 8 bytes. It seems it expects the first byte NOT to be zero.

...

cmp BYTE [rsp], 1

jnz die

How can I diagnose the issue? The article didn't dive into the matter of reading error codes.

mrlonglong said a year ago:

To follow up, the author really should have mentioned that access to X is controlled using cookies. For this program to work, one needs to temporarily allow any client access to X using 'xhost +'. To put access control back, use 'xhost -' to re-enable it.

lost_tourist said a year ago:

If you're going to learn assembly for the first time, I would say start with ARM64 assembly: the architecture is much more refined and the assembler much more pleasurable to code in, with fewer foot guns and complications, unless you are doing only the most basic of programs.

ndesaulniers said a year ago:

Writing a Linux application in _Intel_ x86 assembler syntax...smh. You do not know de wey

qayxc said a year ago:

It's a matter of personal taste. Some people (including myself) simply like Intel syntax better. As a sidenote, I find it quite fitting since Linux - unlike Unix - was "born" on Intel hardware after all :)

pjmlp said a year ago:

Intel Assembly should be written the way Lord Intel prescribed.

titzer said a year ago:

This is great! I'd like to write code to interface X11 without going through libx11 but I've not gotten around to reading the documentation around its binary format. This is a good start!

eschneider said a year ago:

You don't need assembly to do that. It's just another network app. :) Check out Adrian Nye's "X Protocol Reference Manual" to see how to talk X.

titzer said a year ago:

Sure, but any working starting point is a worthwhile read.

toast0 said a year ago:

If you want to be closer to X without reading the protocol documentation, you might look into xcb; it's much less abstraction than xlib.

maherbeg said a year ago:

Oh man, this brings me back to writing a hot key based application launcher in assembly for windows to learn assembly and the various tools for compiling and building things. Good times!

xurukefi said a year ago:

The "xor rax, rax" that I just saw at a quick glance makes me flinch. Still putting it on my reading list though. Sounds like a really interesting little toy project.

seritools said a year ago:

To explain the flinching (since I didn't catch it immediately):

> In 64-bit mode, still use `xor r32, r32`, because writing a 32-bit reg zeros the upper 32. `xor r64, r64` is a waste of a byte, because it needs a REX prefix.

(from https://stackoverflow.com/a/33668295/554577 )

satiric said a year ago:

Is there a practical reason to do this? I don't mean that disparagingly; it's a cool project and I can see its value. I'm just wondering if there's also a practical reason you might do something like this rather than just using Qt or HTML/CSS or whatever.

cubefox said a year ago:

Assembly "Hello World":

> just 600 lines of code

felcro said a year ago:

Please add a C implementation too to make it easier to understand what's going on by reading less code.

pkphilip said a year ago:

This is cool!

voidz7 said a year ago:

does this tutorial work on macos?

nmstoker said a year ago:

There are some pointers on that if you skim the tutorial.