Nicolas Hafner: GDC, fighting, and more - August Kandria Update

A big ole update for Kandria this month!


One of the main events this month was GDC. This was my first time attending, and I'm really glad I was offered the chance to do so with the great support of Pro Helvetia and SwissNex. Thank you very much again for all of your support this year!

The highlight of the event for me was the daily video networking sessions, where you were led onto a virtual floor with a bunch of tables. Each table could seat up to six people, and you'd chat over audio and video. It was really cool to hang out and chat with fellow attendees about... well, pretty much whatever!

What wasn't so cool about those sessions was that they only lasted an hour, at the end of which you were unceremoniously booted out, without even the ability to see who you had been at the table with, or to keep the accompanying text chat. I don't understand at all why that platform wasn't just up throughout the entire conference. It's not like there were ever that many people there to begin with; I never saw the total number rise above 120.

As for the booth we had at the Swiss country pavilion, that unfortunately turned out to be almost pointless. Nobody I talked to at the virtual meet was even aware that booths existed at all, let alone the ones in the country pavilions. Being buried under ten links all but guaranteed that we wouldn't be found except by the most vigilant. It isn't surprising, then, that we didn't get any contact requests through the booth's message box.

I'm really confused as to why they had practically zero visibility for the showfloors and booths. It seems to me that those are a rather large part of a usual conference, so advertising them so poorly is strange. I can't even imagine what SwissNex had to pay to get our booths set up, and I feel sorry for what they got for that.

I hope the presentation will be better next year; given the amount of money involved in the tickets and everything, I have to say I'm quite disappointed with the GDC organisers. We'll see how Gamescom goes next month, I suppose.

Combat developments

This month was meant to be devoted to combat revisions. I did get a number of those done, so movement is now a lot more satisfying and you can actually juggle enemies in the air:

However, due to the mass of feedback amassed from playtesting, the GDC interruptions, and my general reluctance to work on it, there hasn't been as much progress as I would like. I'll definitely have to revisit this later down the road. At the very least, though, I feel we now have the features required in the engine to make the system work well; it's "just" down to tweaking the parameters.

To give you some perspective on why tweaking of these parameters is a pain though, let me show you a screenshot of our in-house animation editor:

For each frame (the player has around 700 frames of animation at this point), you can set a mass of different properties that change combat behaviour and movement. Tweaking them one by one, performing the attack, going back to tweak it again, and repeating that is an extremely arduous process.

I tried my best to keep at it throughout the month, but the tedium of it made it quite difficult. I hope I can instead gradually improve it with time.

UI look and feel

I've taken some time to change the look and feel of the UI in the game. Most notably the button prompts now look a lot better than they used to. The fishing minigame shows that best:

The textbox also got a revision and it's now capable of doing text effects:

Aside from those tweaks, there was also a myriad of changes resulting from the further blind playtest sessions.

Eternia update

A new update for Eternia: Pet Whisperer has been rolled out. The update took two days to put together and includes important stability fixes for macOS and Windows users. On macOS, behaviour on Retina displays is improved, and audio output on devices with sample rates above 44.1kHz has been fixed. General performance has also been improved. On Windows, spurious crashes caused by antivirus software should no longer occur.

Eternia will also likely be on sale during the upcoming Gamescom Steam sale, so keep an eye out for that!


This month has been about press and polish. With GDC happening, I helped prioritise the press list and reach out to the most relevant people; we didn't get a huge response, which isn't unexpected (press are inundated at the best of times, never mind during GDC) - but it's been encouraging to get some replies; and most importantly, it's getting the game on peoples' radars.

I've also been working through tons of feedback Nick has gathered from blind playtests of the game, using it to make the quests easier to understand and more accessible: finessing spawn locations, clarifying objectives more in dialogue, etc. I ran a few tests myself, and it's amazing how things you thought were obvious sometimes don't translate to the player. A highlight was watching my brother return to the surface after the first quest: he didn't know the rest of the settlement was over to the right, and instead went left, all the way back to the tutorial area; since Catherine was following him at the time, it re-triggered all her tutorial dialogue! Noooooooooo! (But in a good way :) )

Nick also made some cool new features like random spawners that can add a myriad of world-building items to the levels (and which later you'll be able to sell to Sahil); I spent time organising and placing these. Also, text effects and colouring! I went a bit bananas with this at first, using the swirly rainbow effect right off the bat in the tutorial, when Catherine gets excited; to be fair it's probably the only point in the game I could've justified using it. But I've since toned things back, as although the game has quirk, we need to watch the tone and make sure we don't go too cartoony. I've also established a basic visual language for using these effects and colours (hint: sparingly).


Fred has been working on new sets of animations and some concept work. Some of the new animations are for expanded combat moves, so there's now a heavy and light charge attack:


Mikel finished a couple more tracks, well on ... track to make the September release date with all the music for the current areas included! These tracks are for the camp area, and as such convey a more chill atmosphere:

Aside from the music there have also been more audio changes, by way of a new team member:


I'm Cai, I'm a Sound Designer for video games. I've been in the industry for 2 years! I focus on making impactful, meaningful sounds that help to immerse the player in the world we create! Kandria is a really exciting project to be involved with! The story and atmosphere of the game are both compelling and exciting so I'm thrilled to have the opportunity to contribute to this through the sound!

Cai has been hard at work remaking the existing sounds and adding new ones. One of the bigger changes in that respect is the addition of atmospheric sound layers that now underlie each music track. The atmospherics and music change independently depending on the area the player is currently in.

It still needs some tuning, though. We need to make sure the loudness levels of each track work properly in conjunction with the atmospherics, and that the transitions between areas work smoothly without calling too much attention to themselves.
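To make the layering idea concrete, here's a hypothetical sketch of how independent fade-to-target mixing might look. None of the names below are from the actual Kandria or Harmony code; this is just an illustration of the technique:

```lisp
;; Hypothetical sketch, not the actual Kandria/Harmony API.
;; Each area contributes audio layers (music and atmosphere) whose
;; volumes fade toward a per-layer target independently.
(defclass audio-layer ()
  ((volume    :initform 0.0 :accessor volume)
   (target    :initform 0.0 :accessor target)
   (fade-rate :initform 0.5 :accessor fade-rate)))

(defun update-layers (layers dt)
  "Step each layer's volume toward its target by FADE-RATE * DT."
  (dolist (layer layers)
    (let ((step (* dt (fade-rate layer))))
      (setf (volume layer)
            (if (< (volume layer) (target layer))
                (min (target layer) (+ (volume layer) step))
                (max (target layer) (- (volume layer) step)))))))
```

Entering a new area then only changes each layer's target; the per-frame update produces a smooth cross-fade automatically, which is what makes the area transitions unobtrusive.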


Let's look at the roadmap from last month with the updates from this month:

  • Polish the ProHelvetia submission material

  • Polish and revise the combat design

  • Finish the desert music tracks

  • Further blind playtesting and bugfixing

  • Revise the general UI look and feel

  • Implement a main menu, map, and other UI elements (partially done)

  • Create and integrate new sound effects for 90% of the interactions in the game

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Start work on the horizontal slice

I'm quite confident I can get all the UI stuff done in time, so we should be well set for the Pro Helvetia submission!

Until the submission is done, be sure to check our mailing list for weekly updates, and the discord community for fan discussions!

Planet Lisp | 01-Aug-2021 11:48

Michał Herda: Common Lisp Recipes, 2nd Edition

Let's talk a little about the second edition of Edi Weitz's Common Lisp Recipes! What would you like to see added or changed in it? What problems have you possibly faced that could be described in a new recipe?

Please let me know via mail, Fediverse, IRC (phoe at Libera Chat), or, if you absolutely have to, Twitter.

Planet Lisp | 29-Jul-2021 21:30

McCLIM: Progress report #12

Dear Community,

A lot of time has passed since the last blog entry. I'm sorry for neglecting this. In this post, I'll try to summarize the past two and a half years.

Finances and bounties

Some of you might have noticed that the bounty program has been suspended. The BountySource platform lost our trust around a year ago when they changed their ToS to include:

If no Solution is accepted within two years after a Bounty is posted, then the Bounty will be withdrawn, and the amount posted for the Bounty will be retained by Bountysource.

They quickly retracted that change, but the trust was already lost. Soon after, I suspended our account, and all donations through the platform were stopped. BountySource refunded all our pending bounties.

All paid bounties were summarized in previous posts. Between 2016-08-16 and 2020-06-16 (46 months of donations) we collected $18,700 in total. The BountySource commission was 10%, collected upon withdrawal; all amounts mentioned below are given before the commission was deducted.

During that time, $3,200 was paid to bounty hunters who solved various issues in McCLIM. The bounty program was a limited success: the solutions that were contributed were important, but the harder problems with bounties went unsolved. That said, a few developers who contribute to McCLIM today joined in the meantime, and that might be partially thanks to the bounty program.

When the fundraiser was announced, I declared I would withdraw $600 monthly from the project account. In the meantime I had a profitable contract, and for two years I stopped withdrawing money. Over the remaining three years I withdrew $15,500 ($440/month) from the account.

As of now we don't have funds and there is no official way to donate money to the project (however, this may change in the near future). I hope that this summary is sufficient regarding the fundraiser. If you have further questions, please don't hesitate to contact me, and I'll do my best to answer them.


The last update was on 2018-12-31. A lot of changes have accumulated in the meantime.

  • Bordered output bug fixes and improvements -- Daniel Kochmański
  • Gadget UX improvements (many of them) -- Jan Moringen
  • Text styles fixes and refactor -- Daniel Kochmański
  • Freetype text renderer improvements -- Elias Mårtenson
  • Extended input stream abstraction rewrite -- Daniel Kochmański
  • Implementation of presentation methods for dialog-views -- admich
  • Encapsulating stream missing methods implementation -- admich
  • indenting-output-stream fixes -- Jan Moringen
  • drawing-tests demo rewrite -- José Ronquillo Rivera
  • Line wrap on the word boundaries -- Daniel Kochmański
  • New margin implementation (extended text formatting) -- Daniel Kochmański
  • Presentation types and presentation translators refactor -- Daniel Kochmański
  • Input completion and accept methods bug fixes and reports -- Howard Shrobe
  • Clipboard implementation (and the selection translators) -- Daniel Kochmański
  • CLIM-Fig demo improvements and bug fixes -- Christoph Keßler
  • The pointer implementation (fix the specification conformance) -- admich
  • Drei kill ring improvements -- Christoph Keßler
  • McCLIM manual improvements -- Jan Moringen
  • Frame icon and pretty name change extensions -- Jan Moringen
  • Cleanups and extensive testing -- Nisar Ahmad
  • pointer-tracking rewrite -- Daniel Kochmański
  • drag-and-drop translators rewrite -- Daniel Kochmański
  • Complete rewrite of the inspector Clouseau -- Jan Moringen
  • Rewrite of the function distribute-event -- Daniel Kochmański and Jan Moringen
  • Adding new tests and organizing them in modules -- Jan Moringen
  • Various fixes to the delayed repaint mechanism -- Jan Moringen
  • CLX backend performance and stability fixes -- Christoph Keßler
  • PS/PDF/Raster backends cleanups and improvements -- Jan Moringen
  • Drei regression fixes and stability improvements -- Nisar Ahmad
  • Geometry module refactor and improvements -- Daniel Kochmański
  • Separating McCLIM code into multiple modules -- Daniel Kochmański and Jan Moringen
  • Frames and frame managers improvements -- Jan Moringen and Daniel Kochmański
  • Frame reinitialization -- Jan Moringen
  • PDF/PS backends functionality improvements -- admich
  • Menu code cleanup -- Jan Moringen
  • Pane geometry and graph formatting fixes -- Nisar Ahmad
  • Numerous CLX cleanups and bug fixes -- Daniel Kochmański and Jan Moringen
  • Render backend stability, performance and functionality fixes -- Jan Moringen
  • Presentation types more strict interpretation -- Daniel Kochmański
  • External Continuous Integration support -- Jan Moringen
  • Continuous Integration support -- Nisar Ahmad
  • Improved macros for recording and table formatting -- Jan Moringen
  • Better option parsing for define-application-frame -- Jan Moringen
  • Separation between the event queue and the stream input buffer -- Daniel Kochmański
  • Examples cleanup -- Jan Moringen
  • Graph formatting cleanup -- Daniel Kochmański
  • Stream panes defined in define-application-frames refactor -- admich
  • Menu bar rewrite (keyboard navigation, click to activate) -- Daniel Kochmański
  • Thread-safe execute-frame-command function -- Daniel Kochmański
  • Mirroring code simplification for clx-derived backends -- Daniel Kochmański
  • Arbitrary native transformations for sheets (i.e. zooming) -- Daniel Kochmański
  • extended-streams event matching improvements -- Jan Moringen
  • Render backend performance improvements -- death
  • drei fixes for various issues -- death
  • drei various cleanups -- Jan Moringen
  • clim-debugger improvements -- Jan Moringen
  • Manual spelling fixes and proofreading -- contrapunctus

This is not an exhaustive list of changes. For more details, please consult the repository history. Many changes I introduced during this iteration were subject to careful (and time-consuming) peer review by Jan Moringen, which resulted in better code quality. The continuous integration provided by Nisar Ahmad has definitely made my life simpler. I'd like to thank all contributors for their time and energy spent on improving McCLIM.

Pending work

If you are working on some exciting improvement for McCLIM which is not ready, you may make a "draft" pull request in the McCLIM repository. Currently, there are three such branches:

  • the SLIME-based backend for CLIM by Luke Gorrie

  • the dot-based graph layout extension by Eric Timmons

  • the xrender backend by Daniel Kochmański

Other than that, I've recently implemented a polygon triangulation algorithm that is meant to be used in the xrender backend (but could be reused, e.g. for OpenGL). Currently, I'm refining the new rendering for clx (xrender). After that, I want to introduce portable double buffering and a new repaint queue. Once these things are in place and extensively tested, I want to roll out a new release of McCLIM.

Sincerely yours,
Daniel Kochmański

Planet Lisp | 13-Jul-2021 02:00

Nicolas Hafner: Testing events - July Kandria Update

Another month filled with a lot of different stuff! We have a lot of conferences coming up, there was a bunch to do for marketing, the game has seen a lot of visual and gameplay tweaks, and we've started doing a lot of direct playtesting. Finally, the music has also made a lot of progress and the first few tracks are now done!

GDC, Gamescom, and GIC

Thanks to the very generous support from Prohelvetia we're part of the Swiss Games delegation to GDC, Gamescom, and GIC. GDC is coming up this month, and we have our own virtual booth set up for that. Given that both GDC and Gamescom are virtual this year, I honestly don't really know what to expect from them, it's going to be quite different. At least GIC is in person (Poznan, Poland) so I'm really looking forward to that!

So far we've invested quite a bit of time into looking at journalists to reach out to during GDC, and who knows, perhaps we'll also be contacted by publishers or something during the event. In any case, it's going to be an exciting week, for sure.

Gamescom/Devcom are coming up in August, right before the submission deadline for the Prohelvetia grant, so that's going to be a tight squeeze, too. September is gonna be a calmer month, as we've settled for a two weeks holiday for the team during that month. Then in October it's going to be GIC for a week.


Also thanks to Prohelvetia we now have direct mentoring from Chris Zukowski with a monthly meeting for the next six months. His first advice was to focus a lot more on top of the funnel marketing (like imgur, reddit, festivals, influencers), and cut down on all the middle stuff we've been doing (like discord, mailing list, blogs, streams).

I actually quite like doing the weekly and monthly updates though, so I'm going to keep doing those. The Sunday streams have also proven quite productive for me, so I'll use them for that purpose too, rather than for any marketing intent. I am going to cut down drastically on Twitter though, as that does not seem to bring us much of anything at all, and I'm going to stay away from Discord more in general (not just the Kandria one).

After some brief experiments with imgur (and once even making it into most viral) I haven't returned to that yet though, as I've found it to be quite exhausting to figure out what to even post and how to post it.

We'll definitely have to try reaching out to influencers and journalists, but we're going to hold off on that until the September demo update is done, as a more polished demo should help a lot to make us look more presentable and respectable. We should also try out Reddit, but we haven't done the necessary research yet into how exactly to post there without getting downvoted into oblivion.

The second piece of advice Chris had for us was to do more....


We've started inviting people to do blind playtesting, "blind" meaning that they have never played the game before. This gives us quite valuable insight into parts of the game that are confusing or annoying. We've done four sessions so far, and even with them being carried out over Discord, the feedback has been very useful.

I've also been inviting people for local playtesting, as being able to observe people in person is a lot better than doing it over the net. We haven't done any of that yet, but there's several appointments scheduled already. If you're near Zürich and would be up for a playtest session, please book a date!

And if you, like most, aren't close to Zürich, but would still like to help us out with testing, let me know anyway and we can arrange something over the net!


We finally finished a tutorial area for the game, giving it a proper starting point, and introducing people to the controls. It doesn't explain all of them in detail though, as I think we should instead keep the more intricate controls to challenges throughout the game, rather than trying to teach everything at once.

Designing the levels to teach the various control parts is going to be challenging, but I think ultimately it is going to be worth it, especially as it allows us to keep the tutorial in the beginning very short.

The only actual tutorial part still missing is a combat primer, for which we haven't worked out a good way of teaching it yet. I'm sure we'll find a way, though.


I took a holiday for part of this month, though otherwise it's been full steam ahead on the marketing research. I've added more reference games and potential contacts to our press & influencer document; while doing this I've taken into account the advice we got from Chris Zukowski, about not looking too closely at games with distinct differences to Kandria (e.g. Celeste has no combat; Dead Cells is a roguelike). I've also done more research on Steam short descriptions, and worked with Nick to redraft ours. It needs a few more edits, but it's nearly there.

I haven't totally forgotten the game though! With the new tutorial prologue in place, I added dialogue. This is now your first meeting with Catherine, after she reactivates you deep underground; but I've deliberately kept things short and sweet, as she guides you back up to the surface. The player has enough to think about at this stage without getting overwhelmed with dialogue; only once the tutorial segues into the settlement introduction from before, does the dialogue really begin.


Mikel's been very busy and completed 6 different versions of the region 1 track:

Which the game can use to do vertical mixing:


Let's look at the roadmap from last month with the updates from this month:

  • Build and add the tutorial sequence to the beginning of the game

  • Finish the region 1 music tracks

  • Add a fishing minigame

  • Improve the UI for the game and editor

  • Implement several editor improvements

  • Compile data on journalists, influencers, and communities

  • Implement a UI animation system

  • Implement a cutscene system

  • Polish the ProHelvetia submission material (partially done)

  • Polish and revise the combat design (partially done)

  • Implement a main menu, map, and other UI elements

  • Reach out to journalists, streamers, and other communities

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Start work on the horizontal slice

The first three are scheduled to be done by September 1st, and so far it looks quite doable. The events are going to jumble things up a bit, but I hope that we still have enough time scheduled around to get it all done in time.

Until then be sure to check our mailing list for weekly updates, and the discord community for fan discussions!

Planet Lisp | 03-Jul-2021 21:08

Quicklisp news: June 2021 Quicklisp dist update now available

 New projects

  • cl-megolm — A copy of the python functionality provided as bindings for Olm. — MIT
  • cl-openapi-parser — OpenAPI 3.0.1 and 3.1.0 parser/validator — MIT
  • cl-opencl-utils — OpenCL utility library built on cl-opencl — GPLv3
  • cl-sse — Use sse-server + a web service to serve SSE events to a browser. — MIT
  • trivial-ed-functions — A simple compatibility layer for *ed-functions* — MIT
  • trivial-inspector-hook — A simple compatibility layer CDR6 — MIT
  • webapi — CLOS-based wrapper builder for Web APIs — BSD 2-Clause
  • whirlog — a minimal versioned log structured relational DB — MIT

Updated projects: also-alsa, april, atomics, bdef, binding-arrows, bp, chirp, cl+ssl, cl-ana, cl-collider, cl-conllu, cl-cxx-jit, cl-data-structures, cl-environments, cl-form-types, cl-gamepad, cl-gserver, cl-heredoc, cl-incognia, cl-ipfs-api2, cl-kraken, cl-maxminddb, cl-mixed, cl-mock, cl-murmurhash, cl-naive-store, cl-ntp-client, cl-opencl, cl-patterns, cl-schedule, cl-smt-lib, cl-string-generator, cl-torrents, cl-utils, cl-webkit, clack-pretend, closer-mop, cluffer, clunit2, clx, cmd, common-lisp-jupyter, conium, consfigurator, core-reader, croatoan, defmain, deploy, dexador, djula, doc, easy-routes, eclector, fiveam-asdf, fresnel, functional-trees, gendl, generic-cl, gute, harmony, herodotus, hunchentoot-multi-acceptor, hyperluminal-mem, iolib, lack, lichat-protocol, lichat-tcp-client, lispqr, markup, mcclim, md5, mito, mnas-package, mnas-string, modularize-interfaces, multiposter, neural-classifier, numerical-utilities, nyxt, origin, osmpbf, overlord, plot, plump, portal, postmodern, py4cl2, qlot, quilc, quri, qvm, re, replic, sc-extensions, sel, serapeum, shasht, shop3, sly, smart-buffer, special-functions, spinneret, st-json, static-dispatch, static-vectors, stumpwm, sxql, tailrec, tfeb-lisp-hax, tooter, trivia, trivial-with-current-source-form, trucler, vellum, vk, wasm-encoder, woo, zippy.

Removed projects: with-c-syntax.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!
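For completeness, here's a quick sketch of running the update and then confirming which dist version you ended up on. The `ql-dist` introspection calls are part of standard Quicklisp; the version string shown is only illustrative:

```lisp
;; Fetch the new dist, then verify the installed version.
(ql:update-dist "quicklisp")
(ql-dist:version (ql-dist:dist "quicklisp"))
;; => a date-like string such as "2021-06-30"
```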

Planet Lisp | 01-Jul-2021 23:47

Nicolas Hafner: Updates Galore - June Kandria Update

There's a lot of different news to talk about this month, so strap in!

The New Trailer

The most important thing to come out of this month is the new trailer! Check it out if you haven't yet:

I'm overall really happy with how it came together, and we all had a part in the end result. I'd also like to give a special commendation to Elissa Park who did the amazing voice over for the trailer. It was a pleasure to work together!

It's also been great to finally get some custom music by Mikel into an official part of the game. He's also been working on the first music tracks that'll be in the game, and I've been working on a music system to support horizontal mixing with the tracks. I'm very excited to get all that together into the game and see how it all feels! I hope that by next month's update we'll have a short preview of that for you.

0.1.1 Release

Meanwhile we also pushed out an update to the vertical slice release that makes use of the new linear quest system we put together. It should overall also be a lot more stable and includes many fixes for issues people reported to us. Thanks!

As always, if you want to have a look at the demo yourself, you can do so free of charge.

I think this will be the last patch we put out until September. I can't afford to backport fixes even if more bug reports come in, as the overhead of managing that is just too high. I can't just push out new versions that follow internal development either, as those are frequently in flight and have more regressions that we typically stamp out over time, but would in the meantime provide a more buggy experience.

We started working on fishing just recently!

Dev Streams

I'm heavily considering doing regular weekly development streams, both to see if we can attract some more interest for the project, and to be more open about the process in general. I feel like we're already very open about everything with our weekly updates, but having an immediate insight into how the game is made is another thing entirely. I think it would be really cool to show that side of development off more often!

In order to coordinate what time would suit the most people, please fill out this Doodle form. The exact dates don't matter, just watch for the day of the week and the time. Don't worry about the name it asks for either, it won't be public!

I'll probably close the poll in a week, so make sure to submit an answer soon if you're interested. Streams will happen on both channels, with both being reachable through the official stream page. See you there!

Palestinian Aid Bundle

Some good folks have put together a bundle gathering money for Palestinian aid. I'm very happy to say that our game Eternia: Pet Whisperer is part of this bundle!

If you want to support this cause and get a huge collection of amazing games in the process, head on over!


This month I had a varied mix of tasks: working on the script for the new trailer; updating the quests in the vertical slice demo to work closer to our original vision; researching press and influencer contacts as we plan more of Kandria's marketing.

The trailer came out great, and I'm really happy with the voice acting that Nick produced with Elissa Park. It was a great idea Nick had to use the character of Fi as the narrator here (originally we were going to use Catherine) - her serious outlook, and reflection on the events of the story, was just what we needed to fit the epic music from Mikel, and the epic gameplay and exploration that Nick captured on screen. The whole thing just screams epic.

The quests were vastly improved too. The first quest now uses Nick's new sequencing system, so that triggers fire automatically (and more robustly) when the player arrives at the correct location, and when combat encounters are completed. The logic is also much quicker to write, so linear quests will be much faster to produce in the future. The mushroom quest also had a big refactor; now you can organically collect mushrooms out in the world, rather than going to specific enabled points. You can even sell what you find to the trader, including those poisonous black knights. It really makes the world feel more interactive. There's been general tweaks to the other quests from playtesting, and I'll continue to refine them from my own playing and players' feedback up until the Pro Helvetia submission later in the year. I'm also planning to add a couple more sidequest diversions based on the new fishing (!) minigame being added at the moment; we think a combat-focused sidequest will work well too.

Finally on the marketing side, it's been rewarding to collect tons of potential press and influencer contacts we could approach in the future. I've basically been taking games that are strong influences and have similarities to Kandria - from hugely popular games like Celeste and Dead Cells, to lesser known ones like Kunai and Armed with Wings - then cataloguing key journalists and influencers who've streamed, made videos, or written about them. This will hopefully highlight some of the right people we can contact to help spread the word about the game.


Let's look at the roadmap from last month with the updates from this month:

  • Make the combat more flashy

  • Finish a new trailer

  • Revise the quest system's handling of linear quest lines

  • Design and outline the factions for the rest of the game

  • Develop the soundscape for Kandria and start working on the tracks for region 1

  • Add a music system that can do layering and timed transitions

  • Build and add the tutorial sequence to the beginning of the game

  • Finish the region 1 music tracks

  • Reach out to journalists, streamers, and other communities

  • Polish the ProHelvetia submission material

  • Polish and revise the combat design

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Start work on the horizontal slice

As always there's some smaller tasks that aren't in the overall roadmap. We seem to be doing pretty well keeping on track with what needs done, which is really good! It's all too easy to misjudge the time required to complete things, especially in games.

In any case, time is flying fast, and there's a lot to do. In the meantime be sure to check our mailing list for weekly updates, and the discord community for fan discussions!

Planet Lisp | 06-Jun-2021 17:49

Eric Timmons: ASDF 3.3.5 Release Candidate

ASDF has been tagged. This is a release candidate for 3.3.5. As the announcement says, please give it a spin on your setup and report any regressions. Bugs can be reported to the Gitlab issue tracker (preferred) or to the asdf-devel mailing list.


The full(ish) Changelog can be found here.

In addition to assorted bug fixes, there are several new features. Both user facing:

  • Support for package local nicknames in uiop:define-package.
  • SBCL should now be able to find function definitions nested in the with-upgradability macro.
  • package-inferred-system source files can use extensions other than .lisp.

And developer facing:

  • Building out a fairly extensive CI pipeline.
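As a quick illustration of the package-local-nickname support mentioned under the user-facing items, a hedged sketch: the package and nickname names below are invented, and the feature only works on implementations that support local nicknames (e.g. SBCL):

```lisp
;; UIOP's define-package now accepts :local-nicknames, so ALX below
;; refers to ALEXANDRIA only from within MY-APP/UTILS.
(uiop:define-package #:my-app/utils
  (:use #:cl)
  (:local-nicknames (#:alx #:alexandria)))

(in-package #:my-app/utils)

;; (alx:flatten '((1 2) (3)))  ; ALX resolves to ALEXANDRIA here
```

Other packages are unaffected by the nickname, which is the whole point: short local names without claiming a global nickname.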

This is planned to be the last release in the 3.3 series. We are excited to get this out the door because we already have several focal points for the 3.4 series in mind, including:

  • Support for more expressive version strings and version constraints. issue draft MR.
  • A new package defining form that is explicitly designed to better tie in with package-inferred-system. issue draft MR.

Please join in the conversation if any of these features excite you, you have features you'd like to see added, or you have bugs that need to be squashed.

Planet Lisp | 04-Jun-2021 03:55

Michał Herda: Current Common Lisp IRC situation

Because of the upheaval at Freenode, I've migrated to Libera Chat along with a bunch of other Lisp programmers. We have used that as a chance to make some small changes to the channel structure:

  • #commonlisp is the on-topic Common Lisp channel (formerly #lisp),
  • #lisp is the somewhat on-topic discussion about all Lisp dialects (formerly ##lisp),
  • the rest of the channel names should work the same.

The first two channels are called out explicitly because #lisp on Freenode used to have a non-trivial volume of people asking questions about Scheme or Emacs Lisp due to its too-generic name. Naming the Common Lisp channel #commonlisp resolves this issue, at the cost of sacrificing a lucrative and attractive five-character channel name.

Planet Lisp | 01-Jun-2021 14:19

Quicklisp news: May 2021 Quicklisp dist update now available

 New projects: 

  • adopt-subcommands — Extend the Adopt command line processing library to handle nested subcommands. — MIT
  • cl-cerf — Lisp wrapper to libcerf — Public Domain
  • cl-cxx-jit — Common Lisp Cxx Interoperation — MIT
  • cl-form-types — Library for determining types of Common Lisp forms. — MIT
  • cl-incognia — Incognia API Common Lisp Client — MIT
  • cl-info — A helper to an answer a question about OS, Lisp and Everything. — BSD
  • cl-mimeparse — Library for parsing MIME types, in the spirit of, with a Common Lisp flavor. — MIT
  • cl-opencl — CFFI for OpenCL and Lisp wrapper API — Public Domain
  • cl-schedule — cl-schedule is a cron-like scheduling library in common-lisp. It subsumes and replaces traditional cron managers thanks to richer expressiveness of Lisp. — MIT
  • cl-vorbis — Bindings to stb_vorbis, a simple and free OGG/Vorbis decoding library — zlib
  • claw-olm — Thin wrapper over OLM — MIT
  • context-lite — A CLOS extension to support specializing methods on special/dynamic variables. — MIT
  • defmain — A wrapper around net.didierverna.clon which makes command line arguments parsing easier. — BSD
  • defrest — defrest: expose functions as REST webservices for ajax or other stuff — BSD
  • doc — Documentation generator, based on MGL-PAX. Allows to put documentation inside lisp files and cross-reference between different entities. — MIT
  • ec2-price-finder — Quickly find the cheapest EC2 instance that you need across regions — BSD-3-Clause
  • file-notify — Access to file change and access notification. — zlib
  • fresnel — Bidirectional translation with lenses — MIT
  • log4cl-extras — A bunch of addons to LOG4CL: JSON appender, context fields, cross-finger appender, etc. — BSD
  • mnas-package — The @b(mnas-package) system is intended for preparing documentation extracted from asdf systems. @begin(section) @title(Motivation) The @b(Codex) system is convenient enough for documenting systems written in @b(Common Lisp), and produces documentation of acceptable quality. A drawback of @b(Codex) is that the documentation template is not generated automatically: the directives that include documentation sections for the individual entities, such as @begin(list) @item(systems;) @item(packages;) @item(classes;) @item(functions, setf functions;) @item(generic functions, methods, setf methods;) @item(macros;) @item(and so on) @end(list) have to be written by hand. This project tries to remedy that shortcoming of @b(Codex) by defining functions and methods that allow you to: @begin(list) @item(generate the code to be passed to @b(Codex);) @item(represent individual parts of a system as graphs.) @end(list) @end(section) — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • mnas-string — The @b(mnas-string) system provides: @begin(list) @item(parsing of real numbers;) @item(splitting a string into substrings;) @item(replacing all occurrences of a substring in a string;) @item(collapsing repeated occurrences of a pattern into a single one;) @item(preparing a string for use as an argument to an SQL LIKE query;) @item(wrapping a string in a prefix and postfix;) @item(printing a date/time representation to a stream or string;) @item(transliterating a string.) @end(list) — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • osmpbf — Library to read OpenStreetMap PBF-encoded files. — MIT
  • plot — Plots for Common Lisp — MS-PL
  • scheduler — Extensible task scheduler. — BSD-2-Clause
  • speechless — A dialogue system language implementation. — zlib
  • stumpwm-sndioctl — Interface to OpenBSD's sndioctl for StumpWM. — ISC
  • vellum — Data Frames for Common Lisp — BSD simplified
  • vellum-clim — Simplistic vellum data frames viewer made with mcclim. — BSD simplified
  • vellum-csv — CSV support for Vellum Data Frames — BSD simplified
  • vellum-postmodern — Postgres support for Vellum Data Frames (via postmodern). — BSD simplified
  • vk — Common Lisp bindings for the Vulkan API. — MIT
  • wasm-encoder — Library for serializing WebAssembly modules to binary .wasm files — MIT

Updated projects: adopt, agutil, also-alsa, anypool, april, architecture.builder-protocol, async-process, audio-tag, bdef, bnf, burgled-batteries.syntax, caveman, chirp, cl+ssl, cl-ana, cl-argparse, cl-async, cl-collider, cl-covid19, cl-cuda, cl-data-frame, cl-data-structures, cl-environments, cl-forms, cl-gamepad, cl-glfw3, cl-gserver, cl-kraken, cl-liballegro, cl-liballegro-nuklear, cl-markless, cl-mixed, cl-naive-store, cl-num-utils, cl-patterns, cl-pdf, cl-prevalence, cl-rfc4251, cl-sendgrid, cl-slice, cl-smt-lib, cl-ssh-keys, cl-steamworks, cl-str, cl-tiled, cl-typesetting, cl-utils, cl-webkit, clack, clack-pretend, clath, clavier, clog, closer-mop, cmd, coleslaw, common-lisp-jupyter, configuration.options, consfigurator, croatoan, cytoscape-clj, damn-fast-priority-queue, data-frame, dataloader, defconfig, definitions, deploy, dfio, diff-match-patch, dissect, djula, dns-client, doplus, dufy, duologue, easy-routes, eclector, erudite, file-attributes, flare, fmt, functional-trees, gendl, generic-cl, golden-utils, gute, harmony, herodotus, hu.dwim.presentation, hunchenissr, hunchensocket, hunchentoot-errors, iterate, kekule-clj, lack, language-codes, lichat-protocol, lisp-stat, literate-lisp, magicffi, maiden, math, mcclim, messagebox, mnas-graph, mnas-hash-table, multiposter, mutility, named-readtables, nibbles, nodgui, numcl, numerical-utilities, nyaml, nyxt, overlord, parseq, pathname-utils, plump-sexp, plump-tex, portable-threads, postmodern, pzmq, qlot, qt-libs, rpcq, sb-cga, sc-extensions, screamer, sel, serapeum, shadow, shop3, specialized-function, spinneret, split-sequence, static-dispatch, static-vectors, stumpwm, sxql, ten, tfeb-lisp-hax, trivial-indent, trivial-timer, trivial-with-current-source-form, trucler, uax-15, vgplot, wild-package-inferred-system, with-c-syntax.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Planet Lisp | 31-May-2021 20:12

Pavel Korolev: :claw honing - Beta milestone and alien-works

Long time no see, and no C++ autowrapping rants either. But several bindings later, :claw has reached the stage where it is ready to leave the garage and see the outside world. Since my last post, some approaches were revised, though the autowrapping process is still not fully cemented. I wouldn't expect :claw to be out of beta for at least a year. That doesn't mean it is unusable, but rather that I cannot guarantee a stable interface and a trivial setup procedure.

In other big news, the :alien-works system got all required foreign libraries wrapped and integrated, including some complex and peculiar C++ ones (Skia). The next step is to write a game based on the :alien-works framework, to see how much lispification of autowrapped systems is possible without losing performance, and what is required for a solid game delivery.

Planet Lisp | 30-May-2021 02:00

Joe Marshall: Stupid Y operator tricks

Here is the delta function: δ = (lambda (f) (f f)). Delta takes a function and tail calls that function on itself. What happens if we apply the delta function to itself? Since the delta function is the argument, it is tail called and applied to itself. Which leads again to itself being tail called and applied to itself. We have a situation of infinite regression: the output of (δ δ) ends up being a restatement of the output of (δ δ). Now in this case, regression is infinite and there is no base case, but imagine that somehow there were a base case, or that somehow we identified a value that an infinite regression equated to. Then each stage of the infinite regression just replicates the previous stage exactly. It is like having a perfectly silvered mirror: it just replicates the image presented to it exactly. By calling delta on delta, we've arranged our perfectly silvered mirror to reflect an image of itself. This leads to the “infinite hall of mirrors” effect.

So let's tweak the delta function so that instead of perfectly replicating the infinite regression, it applies a function g around the replication: (lambda (f) (g (f f))). If we apply this modified delta function to itself, each expansion of the infinite regression ends up wrapping an application of the g around it: (g (f f)) = (g (g (f f))) = (g (g (g (f f)))) = (g (g (g (g … )))). So our modified delta function gives us a nested infinite regression of applications of g. This is like our perfectly silvered mirror, but now the reflected image isn't mirrored exactly: we've put a frame on the mirror. When we arrange for the mirror to reflect itself, each nested reflection also has an image of the frame around the reflection, so we get a set of infinitely nested frames.

An infinite regression of (g (g (g (g … )))) is confusing. What does it mean? We can untangle this by unwrapping an application. (g (g (g (g … )))) is just a call to g. The argument to that call is weird, but we're just calling (g ). The result of the infinite regression (g (g (g (g … )))) is simply the result of the outermost call to g. We can use this to build a recursive function.

    ;; If factorial = (g (g (g (g … )))), then
    ;; factorial = (g factorial), where
    (defun g (factorial)
      (lambda (x)
        (if (zerop x)
            1
            (* x (funcall factorial (- x 1))))))

The value returned by an inner invocation of g is the value that will be funcalled in the alternative branch of the conditional.

Y is defined thus:

Y = λg.(λf.g(f f))(λf.g(f f))

A straightforward implementation attempt would be

    ;; Non working y operator
    (defun y (g)
      (let ((d (lambda (f)
                 (funcall g (funcall f f)))))
        (funcall d d)))

but since Lisp is a call-by-value language, it will attempt to (funcall f f) before funcalling g, and this will cause runaway recursion. We can avoid the runaway recursion by delaying the (funcall f f) with a strategically placed thunk:

    ;; Call-by-value y operator
    ;; returns (g (lambda () (g (lambda () (g (lambda () … ))))))
    (defun y (g)
      (let ((d (lambda (f)
                 (funcall g (lambda () (funcall f f))))))
        (funcall d d)))

Since the recursion is now wrapped in a thunk, we have to funcall the thunk to force the recursive call. Here is an example where we see that:

    * (funcall (Y (lambda (thunk)
                    (lambda (x)
                      (if (zerop x)
                          1
                          (* x (funcall (funcall thunk) (- x 1)))))))
               6)
    720

The (funcall thunk) invokes the thunk in order to get the actual recursive function, which we then funcall on (- x 1).

By wrapping the self-application with a thunk, we've made the call site where we use the thunk more complicated. We can clean that up by wrapping the call to the thunk in something nicer:

    * (funcall (y (lambda (thunk)
                    (flet ((factorial (&rest args)
                             (apply (funcall thunk) args)))
                      (lambda (x)
                        (if (zerop x)
                            1
                            (* x (factorial (- x 1))))))))
               6)
    720

And we can even go so far as to hoist that wrapper back up into the definition of y:

    (defun y1 (g)
      (let ((d (lambda (f)
                 (funcall g (lambda (&rest args)
                              (apply (funcall f f) args))))))
        (funcall d d)))

    * (funcall (y1 (lambda (factorial)
                     (lambda (x)
                       (if (zerop x)
                           1
                           (* x (funcall factorial (- x 1)))))))
               6)
    720

y1 is an alternative formulation of the Y operator where we've η-expanded the recursive call to avoid the runaway recursion.

The η-expanded version of the applicative order Y operator has the advantage that it is convenient for defining recursive functions. The thunkified version is less convenient because you have to force the thunk before using it, but it allows you to use the Y operator to define recursive data structures as well as functions:

    (Y (lambda (delayed-ones)
         (cons-stream 1 (delayed-ones))))
    {1 …}

The argument to the thunkified Y operator is itself a procedure of one argument, the thunk. Y returns the result of calling its argument. Y should return a procedure, so the argument to Y should return a procedure. But it doesn't have to immediately return a procedure, it just has to eventually return a procedure, so we could, for example, print something before returning the procedure:

    * (funcall (Y (lambda (thunk)
                    (format t "~%Returning a procedure")
                    (lambda (x)
                      (if (zerop x)
                          1
                          (* x (funcall (funcall thunk) (- x 1)))))))
               6)
    Returning a procedure
    Returning a procedure
    Returning a procedure
    Returning a procedure
    Returning a procedure
    Returning a procedure
    720

There is one caveat. You must be able to return the procedure without attempting to make the recursive call.

Let's transform the returned function before returning it by applying an arbitrary function h to it:

    (Y (lambda (thunk)
         (h (lambda (x)
              (if (zerop x)
                  1
                  … )))))

Ok, so now when we (funcall thunk) we don't get what we want, we've got an invocation of h around it. If we have an inverse to h, h-1, available, we can undo it:

    (y (lambda (thunk)
         (h (lambda (x)
              (if (zerop x)
                  1
                  (* x (funcall (h-1 (funcall thunk)) (- x 1))))))))

As a concrete example, we return a list, and at the call site we extract the first element of that list before calling it:

    * (funcall (car (y (lambda (thunk)
                         (list (lambda (x)
                                 (if (zerop x)
                                     1
                                     (* x (funcall (car (funcall thunk)) (- x 1)))))))))
               6)
    720

So we can return a list of mutually recursive functions:

    (y (lambda (thunk)
         (list
          ;; even?
          (lambda (n)
            (or (zerop n)
                (funcall (cadr (funcall thunk)) (- n 1))))
          ;; odd?
          (lambda (n)
            (and (not (zerop n))
                 (funcall (car (funcall thunk)) (- n 1)))))))

If we use the η-expanded version of the Y operator, then we can adapt it to expect a list of mutually recursive functions on the recursive call:

    (defun y* (&rest g-list)
      (let ((d (lambda (f)
                 (map 'list
                      (lambda (g)
                        (lambda (&rest args)
                          (apply (apply g (funcall f f)) args)))
                      g-list))))
        (funcall d d)))

which we could use like this:

    * (let ((eo (y* (lambda (even? odd?)
                      (declare (ignore even?))
                      (lambda (n)
                        (or (zerop n) (funcall odd? (- n 1)))))
                    (lambda (even? odd?)
                      (declare (ignore odd?))
                      (lambda (n)
                        (and (not (zerop n)) (funcall even? (- n 1))))))))
        (let ((even? (car eo))
              (odd? (cadr eo)))
          (do ((i 0 (+ i 1)))
              ((>= i 5))
            (format t "~%~d, ~s ~s" i (funcall even? i) (funcall odd? i)))))
    0, T NIL
    1, NIL T
    2, T NIL
    3, NIL T
    4, T NIL

Instead of returning a list of mutually recursive functions, we could return them as multiple values. We just have to be expecting multiple values at the call site:

    (defun y* (&rest gs)
      (let ((d (lambda (f)
                 (apply #'values
                        (map 'list
                             (lambda (g)
                               (lambda (&rest args)
                                 (apply (multiple-value-call g (funcall f f)) args)))
                             gs)))))
        (funcall d d)))

MIT Scheme used to have a construct called a named lambda. A named lambda has an extra first argument that is automatically filled in with the function itself. So during evaluation of the body of a named lambda, the name is bound to the named lambda, enabling the function to call itself recursively:

    (defmacro named-lambda ((name &rest args) &body body)
      `(y1 (lambda (,name) (lambda ,args ,@body))))

    * (funcall (named-lambda (factorial x)
                 (if (zerop x)
                     1
                     (* x (funcall factorial (- x 1)))))
               6)
    720

This leads us to named let expressions. In a named let, the implicit lambda that performs the let bindings is a named lambda. Using that name to invoke the lambda on a different set of arguments is like recursively re-doing the let.

    * (named-let fact ((x 6))
        (if (zerop x)
            1
            (* x (funcall fact (- x 1)))))
    720

In Scheme, you use letrec to define recursive or mutually recursive procedures. Internal definitions expand into an appropriate letrec. letrec achieves the necessary circularity not through the Y operator, but through side effects. It is hard to tell the difference, but there is a difference. Using the Y operator would allow you to have recursion, but avoid the implicit side effects in a letrec.
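
The side-effect strategy letrec relies on can be sketched in Common Lisp: bind the name first, then assign the function into it, so the closure refers back to the (now mutated) binding. This is an illustration of the idea, not Scheme's actual expansion:

```lisp
;; letrec via side effects: bind, then mutate. The lambda closes
;; over FACT, so by the time it is called, FACT holds the lambda itself.
(defun letrec-factorial (n)
  (let ((fact nil))
    (setq fact (lambda (k)
                 (if (zerop k)
                     1
                     (* k (funcall fact (- k 1))))))
    (funcall fact n)))

;; (letrec-factorial 5) => 120
```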

Oleg Kiselyov has more to say about the Y operator at

Planet Lisp | 22-May-2021 00:07

Joe Marshall: β-conversion

If you have an expression that is an application, and the operator of the application is a lambda expression, then you can β-reduce the application by substituting the arguments of the application for the bound variables of the lambda within the body of the lambda.

    (defun beta (expression if-reduced if-not-reduced)
      (if (application? expression)
          (let ((operator (application-operator expression))
                (operands (application-operands expression)))
            (if (lambda? operator)
                (let ((bound-variables (lambda-bound-variables operator))
                      (body (lambda-body operator)))
                  (if (same-length? bound-variables operands)
                      (funcall if-reduced
                               (xsubst body
                                       (table/extend* (table/empty)
                                                      bound-variables
                                                      operands)))
                      (funcall if-not-reduced)))
                (funcall if-not-reduced)))
          (funcall if-not-reduced)))

    * (beta '((lambda (x y) (lambda (z) (* x y z))) a (+ z 3))
            #'identity
            (constantly nil))
    (LAMBDA (#:Z460) (* A (+ Z 3) #:Z460))

A large, complex expression may or may not have subexpressions that can be β-reduced. If neither an expression nor any of its subexpressions can be β-reduced, then we say the expression is in “β-normal form”. We may be able to reduce an expression to β-normal form by β-reducing where possible. A β-reduction can introduce further reducible expressions if we substitute a lambda expression for a symbol in operator position, so reducing to β-normal form is an iterative process in which we continue to reduce any reducible expressions that arise from substitution.

    (defun beta-normalize-step (expression)
      (expression-dispatch expression
        ;; case application
        (lambda (subexpressions)
          ;; Normal order reduction
          ;; First, try to beta reduce the outermost application,
          ;; otherwise, recursively descend the subexpressions, working
          ;; from left to right.
          (beta expression
                #'identity
                (lambda ()
                  (labels ((l (subexpressions)
                             (if (null subexpressions)
                                 '()
                                 (let ((new-sub (beta-normalize-step (car subexpressions))))
                                   (if (eq new-sub (car subexpressions))
                                       (let ((new-tail (l (cdr subexpressions))))
                                         (if (eq new-tail (cdr subexpressions))
                                             subexpressions
                                             (cons (car subexpressions) new-tail)))
                                       (cons new-sub (cdr subexpressions)))))))
                    (let ((new-subexpressions (l subexpressions)))
                      (if (eq new-subexpressions subexpressions)
                          expression
                          (make-application new-subexpressions)))))))
        ;; case lambda
        (lambda (bound-variables body)
          (let ((new-body (beta-normalize-step body)))
            (if (eql new-body body)
                expression
                (make-lambda bound-variables new-body))))
        ;; case symbol
        (constantly expression)))

    ;;; A normalized expression is a fixed point of the
    ;;; beta-normalize-step function.
    (defun beta-normalize (expression)
      (do ((expression expression (beta-normalize-step expression))
           (expression1 '() expression)
           (count 0 (+ count 1)))
          ((eq expression expression1)
           (format t "~%~d beta reductions" (- count 1))
           expression)))

You can compute just by using β-reduction. Here we construct an expression that reduces to the factorial of 3. We only have β-reduction, so we have to encode numbers with Church encoding.

    (defun test-form ()
      (let ((table (table/extend* (table/empty)
                                  '(one three * pred zero? y)
                                  '(;; Church numeral one
                                    (lambda (f) (lambda (x) (f x)))
                                    ;; Church numeral three
                                    (lambda (f) (lambda (x) (f (f (f x)))))
                                    ;; * (multiply Church numerals)
                                    (lambda (m n) (lambda (f) (m (n f))))
                                    ;; pred (subtract 1 from Church numeral)
                                    (lambda (n)
                                      (lambda (f)
                                        (lambda (x)
                                          (((n (lambda (g) (lambda (h) (h (g f)))))
                                            (lambda (u) x))
                                           (lambda (u) u)))))
                                    ;; zero? (test if Church numeral is zero)
                                    (lambda (n t f) ((n (lambda (x) f)) t))
                                    ;; Y operator for recursion
                                    (lambda (f)
                                      ((lambda (x) (f (x x)))
                                       (lambda (x) (f (x x))))))))
            (expr '((lambda (factorial) (factorial three))
                    (y (lambda (fact)
                         (lambda (x)
                           (zero? x one (* (fact (pred x)) x))))))))
        (xsubst expr table)))

    * (test-form)
    ((LAMBDA (FACTORIAL) (FACTORIAL (LAMBDA (F) (LAMBDA (X) (F (F (F X)))))))
     ((LAMBDA (F) ((LAMBDA (X) (F (X X))) (LAMBDA (X) (F (X X)))))
      (LAMBDA (FACT)
        (LAMBDA (X)
          ((LAMBDA (N T F) ((N (LAMBDA (X) F)) T))
           X
           (LAMBDA (F) (LAMBDA (X) (F X)))
           ((LAMBDA (M N) (LAMBDA (F) (M (N F))))
            (FACT ((LAMBDA (N)
                     (LAMBDA (F)
                       (LAMBDA (X)
                         (((N (LAMBDA (G) (LAMBDA (H) (H (G F)))))
                           (LAMBDA (U) X))
                          (LAMBDA (U) U)))))
                   X))
            X))))))

    * (beta-normalize (test-form))
    127 beta reductions
    (LAMBDA (F) (LAMBDA (X) (F (F (F (F (F (F X))))))))

This is the Church numeral for 6.
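
To check a result like this by hand, a Church numeral can be decoded by applying it to 1+ and 0. Here is a small sketch in Common Lisp, where the explicit funcalls that the post's expression language omits are required (church->integer is a name I'm introducing for illustration):

```lisp
;; Decode a Church numeral N by counting applications of #'1+.
(defun church->integer (n)
  (funcall (funcall n #'1+) 0))

;; The normalized result above, written with explicit funcalls:
(church->integer
 (lambda (f)
   (lambda (x)
     (funcall f (funcall f (funcall f
       (funcall f (funcall f (funcall f x)))))))))
;; => 6
```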

I find it pretty amazing that we can bootstrap ourselves up to arithmetic just by repeatedly β-reducing where we can. It doesn't seem like we're actually doing any work. We're just replacing names with what they stand for.

The β-substitution above replaces all the bound variables with their arguments if there is the correct number of arguments. One could easily implement a partial β-substitution that replaced only some of the bound variables. You'd still have an application, but some of the bound variables in the lambda would be eliminated and the corresponding argument would be removed.

Planet Lisp | 15-May-2021 19:51

Joe Marshall: Substitution

In McCarthy's early papers on Lisp, he notes that he needs a modified version of subst that is aware of quoted expressions (and avoids substituting within them). He would also need a subst that is aware of lambda expressions: it would have to avoid substituting when the name being substituted matches one of the bound variables, and, to be useful for evaluation, it would have to deal with accidental variable capture when substituting within a lambda.

The root problem is that expressions are actually structured objects, but we are working with the list representation of those objects. Instead of substituting by operating on objects, we substitute on the list representation. We have to arrange for the syntactic substitution on the list representation to preserve the semantics of substitution on the objects they represent.

In the substitution model, we take a symbolic expression and replace some of the atoms in the expression with other expressions. We first need a way to discriminate between the different kinds of expressions. An expression is either an atomic symbol, or a list of expressions called an application. There are no other kinds of expressions.

    (defun expression-dispatch (expression if-symbol if-application)
      (cond ((symbolp expression) (funcall if-symbol expression))
            ((consp expression) (funcall if-application expression))
            (t (error "~s is not an expression." expression))))

Substitution is straightforward:

    (defun xsubst (table expression)
      (expression-dispatch expression
        (lambda (symbol)
          (funcall table symbol #'identity (constantly symbol)))
        (lambda (subexpressions)
          (map 'list (lambda (subexpression)
                       (xsubst table subexpression))
               subexpressions))))

    * (let ((table (table/extend (table/empty) 'x '(* a 42))))
        (xsubst table '(+ x y)))
    (+ (* A 42) Y)

We need a table of multiple substitutions so that we can substitute in parallel:

    * (let ((table (table/extend (table/extend (table/empty) 'x 'y) 'y 'x)))
        (xsubst table '(+ x y)))
    (+ Y X)

So far, so good. Let's add lambda expressions. First, we need to add a new expression kind:

    (defun expression-dispatch (expression if-symbol if-lambda if-application)
      (cond ((symbolp expression) (funcall if-symbol expression))
            ((consp expression)
             (cond ((eq (car expression) 'lambda)
                    (funcall if-lambda (cadr expression) (caddr expression)))
                   (t (funcall if-application expression))))
            (t (error "~s is not an expression." expression))))

Substitution within a lambda expression is a bit tricky. First, you don't want to substitute a symbol if it is one of the bound variables of the lambda expression. Second, substituting a symbol may introduce more symbols. We don't want the new symbols to be accidentally captured by the bound variables in the lambda. If any new symbol has the same name as a bound variable, we have to rename the bound variable (and all its occurrences) to a fresh name so that it doesn't capture the new symbol being introduced. We'll need a helper function:

    (defun free-variables (expression)
      (expression-dispatch expression
        (lambda (symbol) (list symbol))
        (lambda (bound-variables body)
          (set-difference (free-variables body) bound-variables))
        (lambda (subexpressions)
          (fold-left #'union '() (map 'list #'free-variables subexpressions)))))

Now when we substitute within a lambda, we first find each free variable in the lambda, look it up in the substitution table, and collect the free variables of the substituted value:

    (map 'list (lambda (var)
                 (funcall table var #'free-variables (constantly '())))
         (free-variables expression))

This gives us the new free variables for each substitution. The union of all of these is the set of all the new free variables:

    (fold-left #'union '()
               (map 'list (lambda (var)
                            (funcall table var #'free-variables (constantly '())))
                    (free-variables expression)))

We have to rename the bound variables that are in this set:

    (intersection bound-variables
                  (fold-left #'union '()
                             (map 'list (lambda (var)
                                          (funcall table var #'free-variables (constantly '())))
                                  (free-variables expression))))

So we make a little table for renaming:

    (defun make-alpha-table (variables)
      (fold-left (lambda (table variable)
                   (table/extend table variable (gensym (symbol-name variable))))
                 (table/empty)
                 variables))

    (let ((alpha-table
           (make-alpha-table
            (intersection bound-variables
                          (fold-left #'union '()
                                     (map 'list (lambda (var)
                                                  (funcall table var #'free-variables (constantly '())))
                                          (free-variables expression)))))))
      …)

We rename the bound variables as necessary:

    (make-lambda (map 'list (lambda (symbol)
                              (funcall alpha-table symbol #'identity (constantly symbol)))
                      bound-variables)
                 …)

Finally, we redact the bound variables from the substitution table and append the alpha-table to make the substitutions we need for the lambda body:

    (make-lambda (map 'list (lambda (symbol)
                              (funcall alpha-table symbol #'identity (constantly symbol)))
                      bound-variables)
                 (xsubst (table/append alpha-table (table/redact* table bound-variables))
                         body))

The entire definition of xsubst is now this:

    (defun xsubst (table expression)
      (expression-dispatch expression
        (lambda (symbol)
          (funcall table symbol #'identity (constantly symbol)))
        (lambda (bound-variables body)
          (let ((alpha-table
                 (make-alpha-table
                  (intersection bound-variables
                                (fold-left #'union '()
                                           (map 'list (lambda (var)
                                                        (funcall table var #'free-variables (constantly '())))
                                                (set-difference (free-variables body) bound-variables)))))))
            (make-lambda (map 'list (lambda (symbol)
                                      (funcall alpha-table symbol #'identity (constantly symbol)))
                              bound-variables)
                         (xsubst (table/append alpha-table (table/redact* table bound-variables))
                                 body))))
        (lambda (subexpressions)
          (make-application (map 'list (lambda (subexpression)
                                         (xsubst table subexpression))
                                 subexpressions)))))

This is certainly more complicated than simple substitution, but we can see it does the right thing here:

    * (xsubst (table/extend (table/empty) 'x '(* a y))
              '(lambda (y) (+ x y)))
    (LAMBDA (#:Y234) (+ (* A Y) #:Y234))

It should be obvious how to add quoted forms. This would require adding a new kind of expression to expression-dispatch and a new handling clause in xsubst that avoids substitution.
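
A sketch of that extension, reusing the dispatch style above (the quote clause simply hands the quoted datum to its handler, which in xsubst would rebuild the quoted form untouched):

```lisp
;; Sketch: expression-dispatch extended with a quote case.
(defun expression-dispatch (expression if-symbol if-lambda if-quote if-application)
  (cond ((symbolp expression) (funcall if-symbol expression))
        ((consp expression)
         (cond ((eq (car expression) 'lambda)
                (funcall if-lambda (cadr expression) (caddr expression)))
               ((eq (car expression) 'quote)
                (funcall if-quote (cadr expression)))
               (t (funcall if-application expression))))
        (t (error "~s is not an expression." expression))))

;; The corresponding clause in xsubst avoids substitution entirely:
;; (lambda (quoted) (list 'quote quoted))
```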

I'm not completely happy with how we've added lambda expressions to the expression syntax. Using the symbol lambda as a syntactic marker for lambda expressions causes problems if we also want to use that symbol as an argument or variable. Initially, it seems reasonable to be able to name an argument “lambda”. Within the body of the function, references to the variable lambda would refer to that argument. But what about references in the operator position? By defining lambda expressions as three element lists beginning with the symbol lambda we've made it ambiguous with two-argument applications whose operator is the variable lambda. We have to resolve this ambiguity. The current behavior is that we always interpret the symbol lambda as a syntactic marker so you simply cannot use a variable named lambda as a function.

Planet Lisp | 10-May-2021 13:42

Nicolas Hafner: Eternia release and updated plans - May Kandria Update

Another hectic month gone by. Getting Eternia: Pet Whisperer done and published on Steam in such a short time was no small feat. There were a few hurdles along the way as well, especially when it came to getting it ready for Steam, but overall I'm really glad we managed to get it out. Having a bona fide released title is amazing!

Most of the trouble with getting it put on Steam was the manual review process they have. It turns out, real people actually check your builds before you're allowed to sell the game. And not only do they check whether it launches, they also do other tests like whether gamepad support works, whether captions are included, etc. It's a lot more extensive than I had expected, which is really nice!

Unfortunately I was also confused by a couple of options, and misunderstood others, so I had to go through the review several times, which caused a lot of stress for me and ultimately delayed the release of Eternia by two days. Welp! At least now I know for the future that I'll have to start the Steam review process well in advance of the actual release. With Eternia everything was so back to back that there really wasn't any more time for that than we had.

The days since the release haven't been much less hectic either. There were a few bugs that had to be ironed out that prevented people from playing the game. Most notably:

  • People with surround headphones could not play as those headphones announce themselves as 7.1 surround systems, which I didn't expect anyone would have.

  • Some bad versions of shared libraries snuck themselves into the release builds on Linux.

  • The save menu was bugged due to a regression in the UI library.

  • Avast antivirus causes random Windows exceptions in the game (this is not fixed and I don't know how to fix it).

There were also some minor content fixes for typos and such along the way. In any case, the automated crash report system I set up for Kandria helped a lot, as it told me when people were having issues. Still, it was heartbreaking every time I got a report, knowing that people couldn't even play the game and were likely very frustrated or disappointed.

I really hope the Kandria release will be smoother and less prone to issues now that we have a better handle on the process. Still, the number of issues that are only uncovered by having people with various PC setups try to run your game is... well, it's not great. One of the many perils of the PC platform, I suppose.

It hasn't yet been a full week so I don't really want to go into the statistics of the Eternia release, but I'll be sure to talk about that stuff in the next weekly update!

Anyway, aside from supporting the Eternia review and taking care of the marketing around that, I also did some planning for Kandria. I asked ProHelvetia, and it turns out the submission deadline for the grant is going to be 1st of September of this year. This gives us roughly four months to graft a tutorial onto the vertical slice, and polish the fuck out of it to make it look as good as possible.

A first step in that is ironing out the egregious bugs reported by people playing the public vertical slice, and making a new trailer that shows off all the new content we made. I've now started working on both of those things, and we should be able to finish the new trailer in the next week.

I initially also wanted to put out a patch for the vertical slice already, but while we did fix a number of bugs and make a few improvements, I'd rather wait until at least the rest of the known, egregious bugs are ironed out. Regardless, if you're on the Discord or the mailing list, we'll let you know when that patch hits!


In the last monthly I announced that you'll get to hear the pieces the three finalists put together, so here they are! I hope you enjoy listening to them!

While all three of them did a fantastic job and the decision was a hard one, we ultimately went with...


Hey everyone, my name is Mikel - nice to meet you! I'll be working as Kandria's composer from now on, which means I'll bang some notes on the piano and hope they sound good.

In my first week as part of the team, I've been working side by side with Shinmera on the game's new trailer. Taking inspiration from OSTs like Astral Chain and Octopath Traveller (as well as using our secret weapon, Julie Elven), we're cooking up something real good!

If you have any suggestions, any soundtracks you'd like me to check out, or any weird instruments/sounds you demand I incorporate in the music, please let me know on the Discord! It'd be great to have some feedback from you too :)

Ah yes, time for a shameless plug: if you'd like to listen to some other games I'm working on, feel free to peruse Underdesert (rogue-lite dungeon FPS game), Waves of Steel (naval battle!), Cryptonom (if you like monster RPGs), or Heredity (good ol' fantasy ARPG).

You can also check out my website for more deets.

Look forward to showing you what's next!

What's next

Here's a rough roadmap for the rest of the year:

  • Make the combat more flashy

  • Finish a new trailer

  • Design and outline the factions for the rest of the game

  • Develop the soundscape for Kandria and start working on the tracks for region 1

  • Add a music system that can do layering and timed transitions

  • Build and add the tutorial sequence to the beginning of the game

  • Polish the ProHelvetia submission material

  • Polish and revise the combat design

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Start work on the horizontal slice

Of course the size of the items varies a lot, but hopefully we should be able to begin work on the horizontal slice by November. If we extrapolate the time it took for the vertical slice and cut some content, this should leave us with enough time to finish by early 2023... provided we don't run outta money.

Anyway, that's why the grant is the primary target for now. Though we were also accepted by ProHelvetia for GDC Online and the SwissNex US Game Industry Week, so that'll give us some more opportunities to look for a publisher. Having a new trailer for that should definitely help a lot!

Well, see you next time then! Remember that there's a mailing list with weekly updates, a discord for the community, and my twitter with short videos and images on the development! Oh, and do check out Eternia if you haven't yet. We've gotten some really nice reviews for it, which has been a joy to see!

Planet Lisp | 09-May-2021 12:31

Joe Marshall: Lightweight table

You don't need a data structure to make a lookup table. You can make a table just out of the lookup function. In this example, we start with a continuation passing style lookup function:

    lookup (key if-found if-not-found)

Invokes (funcall if-found value) if key is in the table, invokes (funcall if-not-found) otherwise.

An empty table just invokes the if-not-found continuation:

    (defun table/empty ()
      (lambda (key if-found if-not-found)
        (declare (ignore key if-found))
        (funcall if-not-found)))

A table can be extended by wrapping it:

    (defun table/extend (table key* value)
      (lambda (key if-found if-not-found)
        (if (eql key key*)
            (funcall if-found value)
            (funcall table key if-found if-not-found))))

So let's try it out:

    (defvar *table-1*
      (table/extend (table/extend (table/empty) 'foo 42) 'bar 69))

    * (funcall *table-1* 'foo #'identity (constantly 'not-found))
    42

    * (funcall *table-1* 'quux #'identity (constantly 'not-found))
    NOT-FOUND

You can also redact an entry from a table by wrapping the table:

    (defun table/redact (table redacted)
      (lambda (key if-found if-not-found)
        (if (eql key redacted)
            (funcall if-not-found)
            (funcall table key if-found if-not-found))))

    (defvar *table-2* (table/redact *table-1* 'foo))

    * (funcall *table-2* 'foo #'identity (constantly 'not-found))
    NOT-FOUND

Are there any advantages to implementing a table in this curious manner? Building a table by nesting a series of lookup steps leads to a linear lookup in linear space, so this kind of table should be more or less comparable to an alist for individual entries. Unlike a traditional table made with a data structure, you cannot enumerate the keys and values in the table. On the other hand, you gain the ability to map keys to values without having to enumerate the keys:

    (defun table/bind-predicate (table predicate value)
      (lambda (key if-found if-not-found)
        (if (funcall predicate key)
            (funcall if-found value)
            (funcall table key if-found if-not-found))))

    ;;; bind all even numbers to the symbol 'EVEN
    (defvar *table-3*
      (table/bind-predicate *table-2*
                            (lambda (n) (and (numberp n) (evenp n)))
                            'even))

    * (funcall *table-3* 6 #'identity (constantly 'not-found))
    EVEN

Or you can add a default value to an existing table:

    (defun table/add-default (table default-value)
      (lambda (key if-found if-not-found)
        (declare (ignore if-not-found))
        (funcall table key if-found
                 (lambda () (funcall if-found default-value)))))

    (defvar *table-4* (table/add-default *table-3* 'default))

    * (funcall *table-4* 'bar #'identity (constantly 'not-found))
    69

    * (funcall *table-4* 'xyzzy #'identity (constantly 'not-found))
    DEFAULT

Perhaps the biggest disadvantage of this implementation is the difficulty in inspecting a table:

    * *table-4*
    #

We can use the object inspector to peek inside the closure and maybe sleuth out what this table is made out of, but it isn't just an alist where we can print out the entries.

So far, we've defined a table as being a procedure with the (key if-found if-not-found) signature, but we can flip this around and say that any procedure with a (key if-found if-not-found) signature can be thought of as a table. For example, a regular expression matcher could be considered to be a table of strings (if that were a more useful model).
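The technique only needs closures, so it ports to pretty much any language. As a quick illustration (not part of the original post), here is the empty/extend/redact trio translated to Python, with names chosen to mirror the post's:

```python
def table_empty():
    # An empty table just invokes the if-not-found continuation.
    return lambda key, if_found, if_not_found: if_not_found()

def table_extend(table, key_, value):
    # Extend a table by wrapping it with one more lookup step.
    return lambda key, if_found, if_not_found: (
        if_found(value) if key == key_
        else table(key, if_found, if_not_found))

def table_redact(table, redacted):
    # Redact an entry by intercepting its key before the wrapped table sees it.
    return lambda key, if_found, if_not_found: (
        if_not_found() if key == redacted
        else table(key, if_found, if_not_found))

t1 = table_extend(table_extend(table_empty(), "foo", 42), "bar", 69)
assert t1("foo", lambda v: v, lambda: "not-found") == 42
assert t1("quux", lambda v: v, lambda: "not-found") == "not-found"
assert table_redact(t1, "foo")("foo", lambda v: v, lambda: "not-found") == "not-found"
```

Each wrapper adds one more lookup step, which is exactly where the linear time and space mentioned above comes from.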

Planet Lisp | 03-May-2021 16:35

Wimpie Nortje: User feedback during long running external processes.

Sometimes it may be necessary to execute an external command that takes a long time to complete, long enough that the user needs visual feedback while it is running to show that the process is still alive.

UIOP provides fantastic tools for running external commands in a portable manner but it was not obvious to me how to show the external command's output to the user while it was still busy. I also wanted to execute the external command in a synchronous fashion, i.e. my lisp application must wait for the external command to finish before continuing. The need for synchronicity sent me down the wrong path of using the synchronous uiop:run-program. It only returns when the external command has finished, which means you can only process the command output once it is completed.

I eventually realised I should use uiop:launch-program, the asynchronous version, and I came up with the following solution. In the example below the (ping) function pings a website and prints the results as they become available. Afterwards it returns the exit code of the ping command.

    (defun ping ()
      (let (proc out exit-code)
        (unwind-protect
            (progn
              (setf proc (uiop:launch-program (list "ping" "-c" "5" "")
                                              :ignore-error-status t
                                              :output :stream))
              (setf out (uiop:process-info-output proc))
              (iter (while (uiop:process-alive-p proc))
                    (iter (while (listen out))
                          (write-char (read-char out) *STANDARD-OUTPUT*))
                    ;; ... Maybe do something here
                    (sleep 0.5))
              (uiop:copy-stream-to-stream out *STANDARD-OUTPUT* :linewise t))
          (setf exit-code (uiop:wait-process proc))
          (uiop:close-streams proc))
        exit-code))

In the first example the command's output is shown to the user but it is not processed in any other way. If you need to do some extra processing on it after completion then the next example should provide a good starting point.

    (defun ping-processing ()
      (let (proc out exit-code output)
        (with-open-stream (output-stream (make-string-output-stream))
          (with-open-stream (broadcast-stream
                             (make-broadcast-stream *STANDARD-OUTPUT* output-stream))
            (unwind-protect
                (progn
                  (setf proc (uiop:launch-program (list "ping" "-c" "5" "")
                                                  :ignore-error-status t
                                                  :output :stream))
                  (setf out (uiop:process-info-output proc))
                  (iter (while (uiop:process-alive-p proc))
                        (iter (while (listen out))
                              (write-char (read-char out) broadcast-stream))
                        (sleep 0.5))
                  (uiop:copy-stream-to-stream out broadcast-stream :linewise t))
              (setf exit-code (uiop:wait-process proc))
              (uiop:close-streams proc))
            (setf output (get-output-stream-string output-stream))
            ;; ... process output here
            exit-code))))
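For comparison, the same shape is easy to express in other languages; a rough Python equivalent using subprocess (with a stand-in shell command instead of ping, since the original elides the host) looks like this:

```python
import subprocess
import sys

def run_with_feedback(cmd):
    # Launch asynchronously (the rough analog of uiop:launch-program) so output
    # can be streamed to the user while the command is still running.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    captured = []
    try:
        for line in proc.stdout:    # lines arrive as the process produces them
            sys.stdout.write(line)  # live feedback for the user...
            captured.append(line)   # ...while also keeping a copy to process later
    finally:
        exit_code = proc.wait()     # the analog of uiop:wait-process
        proc.stdout.close()
    return exit_code, "".join(captured)

code, out = run_with_feedback(["sh", "-c", "echo one; sleep 0.2; echo two"])
assert code == 0
assert out == "one\ntwo\n"
```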

Planet Lisp | 02-May-2021 02:00

Eric Timmons: New Project: adopt-subcommands

I have just released a new project: adopt-subcommands. This project extends the excellent Adopt library with support for arbitrarily nested subcommands. See the README for more information.

I have just asked that it be included in Quicklisp, so hopefully it will be present in the next QL release.


After bouncing around between CL command line processing libraries for a while (including CLON, unix-opts, and another I forget), I tried Adopt shortly after it was released and immediately got hooked. It was just super easy to use, and it used functions as the default way to define interfaces (which encouraged reuse and programmatic generation). To be fair, other libraries have similar features, but there's just something about Adopt that clicked with me.

The big thing missing for me was easy support for subcommands. Libraries like CLON support that out of the box, but (at least in CLON's case) require that you completely specify every option at the terminal nodes. I wanted to define a folder-like hierarchy where options defined at some level get automatically applied to everything below it as well.

I was able to hack together a solution using Adopt, but I built it in a hurry and it was definitely not fit for general consumption. Since then, I was inspired by Steve Losh's Reddit comment giving an example of how he'd make a simple subcommand CLI parser using Adopt. His post made me realize I missed the existence of the adopt:treat-as-argument restart (d'oh!) and after that, all the pieces fell into place on how to cleanly rewrite my solution. This library is the result!

Nifty Features

I work with a number of programs written in golang that (IMO) have atrocious CLI handling (like helmfile and Kaniko). Maybe it's the individual program's fault, but it's endemic enough that I suspect whatever CLI parser the golang community has landed on is just terrible.^1

For instance, position of the options matters. "Global" options have to come before the subcommand is even specified. So foo --filter=hi run can have a completely different meaning than foo run --filter=hi. Additionally, some of the subcommand style programs I work with don't print all the options if you ask for help, they only print the options associated with the most recent subcommand.

Needless to say, I made sure adopt-subcommands didn't exhibit any of these behaviors. As this library is parsing the command line, it builds up a path of the folders (and eventually the terminal command) it passes through. This path can be passed to adopt-subcommands:print-help to print a help string that includes all the associated options. Additionally, options can come at any point after the subcommand that defines them.^2

There are two major differences between Adopt and this library:

  1. You need to provide a function when you define a terminal subcommand. This function will be called with the results of the parsing when you dispatch.
  2. The dispatch function has a keyword argument :print-help-and-exit. If you provide the symbol naming your help option, this library will automatically print the help and exit when that option is specified, after doing as much parsing as possible.

Give it a try and let me know of any issues that you find!

1: Although it wouldn't surprise me if some gophers started arguing that it's totally on purpose, is actually quite elegant, blah blah blah. I kid. I'm just salty about golang's lack of conditions and insistence on using tabs, and I let that color my take on the entire language.

2: It would be possible to let them come before as well, but at the risk of introducing ambiguity. It's not clear to me that it's worth it.

Planet Lisp | 22-Apr-2021 04:00

Joe Marshall: η-conversion and tail recursion

Consider this lambda expression: (lambda (x) (sqrt x)). This function simply calls sqrt on its argument and returns whatever sqrt returns. There is no argument you could provide to this function that would cause it to return a different result than you would get from calling sqrt directly. We say that this function and the sqrt function are extensionally equal. We can replace this lambda expression with a literal reference to the sqrt function without changing the value produced by our code.

You can go the other way, too. If you find a literal reference to a function, you can replace it with a lambda expression that calls the function. This is η-conversion. η-reduction is removing an unnecessary lambda wrapper; η-expansion is introducing one.

η-conversion comes with caveats. First, it only works on functions. If I have a string "foo", and I attempt to η-expand this into (lambda (x) ("foo" x)), I get nonsense. Second, a reduction strategy that incorporates η-reduction can be weaker than one that does not. Consider this expression: (lambda (x) ((compute-f) x)). We can η-reduce this to (compute-f), but this makes a subtle difference. When wrapped with the lambda, (compute-f) is evaluated just before it is applied to x. In fact, we won't call (compute-f) unless we invoke the result of the lambda expression somewhere. However, once η-reduced, (compute-f) is evaluated at the point the original lambda was evaluated, which can be quite a bit earlier.
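To make the timing difference concrete, here is a small illustration (in Python, as a translation for the sake of a runnable example): the η-expanded wrapper delays evaluating (compute-f), while the η-reduced form evaluates it immediately.

```python
calls = []

def compute_f():
    # Stand-in for an expensive or side-effecting computation of a function.
    calls.append("computed")
    return lambda x: x + 1

# Eta-expanded form: compute_f() only runs when the wrapper is invoked.
wrapped = lambda x: compute_f()(x)
assert calls == []              # nothing has been computed yet

# Eta-reduced form: compute_f() runs immediately, at definition time.
reduced = compute_f()
assert calls == ["computed"]

# Both are extensionally equal as functions of x...
assert wrapped(10) == reduced(10) == 11
# ...but the wrapped version re-runs compute_f() on every call.
assert calls == ["computed", "computed"]
```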

When a function foo calls another function bar as a subproblem, an implicit continuation is passed to bar. bar invokes this continuation on the return value that it computes. We can characterize this continuation like this:

    kbar = (lambda (return-value) (kfoo (finish-foo return-value)))

This just says that when bar returns, we'll finish running the code in foo and further continue by invoking the continuation supplied to foo.

If foo makes a tail call to bar, then foo is just returning what bar computes. There is no computation for foo to finish, so the continuation is just

    kbar = (lambda (return-value) (kfoo return-value))

But this η-reduces to just kfoo, so we don't have to allocate a new trivial continuation when foo tail calls bar; we can just pass along the continuation that was passed to foo.
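Sketching the two cases with explicit continuations (in Python, again a translation for the sake of a runnable example):

```python
# Continuation-passing style: each function receives its continuation k
# explicitly instead of returning to an implicit caller.

def bar(x, k):
    return k(x * 2)

def foo_subproblem(x, k):
    # bar is a subproblem: foo still has work to do after bar "returns",
    # so it must allocate a wrapper continuation:
    #   k_bar = (lambda (return-value) (k_foo (finish-foo return-value)))
    return bar(x, lambda rv: k(rv + 1))

def foo_tail(x, k):
    # bar is a tail call: the wrapper would be lambda rv: k(rv), which
    # eta-reduces to k itself, so foo just passes its own continuation along.
    return bar(x, k)

assert foo_subproblem(3, lambda v: v) == 7   # (3 * 2) + 1
assert foo_tail(3, lambda v: v) == 6         # 3 * 2
```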

Tail recursion is equivalent to η-reducing the implicit continuations to functions where possible. A Scheme aficionado might prefer to say avoiding η-expanding where unnecessary.

This is a mathematical curiosity, but does it have practical significance? If you're programming in continuation passing style, you should be careful to η-reduce (or avoid η-expanding) your code.

Years ago I was writing an interpreter for the REBOL language. I was getting frustrated trying to make it tail recursive. I kept finding places in the interpreter where the REBOL source code was making a tail call, but the interpreter itself wasn't, so the stack would grow without bound. I decided to investigate the problem by rewriting the interpreter in continuation passing style and seeing why I couldn't η-convert the tail calls. Once in CPS, I could see that eval took two continuations and I could achieve tail recursion by η-reducing one of them.

Planet Lisp | 19-Apr-2021 20:28

Wimpie Nortje: Process sub-command style command line options with Adopt.

How to process sub-command style command line arguments is a question that arises more and more. Many of the basic option handling libraries cannot handle this at all, or they make it very difficult to do so.

One of the newer libraries in the option processing field is Adopt by Steve Losh. It was not designed to handle sub-commands, but it is in fact very capable of doing so without having to jump through too many hoops.

In a Reddit thread someone asked if Adopt can handle sub-command processing and Steve answered with the following example:

    (eval-when (:compile-toplevel :load-toplevel :execute)
      (ql:quickload '(:adopt) :silent t))

    (defpackage :subex
      (:use :cl)
      (:export :toplevel *ui*))

    (in-package :subex)

    ;;;; Global Options and UI ----------------------------------------------------

    (defparameter *o/help*
      (adopt:make-option 'help
                         :long "help"
                         :help "display help and exit"
                         :reduce (constantly t)))

    (defparameter *o/version*
      (adopt:make-option 'version
                         :long "version"
                         :help "display version and exit"
                         :reduce (constantly t)))

    (defparameter *ui/main*
      (adopt:make-interface
       :name "subex"
       :usage "[subcommand] [options]"
       :help "subcommand example program"
       :summary "an example program that uses subcommands"
       :contents (list *o/help* *o/version*)))

    (defparameter *ui* *ui/main*)

    ;;;; Subcommand Foo -----------------------------------------------------------

    (defparameter *o/foo/a*
      (adopt:make-option 'a
                         :result-key 'mode
                         :short #\a
                         :help "run foo in mode A"
                         :reduce (constantly :a)))

    (defparameter *o/foo/b*
      (adopt:make-option 'b
                         :result-key 'mode
                         :short #\b
                         :help "run foo in mode B"
                         :reduce (constantly :b)))

    (defparameter *ui/foo*
      (adopt:make-interface
       :name "subex foo"
       :usage "foo [-a|-b]"
       :summary "foo some things"
       :help "foo some things"
       :contents (list *o/foo/a* *o/foo/b*)))

    (defun run/foo (mode)
      (format t "Running foo in ~A mode.~%" mode))

    ;;;; Subcommand Bar -----------------------------------------------------------

    (defparameter *o/bar/meow*
      (adopt:make-option 'meow
                         :long "meow"
                         :help "meow loudly after each step"
                         :reduce (constantly t)))

    (defparameter *ui/bar*
      (adopt:make-interface
       :name "subex bar"
       :usage "bar [--meow] FILE..."
       :summary "bar some files"
       :help "bar some files"
       :contents (list *o/bar/meow*)))

    (defun run/bar (paths meow?)
      (dolist (p paths)
        (format t "Bar-ing ~A.~%" p)
        (when meow?
          (write-line "meow."))))

    ;;;; Toplevel -----------------------------------------------------------------

    (defun toplevel/foo (args)
      (multiple-value-bind (arguments options)
          (adopt:parse-options-or-exit *ui/foo* args)
        (unless (null arguments)
          (error "Foo does not take arguments, got ~S" arguments))
        (run/foo (gethash 'mode options))))

    (defun toplevel/bar (args)
      (multiple-value-bind (arguments options)
          (adopt:parse-options-or-exit *ui/bar* args)
        (when (null arguments)
          (error "Bar requires arguments, got none."))
        (run/bar arguments (gethash 'meow options))))

    (defun lookup-subcommand (string)
      (cond ((null string) (values nil *ui/main*))
            ((string= string "foo") (values #'toplevel/foo *ui/foo*))
            ((string= string "bar") (values #'toplevel/bar *ui/bar*))
            (t (error "Unknown subcommand ~S" string))))

    (defun toplevel ()
      (sb-ext:disable-debugger)
      (multiple-value-bind (arguments global-options)
          (handler-bind ((adopt:unrecognized-option 'adopt:treat-as-argument))
            (adopt:parse-options *ui/main*))
        (when (gethash 'version global-options)
          (write-line "1.0.0")
          (adopt:exit))
        (multiple-value-bind (subtoplevel ui) (lookup-subcommand (first arguments))
          (when (or (null subtoplevel) (gethash 'help global-options))
            (adopt:print-help-and-exit ui))
          (funcall subtoplevel (rest arguments)))))

Planet Lisp | 19-Apr-2021 02:00

Quicklisp news: April 2021 Quicklisp dist update now available

 New projects

  • cluffer — Library providing a protocol for text-editor buffers. — FreeBSD, see file LICENSE.text
  • data-frame — Data frames for Common Lisp — MS-PL
  • dfio — Common Lisp library for reading data from text files (eg CSV). — MS-PL
  • herodotus — Wrapper around Yason JSON parser/encoder with convenience methods for CLOS — BSD
  • lisp-stat — A statistical computing environment for Common Lisp — MS-PL
  • numerical-utilities — Utilities for numerical programming — MS-PL
  • nyxt — Extensible web-browser in Common Lisp — BSD 3-Clause
  • shop3 — SHOP3 Git repository — Mozilla Public License
  • special-functions — Special functions in Common Lisp — MS-PL
  • tfeb-lisp-hax — TFEB.ORG Lisp hax — MIT

Updated projects: 3bmd, 3d-matrices, alexandria, algae, anypool, april, array-operations, async-process, audio-tag, bdef, bp, canonicalized-initargs, cffi, chanl, ci-utils, cl+ssl, cl-autowrap, cl-change-case, cl-clon, cl-collider, cl-colors2, cl-coveralls, cl-cxx, cl-data-structures, cl-digraph, cl-environments, cl-gamepad, cl-gserver, cl-heredoc, cl-json-pointer, cl-kraken, cl-las, cl-liballegro, cl-liballegro-nuklear, cl-markless, cl-marshal, cl-maxminddb, cl-mixed, cl-mock, cl-patterns, cl-rabbit, cl-ses4, cl-shlex, cl-ssh-keys, cl-str, cl-strings, cl-typesetting, cl-utils, cl-webkit, clack, clods-export, clog, closer-mop, common-lisp-jupyter, computable-reals, concrete-syntax-tree, consfigurator, cricket, croatoan, cubic-bezier, cytoscape-clj, damn-fast-priority-queue, dataloader, defconfig, definitions-systems, dexador, doplus, eazy-documentation, eclector, enhanced-defclass, femlisp, file-attributes, flac-metadata, freesound, functional-trees, gadgets, gendl, glacier, golden-utils, gtirb-capstone, gtirb-functions, gtwiwtg, harmony, helambdap, hunchenissr, hyperluminal-mem, imago, ironclad, json-mop, kekule-clj, lake, lass, lichat-protocol, linear-programming, linux-packaging, lisp-binary, listopia, magicl, maiden, markup, mcclim, mgl-pax, mito, multiposter, mutility, neural-classifier, nodgui, north, omer-count, origin, parachute, parsley, patchwork, perceptual-hashes, petalisp, plump, pngload, postmodern, qlot, quicklisp-stats, quilc, quri, random-uuid, sc-extensions, seedable-rng, sel, select, serapeum, shadow, shasht, slot-extra-options, sly, staple, static-dispatch, stripe, stumpwm, taglib, tfeb-lisp-tools, tfm, trivia, trivial-features, trivial-timer, ttt, umbra, umlisp, utilities.print-items, validate-list, vgplot, with-user-abort, zippy.

Removed projects: its

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Planet Lisp | 19-Apr-2021 01:41

Wimpie Nortje: A list of Common Lisp command line argument parsers.

I was searching for a command line option parser that can handle git-style sub-commands and found a whole bunch of libraries. It appears as if libraries on this topic proliferate more than usual.

I evaluated each one only to the point where I could decide to skip it or give it a cursory test. The information I gathered is summarised below.

If you only need the usual flag and option processing, i.e. not sub-commands, then I would suggest unix-opts. It appears to be the accepted standard and is actively maintained. It is also suggested by both Awesome Common Lisp and the State of the Common Lisp Ecosystem Survey 2020.

If your needs are very complex or specific you can investigate clon, utility-arguments or ace.flag.

For basic flags and options with sub-commands, there are a few libraries that explicitly support sub-command processing but you should be able to make it work with many of the other options and a bit of additional code.

    Name                    Print help  Native sub-commands  Notes
    ace.flag                ?           ?                    Not in QL.
    adopt                   Yes         No                   Can generate man files.
    apply-argv              No          No                   Does not handle -xyz as three flags.
    cl-just-getopt-parser   No          No                   Easy to use.
    cl-cli                  Yes         Yes
    cl-argparse             Yes         Yes
    cli-parser              No          No                   Does not handle free arguments; not in QL.
    clon                    ?           Yes                  Very complex, most feature rich.
    command-line-arguments  ?           ?                    Not well documented.
    getopt                  No          No                   Does not handle -xyz as three flags; not well documented.
    parse-args              No          No                   Not in QL.
    utility-arguments       ?           ?                    Complex to set up.
    unix-options            Yes         No                   Easy to use.
    unix-opts               Yes         No                   The standard recommendation.
Planet Lisp | 18-Apr-2021 02:00

Nicolas Hafner: Slicing Up the Game - April Kandria Update

What a hell of a month! We got a lot done, all of it culminating in the release of the new vertical slice demo! This demo is now live, and you can check it out for free! The slice includes an hour or more of content for you to explore, so we hope you enjoy it!

Visuals and Level Design

Like last month, a good chunk of this month was spent designing the remaining areas we needed for the slice. However, this is also the part that got the most shafted compared to how much time I should be investing in it. I'm going to have to dedicate a month or two at some point to just doing rough levels and figuring out what works, both for platforming challenges and for combat. So far I've never actually taken the time to do this, so I still feel very uncertain when it comes to designing stuff.

Still, I'm fairly happy with at least the visual look of things. Fred has done some excellent work with the additional tile work I've requested from him, and I'm starting to learn how to mash different tiles together to create new environments without having to create new assets all the time.

I've also spent some time on the side making new palettes for the stranger. This was mostly for fun, but I think allowing this kind of customisation for the player is also genuinely valuable. At least I always enjoy changing the looks of the characters I play to my liking. There's 32 palettes already, but I'm still open for more ideas if you have any, by the way!

We're not quite sure yet how we want to present the palettes in-game. Probably allowing you to pick between a few in the settings, and having some others as items you have to discover first.

Gameplay changes

We've gone over the combat some more and tweaked it further. It's still a good shot away from what I'd like it to be, and I'll probably have to spend a full month at some point to improve it. Whatever the case, what we have now is already miles ahead of how things started out.

The player movement has also been slightly tweaked to fit better for the exploration and kinds of levels we've built, and to overall feel a bit smoother. The exact changes are very subtle, though I hope you'll still notice them, even if just subconsciously!

I've also added elevators back into the game. That led to a bunch of days of frustrating collision problem fixes again, but still, elevators are an important part of the game, so I'm glad I've gotten around to adding them back in.

There's also been a bunch of improvements and fixes to the movement AI so NPCs can find their way better through the complicated mess of underground tunnels and caved in complexes.


Due to a number of people reporting problems with stutter, and generally the game showing slowdown even on my beefy machine, I put a bit of time into various optimisations. Chief among those is the reduction of produced garbage, which means the garbage collector will be invoked far less often, leading to fewer GC pauses stuttering up the framerate. There's still a lot left to be done for that, but I'll do that another time.

I also finally got around to implementing a spatial query data structure - this is extremely useful, as it massively reduces the time needed to do collision testing and so forth. What I've gone with is a much simplified bounding volume hierarchy (BVH), mostly because the concept is very simple to understand: you put every object in the scene into a box that encompasses it. You then group two such boxes at a time into another box that encompasses both, and you keep doing that until you get one last box that encompasses everything.

If you now want to know which objects are contained in a region, you start by testing the biggest box and descend into the smaller boxes for as long as the region still overlaps them. If this tree of boxes is well balanced (meaning the closest objects are grouped together), this drastically reduces the number of tests you need to make.

Implementing this was a surprisingly painless task that only took me about a day. Even if the BVH I have is most definitely not ideally balanced at every point in time, it's still good enough for now.
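To make the idea concrete, here is a tiny toy sketch of such a box tree and its query descent (in Python, with hypothetical helper names - this is an illustration, not Kandria's actual code). It uses 1D intervals for brevity; 2D bounding boxes work the same way with per-axis min/max.

```python
def leaf(obj, lo, hi):
    # A leaf wraps one object in the box that encompasses it.
    return {"box": (lo, hi), "obj": obj, "kids": None}

def node(a, b):
    # The parent box encompasses both children's boxes.
    (alo, ahi), (blo, bhi) = a["box"], b["box"]
    return {"box": (min(alo, blo), max(ahi, bhi)), "obj": None, "kids": (a, b)}

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def query(tree, region, out):
    # Test the biggest box first; only descend while the region
    # still overlaps the current box.
    if not overlaps(tree["box"], region):
        return
    if tree["kids"] is None:
        out.append(tree["obj"])
    else:
        for kid in tree["kids"]:
            query(kid, region, out)

tree = node(node(leaf("a", 0, 2), leaf("b", 3, 5)), leaf("c", 10, 12))
hits = []
query(tree, (4, 11), hits)
assert hits == ["b", "c"]   # "a" is culled at the leaf, never tested per-object
```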


As you may or may not know, Kandria is built with a custom engine, and includes a fully featured editor of its own. This editor is shipped along with every version of the game, and you can open it up at any time by pressing the section key (below Escape).

This month I've made a number of improvements to the editor, adding extra tools and fixing a lot of stability issues. This was necessary to make my own life designing levels not completely miserable, but I think the editor is now also approaching a level of usability that should make it approachable by people outside the dev team, like you!

There's a bit of public documentation on the editor, so if you're interested in messing around with the existing levels, or even building your own, check it out! We're still intending on organising a level design contest as well, though for that I want to take some time to polish the editor even more, so you'll have to wait a bit longer for that. If that sounds exciting to you though, be sure to join our Discord, as we'll be organising the event through there whenever it comes to be.


There's been a number of improvements to the game's user interface. Chief among them: dialogue choices are now displayed in a less confusing manner. There have also been some additions to the main menu to allow you to save & quit the game, check your quest log and inventory, and check the button mappings.

We've also included some more accessibility options so that you can change the UI scaling to your liking, pick between different fonts if the default is hard to read, and to disable or tweak things like the gamepad rumble strength or the camera shake intensity.

Unfortunately we haven't had time to build a button remapping UI yet, though the game is already capable of doing the remapping for you. We'll definitely build such a UI in time for the full first act demo, though.

If you have other suggestions for accessibility improvements, please do let me know. Accessibility is very important to me, and I'd like to make Kandria a good example in that domain.


Last month we put out a listing for a composer for Kandria. The response to that was frankly astounding. Within two days we had gotten over a hundred applications, and within the week I had to close the listing down again as we were getting close to three hundred in total!

I knew there were going to be lots of applications, but still, I didn't expect this big of a response. Processing everything and evaluating all the applications took a fair amount of time out of the month, and it was really, really hard, too. So many of the pieces I listened to over the course of doing this were really fantastic!

We're still not quite done with the evaluation, though. We managed to whittle the list down to 10 for interviews, and from there to 3 for a third round. This third round is still going on now; the three were paid to produce a one minute track of music for a specific section of Kandria. The production process, communication, and how well the piece ultimately fits to our vision are going to help us decide who to pick.

The three finalists, Jacob Lincke, João Luís, and Mikel Dale, have all agreed to be named publicly, and to have their pieces published once they're done. The deadline for that is 18th of April, so you'll get to hear what they made in the next monthly update! After the deadline we hope to also finalise a contract with our pick until the end of the month, so that they can start with us in May, or shortly after.

I've heard some drafts from each of them already, and what they've produced is really good stuff. It has made me so excited to finally be able to not only see, but also properly hear Kandria!


Gamedev isn't all about just developing though, as you also have to worry about organisation, management, planning, marketing, and funding. The last is another thing that ate some days' worth of time this month. We were chosen by ProHelvetia to participate in the Global Games Pitch and Pocket Gamer Connects Digital. We're of course very grateful for these opportunities, and it's fantastic to be able to present Kandria at some events despite Corona!

Still, pitching is a very stressful affair for me, so preparing for GGP and actually executing it took a good bite out of me. On the flipside, we now have some good quality pitching material that we can much more easily adapt and re-use in the future as well. I haven't heard back from anyone about the pitch I did, so I don't have any feedback on what was good or bad about it, which is a shame. I didn't really expect to get any feedback from it though, so I can't say I'm upset about it either.

In any case, PG Connects Digital is happening in a little less than two weeks from now, so I'll have to make sure to be ready for that whenever it comes about.

Tim's recount

We've reached the vertical slice deadline - the quests are done now and feeling pretty good I think. The dialogue and structures have been refined with feedback from Nick; there's also been a fair amount of self-testing, and a couple of weeks' testing from our Discord, which has all helped tighten things up. I feel like there's a good balance between plot, character development, player expression, and non-linearity, while also teasing aspects of the wider setting and story. I'm still not totally sure how much playtime the quests constitute right now; I think it largely depends on how fast a player is at the gameplay, and how much they want to engage with the dialogue - but the quests do take the player to the four corners of the current map, and there's some replayability in there too. It feels like a good chunk of content and a major part of the first act. I'm looking forward to seeing how people get on with them, and to learn from their feedback to tweak things further.

I've learnt lots of new scripting tricks in the dialogue engine to bring this together, which will be useful going forwards, and should make generating this amount of content much quicker in the future. Nick and I also have some ideas to improve the current quests, which we should be able to do alongside the next milestone's work.

This month I also helped Nick prepare for the Global Games Pitch event; it was great to watch the stream, and see how other developers pitched their projects. Hopefully this leads to some new opportunities for Kandria too!

Fred's recount

Added a lot of little things this month. Happy with the new content we got, though I wish I had been able to finish polishing the animations and attack moves on the Stranger for the vertical slice. I had kinda left those anims behind for a while, but I feel it's pretty helpful to gauge the combat feel better.

Otherwise, I am really stoked to get started with the game jam coming up. I love those, last one I did was for my birthday in 2019 and it was the best birthday present ever. :D

Going forward

As Fred mentioned, the next two weeks we'll be working on a new, secret project! But don't worry, it won't stay secret for very long, and we won't be putting Kandria off for long either. It's going to be a short two-week jam-type project, which we'll release at the end of the month, so you'll know what it is and get to play it by the next monthly update! If you're really curious though, you should sign up to our mailing list where we'll talk about the project next week already!

If you want to try out the new demo release, you'll get a download link when you subscribe, as well. I hope you enjoy it!

Planet Lisp | 11-Apr-2021 20:50

Joe Marshall: Can continuation passing style code perform well?

Continuation passing style is a powerful technique that allows you to abstract over control flow in your program. Here is a simple example: We want to look things up in a table, but sometimes the key we use is not associated with any value. In that case, we have to do something different, but the lookup code doesn't know what the caller wants to do, and the caller doesn't know how the lookup code works. Typically, we would arrange for the lookup code to return a special “key not found” value:

(let ((answer (lookup key table)))
  (if (eq answer 'key-not-found)
      ... handle missing key ...
      ... compute something with answer ...))

There are two minor problems with this approach. First, the “key not found” value has to be within the type returned by lookup. Consider a table that can only contain integers. Unfortunately, we cannot declare answer to be an integer because it might be the “key not found” value. Alternatively, we might decide to reserve a special integer to indicate “key not found”. The answer can then be declared an integer, but there is now a magic integer that cannot be stored in the table. Either way, the type of answer is a supertype of what can be stored in the table, and we have to project it back down by testing it against “key not found”.

The second problem is one of redundancy. Presumably, somewhere in the code for lookup there is a conditional for the case that the key hasn't been found. We take a branch and return the “key not found” value. But now the caller tests the return value against “key not found” and it, too, takes a branch. We only take the true branch in the caller if the true branch was taken in the callee and we only take the false branch in the caller if the false branch was taken in the callee. In essence, we are branching on the exact same condition twice. We've reified the control flow, injected the reified value into the space of possible return values, passed it through the function call boundary, then projected and reflected the value back into control flow at the call site.

If we write this in continuation passing style, the call looks like this:

(lookup key table
        (lambda (answer) ... compute something with answer ...)
        (lambda () ... handle missing key ...))

lookup will invoke the first lambda expression on the answer if it is found, but it will invoke the second lambda expression if the answer is not found. We no longer have a special “key not found” value, so answer can be exactly the type of what is stored in the table and we don't have to reserve a magic value. There is also no redundant conditional test in the caller.

This is pretty cool, but there are costs. The first is that it takes practice to read continuation passing style code. I suppose it takes practice to read any code, but some languages make it extra cumbersome to pass around the lambda expressions. (Some seem actively hostile to the idea.) It's just more obscure to be passing around continuations when direct style will do.

The second cost is one of performance and efficiency. The lambda expressions that you pass in to a continuation passing style program will have to be closed in the caller's environment, and this likely means storage allocation. When the callee invokes one of the continuations, it has to perform a function call. Finally, the lexically scoped variables in the continuation will have to be fetched from the closure's environment. Direct style performs better because it avoids all the lexical closure machinery and can keep variables in the local stack frame. For these reasons, you might have reservations about writing code in continuation passing style if it needs to perform.

Continuation passing style looks complicated, but you don't need a Sufficiently Smart™ compiler to generate efficient code from it. Here is lookup coded up to illustrate (note the funcalls, since in Common Lisp the continuations arrive as ordinary values):

(defun lookup (key table if-found if-not-found)
  (labels ((scan-entries (entries)
             (cond ((null entries) (funcall if-not-found))
                   ((eq (caar entries) key) (funcall if-found (cdar entries)))
                   (t (scan-entries (cdr entries))))))
    (scan-entries table)))

and a sample use might be:

(defun probe (thing)
  (lookup thing *special-table*
          (lambda (value) (format t "~s maps to ~s." thing value))
          (lambda () (format t "~s is not special." thing))))

Normally, probe would have to allocate two closures to pass in to lookup, and the code in each closure would have to fetch the lexical value of thing from the closure. But without changing either lookup or probe we can (declaim (inline lookup)). Obviously, inlining the call will eliminate the overhead of a function call, but watch what happens to the closures:

(defun probe (thing)
  ((lambda (key table if-found if-not-found)
     (labels ((scan-entries (entries)
                (cond ((null entries) (funcall if-not-found))
                      ((eq (caar entries) key) (funcall if-found (cdar entries)))
                      (t (scan-entries (cdr entries))))))
       (scan-entries table)))
   thing
   *special-table*
   (lambda (value) (format t "~s maps to ~s." thing value))
   (lambda () (format t "~s is not special." thing))))

A Decent Compiler™ will easily notice that key is just an alias for thing and that table is just an alias for *special-table*, so we get:

(defun probe (thing)
  ((lambda (if-found if-not-found)
     (labels ((scan-entries (entries)
                (cond ((null entries) (funcall if-not-found))
                      ((eq (caar entries) thing) (funcall if-found (cdar entries)))
                      (t (scan-entries (cdr entries))))))
       (scan-entries *special-table*)))
   (lambda (value) (format t "~s maps to ~s." thing value))
   (lambda () (format t "~s is not special." thing))))

and the expressions for if-found and if-not-found are side-effect free, so they can be inlined (and we expect the compiler to correctly avoid unexpected variable capture):

(defun probe (thing)
  ((lambda ()
     (labels ((scan-entries (entries)
                (cond ((null entries)
                       ((lambda () (format t "~s is not special." thing))))
                      ((eq (caar entries) thing)
                       ((lambda (value) (format t "~s maps to ~s." thing value))
                        (cdar entries)))
                      (t (scan-entries (cdr entries))))))
       (scan-entries *special-table*)))))

and the immediate calls to literal lambdas can be removed:

(defun probe (thing)
  (labels ((scan-entries (entries)
             (cond ((null entries) (format t "~s is not special." thing))
                   ((eq (caar entries) thing)
                    (format t "~s maps to ~s." thing (cdar entries)))
                   (t (scan-entries (cdr entries))))))
    (scan-entries *special-table*)))

Our Decent Compiler™ has removed all the lexical closure machinery and turned the calls to the continuations into direct code. This code has all the features we desire: there is no special “key not found” value to screw up our types; there is no redundant branch (the (null entries) test directly branches into the appropriate handling code); we do not allocate closures; and the variables that would have been closed over are now directly apparent in the frame.

It's a bit vacuous to observe that an inlined function performs better. Of course it does. At the very least you avoid a procedure call. But if you inline a continuation passing style function, any Decent Compiler™ will go to town and optimize away the continuation overhead. It's an unexpected bonus.

On occasion I find that continuation passing style is just the abstraction for certain code that is also performance critical. I don't give it a second thought. Continuation passing style can result in high-performance code if you simply inline the critical calls.

Planet Lisp | 10-Apr-2021 18:55

Joe Marshall: Early LISP Part II (Apply redux)

By April of 1959, issues with using subst to implement β-reduction became apparent. In the April 1959 Quarterly Progress Report of the Research Laboratory of Electronics, McCarthy gives an updated definition of the universal S-function apply:

apply[f;args]=eval[cons[f;appq[args]];NIL]

where

appq[m]=[null[m]→NIL;
         T→cons[list[QUOTE;car[m]];appq[cdr[m]]]]

and

eval[e;a]=[
  atom[e]→eval[assoc[e;a];a];
  atom[car[e]]→[
    car[e]=QUOTE→cadr[e];
    car[e]=ATOM→atom[eval[cadr[e];a]];
    car[e]=EQ→[eval[cadr[e];a]=eval[caddr[e];a]];
    car[e]=COND→evcon[cdr[e];a];
    car[e]=CAR→car[eval[cadr[e];a]];
    car[e]=CDR→cdr[eval[cadr[e];a]];
    car[e]=CONS→cons[eval[cadr[e];a];eval[caddr[e];a]];
    T→eval[cons[assoc[car[e];a];evlis[cdr[e];a]];a]];
  caar[e]=LABEL→eval[cons[caddar[e];cdr[e]];cons[list[cadar[e];car[e]];a]];
  caar[e]=LAMBDA→eval[caddar[e];append[pair[cadar[e];cdr[e]];a]]]

and

evcon[c;a]=[eval[caar[c];a]→eval[cadar[c];a];
            T→evcon[cdr[c];a]]

and

evlis[m;a]=[null[m]→NIL;
            T→cons[list[QUOTE;eval[car[m];a]];
                   evlis[cdr[m];a]]]

I find this a lot easier to understand if we transcribe it into modern Common LISP:

;;; Hey Emacs, this is -*- Lisp -*-

(in-package "CL-USER")

;; Avoid smashing the standard definitions.
(shadow "APPLY")
(shadow "ASSOC")
(shadow "EVAL")

(defun apply (f args)
  (eval (cons f (appq args)) nil))

(defun appq (m)
  (cond ((null m) nil)
        (t (cons (list 'QUOTE (car m)) (appq (cdr m))))))

(defun eval (e a)
  (cond ((atom e) (eval (assoc e a) a))
        ((atom (car e))
         (cond ((eq (car e) 'QUOTE) (cadr e))
               ((eq (car e) 'ATOM)  (atom (eval (cadr e) a)))
               ((eq (car e) 'EQ)    (eq (eval (cadr e) a) (eval (caddr e) a)))
               ((eq (car e) 'COND)  (evcon (cdr e) a))
               ((eq (car e) 'CAR)   (car (eval (cadr e) a)))
               ((eq (car e) 'CDR)   (cdr (eval (cadr e) a)))
               ((eq (car e) 'CONS)  (cons (eval (cadr e) a) (eval (caddr e) a)))
               (t (eval (cons (assoc (car e) a) (evlis (cdr e) a)) a))))
        ((eq (caar e) 'LABEL)
         (eval (cons (caddar e) (cdr e))
               (cons (list (cadar e) (car e)) a)))
        ((eq (caar e) 'LAMBDA)
         (eval (caddar e)
               (append (pair (cadar e) (cdr e)) a)))))

(defun evcon (c a)
  (cond ((eval (caar c) a) (eval (cadar c) a))
        (t (evcon (cdr c) a))))

(defun evlis (m a)
  (cond ((null m) nil)
        (t (cons (list 'QUOTE (eval (car m) a))
                 (evlis (cdr m) a)))))

;;; Modern helpers
(defun assoc (k l) (cadr (cl:assoc k l)))

(defun pair (ls rs) (map 'list #'list ls rs))

(defun testit ()
  (apply '(label ff (lambda (x) (cond ((atom x) x)
                                      ((quote t) (ff (car x))))))
         (list '((a . b) . c))))

There are a few things to notice about this. First, there is no code that inspects the value cell or function cell of a symbol. All symbols are evaluated by looking up the value in the association list a, so this evaluator uses one namespace. Second, the recursive calls to eval when evaluating combinations (the last clause of the inner cond and the LABEL and LAMBDA clauses) are in tail position, so this evaluator could be coded up tail-recursively. (It is impossible to say without inspecting the IBM 704 assembly code.)

What is most curious about this evaluator is the first clause in the outer cond in eval. This is where variable lookup happens. As you can see, we look up the variable by calling assoc, but once we obtain the value, we call eval on it. This LISP isn't storing values in the environment, but rather expressions that evaluate to values. If we look at the LAMBDA clause of the cond, the one that handles combinations that begin with lambda expressions, we can see that it does not evaluate the arguments to the lambda but instead associates the bound variables with the arguments' expressions. This therefore has call-by-name semantics rather than the modern call-by-value semantics.
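To make the call-by-name behaviour concrete, here is a condensed, self-contained sketch of just that evaluation rule. my-eval is a hypothetical miniature of my own, not McCarthy's eval; it keeps only the QUOTE, CONS, and LAMBDA cases, and binds parameters to the raw argument expressions exactly as the transcription above does:

```lisp
;; A condensed sketch of the 1959 call-by-name rule: variables are
;; bound to argument *expressions* and re-evaluated at each reference.
(defun my-eval (e a)
  (cond ((atom e)
         ;; look up the stored expression, then EVAL it again
         (my-eval (cadr (assoc e a)) a))
        ((eq (car e) 'quote) (cadr e))
        ((eq (car e) 'cons)
         (cons (my-eval (cadr e) a)
               (my-eval (caddr e) a)))
        ((and (consp (car e)) (eq (caar e) 'lambda))
         ;; bind parameters to the unevaluated argument expressions
         (my-eval (caddar e)
                  (append (mapcar #'list (cadar e) (cdr e)) a)))))

(my-eval '((lambda (x) (cons x x))
           (cons (quote a) (quote b)))
         nil)
;; => ((A . B) A . B)
```

The car and cdr of the result are two distinct conses: the argument expression was evaluated twice, once per occurrence of X, which is exactly what call-by-name means here.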

By April 1960 we see these changes:

(defun eval (e a)
  (cond ((atom e) (assoc e a))
        ((atom (car e))
         (cond ((eq (car e) 'QUOTE) (cadr e))
               ((eq (car e) 'ATOM)  (atom (eval (cadr e) a)))
               ((eq (car e) 'EQ)    (eq (eval (cadr e) a) (eval (caddr e) a)))
               ((eq (car e) 'COND)  (evcon (cdr e) a))
               ((eq (car e) 'CAR)   (car (eval (cadr e) a)))
               ((eq (car e) 'CDR)   (cdr (eval (cadr e) a)))
               ((eq (car e) 'CONS)  (cons (eval (cadr e) a) (eval (caddr e) a)))
               (t (eval (cons (assoc (car e) a) (evlis (cdr e) a)) a))))
        ((eq (caar e) 'LABEL)
         (eval (cons (caddar e) (cdr e))
               (cons (list (cadar e) (car e)) a)))
        ((eq (caar e) 'LAMBDA)
         (eval (caddar e)
               (append (pair (cadar e) (evlis (cdr e) a)) a)))))

Note how evaluating an atom now simply looks up the value of the atom in the association list, and evaluation of a combination of a lambda involves evaluating the arguments eagerly. This is a call-by-value interpreter.

Planet Lisp | 03-Apr-2021 18:12

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32):
Stand-alone uLisp computer (with code!)

Last Thursday, I started to use the M5Stack Faces keyboard I mentioned before and wrote a keyboard interpreter and REPL, so that this makes another little handheld, self-contained uLisp computer. Batteries are included, so this makes it stand-alone and take-along. :)

I have made this as a present for my nephew who just turned eight last Saturday. Let's see how this can be used to actually teach a bit of Lisp. The first programming language needs to be Lisp, of course!

Read the whole article.

Planet Lisp | 02-Apr-2021 14:56

Joe Marshall: Early LISP

In AI Memo 8 of the MIT Research Laboratory of Electronics (March 4, 1959), John McCarthy gives a definition of the universal S-function apply:

apply is defined by

apply[f;args]=eval[combine[f;args]]

eval is defined by

eval[e]=[
  first[e]=NULL→[null[eval[first[rest[e]]]]→T;1→F];
  first[e]=ATOM→[atom[eval[first[rest[e]]]]→T;1→F];
  first[e]=EQ→[eval[first[rest[e]]]=eval[first[rest[rest[e]]]]→T;1→F];
  first[e]=QUOTE→first[rest[e]];
  first[e]=FIRST→first[eval[first[rest[e]]]];
  first[e]=REST→rest[eval[first[rest[e]]]];
  first[e]=COMBINE→combine[eval[first[rest[e]]];eval[first[rest[rest[e]]]]];
  first[e]=COND→evcon[rest[e]];
  first[first[e]]=LAMBDA→evlam[first[rest[first[e]]];first[rest[rest[first[e]]]];rest[e]];
  first[first[e]]=LABELS→eval[combine[subst[first[e];first[rest[first[e]]];first[rest[rest[first[e]]]]];rest[e]]]]

where:

evcon[c]=[eval[first[first[c]]]=1→eval[first[rest[first[c]]]];
          1→evcon[rest[c]]]

and

evlam[vars;exp;args]=[null[vars]→eval[exp];
                      1→evlam[rest[vars];subst[first[vars];first[args];exp];rest[args]]]

McCarthy asserts that “if f is an S-expression for an S-function φ and args is a list of the form (arg1, …, argn) where arg1, …, argn are arbitrary S-expressions then apply[f,args] and φ(arg1, …, argn) are defined for the same values of arg1, … argn and are equal when defined.”

I find it hard to puzzle through these equations, so I've transcribed them into S-expressions to get the following:

;;; Hey Emacs, this is -*- Lisp -*-

(in-package "CL-USER")

;; Don't clobber the system definitions.
(shadow "APPLY")
(shadow "EVAL")

(defun apply (f args)
  (eval (combine f args)))

(defun eval (e)
  (cond ((eq (first e) 'NULL)
         (cond ((null (eval (first (rest e)))) t)
               (1 nil)))
        ((eq (first e) 'ATOM)
         (cond ((atom (eval (first (rest e)))) t)
               (1 nil)))
        ((eq (first e) 'EQ)
         (cond ((eq (eval (first (rest e)))
                    (eval (first (rest (rest e)))))
                t)
               (1 nil)))
        ((eq (first e) 'QUOTE) (first (rest e)))
        ((eq (first e) 'FIRST) (first (eval (first (rest e)))))
        ((eq (first e) 'REST)  (rest (eval (first (rest e)))))
        ((eq (first e) 'COMBINE)
         (combine (eval (first (rest e)))
                  (eval (first (rest (rest e))))))
        ((eq (first e) 'COND) (evcon (rest e)))
        ((eq (first (first e)) 'LAMBDA)
         (evlam (first (rest (first e)))
                (first (rest (rest (first e))))
                (rest e)))
        ((eq (first (first e)) 'LABELS)
         (eval (combine (subst (first e)
                               (first (rest (first e)))
                               (first (rest (rest (first e)))))
                        (rest e))))))

(defun evcon (c)
  (cond ((eval (first (first c))) (eval (first (rest (first c)))))
        (1 (evcon (rest c)))))

(defun evlam (vars exp args)
  (cond ((null vars) (eval exp))
        (1 (evlam (rest vars)
                  (subst (first args) (first vars) exp)
                  (rest args)))))

We just have to add a definition for combine as a synonym for cons and this should run:

(defun combine (left right) (cons left right))

* (eval '(eq (first (combine 'a 'b)) (first (combine 'a 'c))))
T

As Steve “Slug” Russell observed, eval is an interpreter for Lisp. This version of eval uses an interesting evaluation strategy. If you look carefully, you'll see that there is no conditional clause for handling variables. Instead, when a lambda expression appears as the operator in a combination, the body of the lambda expression is walked and the bound variables are substituted with the expressions (not the values!) that represent the arguments. This is directly inspired by β-reduction from lambda calculus.

This is buggy, as McCarthy soon discovered. In the errata published one week later, McCarthy points out that the substitution process doesn't respect quoting, as we can see here:

* (eval '((lambda (name)
            (combine 'your
                     (combine 'name
                              (combine 'is
                                       (combine name nil)))))
          'john))
(YOUR 'JOHN IS JOHN)

With a little thought, we can easily generate other name collisions. Notice, for example, that the substitution will happily substitute within the bound variable list of nested lambdas.

Substitution like this is inefficient. The body of the lambda is walked once for each bound variable to be substituted, then finally walked again to evaluate it. Later versions of Lisp will save the bound variables in an environment structure and substitute them incrementally during a single evaluation pass of the lambda body.

Planet Lisp | 29-Mar-2021 17:07

Jonathan Godbout: Cl-Protobufs Enumerations

In the last few posts we discussed family life, and before that we created a toy application using cl-protobufs and the ACE lisp libraries. Today we will dive deeper into the cl-protobufs library by looking at Enumerations. We will first discuss enumerations in Protocol Buffers, then we will discuss Lisp Protocol Buffer enums.


Most modern languages have a concept of enums. In C++, enumerations are compiled down to integers and you are free to use integer equality. For example:

enum Fish {
  salmon,
  trout,
};

int main() {
  std::cout << (salmon == 0) << std::endl;
}

will print 1 (true). This is in many ways wonderful: enums compile down to integers and there's no cost to using them. It is baked into the language!

Protocol Buffers are available for many languages, not just C++. You can find the documentation for Protocol Buffer enums in the protobuf language guide, which notes:

Each language has its own way to support enumeration types. Languages like C++ and Java, which have built-in support for enumeration types, can treat protobuf enums like any other enum. The above enum could be written (with some caveats) in Protocol Buffer as:

enum Fish {
  salmon = 0;
  trout = 1;
}

You should be careful though: protoc will give a compile warning that the enum value 0 should be a default value, so

enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

is preferred.

Let’s get into some detail for the two variants of Protocol Buffers in use.

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

message Meal {
  {optional} Fish fish = 1;
}

The `optional` label will only be written for proto 2.

Proto 2:

In proto 2 we can always tell whether `fish` was set. If the field has the `required` label then it must be set, by definition. (But the `required` label is considered harmful; don't use it.) If the field has an `optional` label then we can check if it has been set or not, so again a default value isn't necessary.

If the enum is updated to:

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
  tilapia = 3;
}

and someone sends fish = tilapia to a system where tilapia isn't a valid entry, the library is allowed to do whatever it wants! In Java it sets the field to the first entry, so it would be default!

Proto 3:

In proto3, if the value of `fish` is not set, calling its accessor will return the default value, which is always the zero value. There is no way to check whether the field was explicitly set. A default value (i.e., a name that maps to the value zero) must always be given, else the user will get a compile error.

If the Fish enum was updated to contain tilapia as above, and someone sent a proto message containing tilapia to a system with an older program that had the message not containing tilapia, the deserializer should save the enum value. That is, the underlying data structure should know it received a "3" for the fish field in Meal. How the accessors return this value is language dependent. Re-serializing the message should preserve this "unrecognized" value.

A common example is: A gateway system wants to do something with the message and then forward it to another system. Even though the middle system has an older schema for the Fish message it needs to forward all the data to the downstream system.


Now that we understand the basics of enumerations, it is important to understand how cl-protobufs records enumeration values.

Lisp as a language does not have a concept of enumerations; what it does understand is keywords. Taking Fish as above and running protoc, we will get (see the readme):

(deftype fish () '(member :default :salmon :trout))

(defun fish-to-int (keyword)
  (ecase keyword
    (:default 0)
    (:salmon 1)
    (:trout 2)))

(defun int-to-fish (int)
  (ecase int
    (0 :default)
    (1 :salmon)
    (2 :trout)))

Looking at the tilapia example, the enum deserializer preserves the unknown field in both proto2 and proto3. Calling an accessor on a field containing an unknown value will return :%undefined-n. So for tilapia we will see :%undefined-3.

Warning: To get this to work properly we have to remove type checks from protocol buffer enumerations. You can set the field value in a lisp protocol buffer message to any keyword you want, but you will get a serialization error when you try to serialize. This was a long discussion internally, but that design discussion could turn into a blog post of its own.


The enumeration fields in cl-protobufs are fully proto2 and proto3 compliant. To do this we had to remove type checking. As a consumer, it is suggested that you always type check and handle undefined enumeration values in your usage of protocol buffer enums. We give you a deftype to easily check.
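As a sketch of that defensive check, here is one way to use the generated deftype (describe-fish is a hypothetical helper of mine, not part of cl-protobufs; the deftype is spelled with the empty lambda-list and member type specifier that Common Lisp requires):

```lisp
;; Type covering only the enum values known at compile time.
(deftype fish () '(member :default :salmon :trout))

;; Defensive handling of a value that may be an unknown enum entry,
;; such as :%undefined-3 arriving from a newer schema.
(defun describe-fish (keyword)
  (if (typep keyword 'fish)
      (format nil "known fish: ~a" keyword)
      (format nil "unknown enum value: ~a" keyword)))
```

For example, (describe-fish :salmon) returns "known fish: SALMON", while (describe-fish :%undefined-3) returns "unknown enum value: %UNDEFINED-3".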

I hope you have enjoyed this deep dive into cl-protobufs enums. We strive to remove as many gotchas as possible.

Thanks to Ron and Carl for the continual copy edits and improvements!

Planet Lisp | 28-Mar-2021 04:30

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32):
temperature sensors via one wire

I added support for Dallas temperature sensors to ulisp-esp-m5stack. Activate #define enable_dallastemp in order to use it. It is based on the Arduino libraries OneWire.h and DallasTemperature.h.

I used pin 16 to connect my sensors but you can change ONE_WIRE_BUS to use a different pin. As the OneWire library uses simple bit banging and no hardware support such as a UART, any general-purpose input/output (GPIO) pin will work.

The interface consists of four uLisp functions: INIT-TEMP, GET-TEMP, SET-TEMP-RESOLUTION, and GET-TEMP-DEVICES-COUNT. Here is their documentation:

Function init-temp
Syntax: init-temp => result-list

Arguments and values:
   result-list---a list of device addresses; each address being a list of 8 integer values.

   Detects all supported temperature sensors connected via one wire bus to the pin ONE_WIRE_BUS and returns the list of the sensors' device addresses.

   All sensors are configured to use the resolution specified by DEFAULT_TEMPERATURE_PRECISION via a broadcast. Note that a sensor might choose a different resolution if the desired resolution is not supported. See also: set-temp-resolution.

Function get-temp
Syntax: get-temp address => temperature

Arguments and values:
   address---a list of 8 integer values specifying a device address.

   temperature---an integer value; the measured temperature in Celsius.

   Requests the sensor specified by address to measure and compute a new temperature reading, retrieves the value from the sensor device and returns the temperature in Celsius.

Function set-temp-resolution
Syntax: set-temp-resolution address [resolution] => actual-resolution

Arguments and values:
   address---a list of 8 integer values specifying a device address.

   resolution---an integer value.

   actual-resolution---an integer value.

   Tries to configure the sensor specified by address to use the given resolution and returns the actual resolution that the device is set to after the attempt.

   Note that a sensor might choose a different resolution if the desired resolution is not supported. In this case, the returned actual-resolution differs from the argument resolution.

If the argument resolution is missing, the default given by DEFAULT_TEMPERATURE_PRECISION is used instead.

Function get-temp-devices-count
Syntax: get-temp-devices-count => count

Arguments and values:
   count---an integer value; the number of detected supported temperature sensors.

   Returns the number of temperature sensors supported by this interface that were detected by the last call to INIT-TEMP. Note that this might not be the correct current count if sensors were removed or added since the last call to INIT-TEMP.
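Putting the four functions together, a typical uLisp session might look like this (the device address shown is made up, and actual readings depend on your sensors):

```lisp
; Detect all temperature sensors on the one wire bus.
(defvar addrs (init-temp))
; e.g. ((40 255 100 2 236 103 170 22)) for a single sensor

; Use 10-bit resolution on the first sensor for a faster conversion time.
(set-temp-resolution (first addrs) 10)

; Request and print a reading from every detected sensor.
(dolist (a addrs)
  (print (get-temp a)))
```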

Findings from reading DallasTemperature.h and DallasTemperature.cpp

These are the notes I wrote down when reading the source code of the Dallas temperature sensor library and my conclusion how to best use it which lead to my implementation for uLisp.

1. The process of counting the number of devices is efficiently done in parallel by a binary tree algorithm.

2. The result of the search is the number of devices with their addresses.

3. The DallasTemperature library keeps only a count of devices and a count of supported temperature sensors (ds18Count) in memory, not an indexed list of addresses. This is done in DallasTemperature::begin() by doing a search but only the counts are kept, no addresses are stored. Sadly, it also does not return anything.

4. getAddress() does a search again to determine the address for a device index. So it is faster to get a sensor reading by using the address, not the index; it saves one search.

5. Sadly, there is no command to get a list of all addresses in one go. So at least once you have to call getAddress() to actually get the addresses of all devices.

5. requestTemperature() can be applied to a single device only or to all devices in parallel. It is as fast to request a temperature from all devices as only one device.

6. Actually getting the temperature reading works only one at a time. getTemp*(deviceAddress) is faster than getTemp*ByIndex(index) as the latter has to do a search first (see 4.).

7. There are these temperature resolutions: 9, 10, 11, and 12 bits. The conversion (=reading) times are:
9 bit – 94 ms
10 bit – 188 ms
11 bit – 375 ms
12 bit – 750 ms

8. setResolution() can either set all devices in parallel or only set one device at a time (only by address, there is no setResultionByIndex()).

9. The temperatures are internally stored in 1/128 degree steps. This is the "raw" readings returned by DallasTemperature::getTemp() as int16_t.

DallasTemperature::getTempC returns "(float) raw * 0.0078125f" and
DallasTemperature::getTempF returns "((float) raw * 0.0140625f) + 32.0f".

In case of an error,
getTempC() will return DEVICE_DISCONNECTED_C which is "(float)-127",
getTempF() will return DEVICE_DISCONNECTED_F which is "(float)-196.6", and
getTemp() will return DEVICE_DISCONNECTED_RAW which is "(int16_t)-7040", respectively.

10. If you don't need the actual temperature but just to monitor that the temperature is in a defined range, it is not necessary to read the temperatures at all (which has to happen one sensor at a time). Instead, you can use the alarm signaling.

For that, you can set a high and a low alarm temperature per device and then you can do an alarm search to determine in parallel if there are sensors with alarms. The range can be half open, that is you can also only define high and low alarm temperatures.

DallasTemperature::alarmSearch() returns one device address with an alarm at a time. It is also possible to install an alarm handler and then call DallasTemperature::processAlarms() which will do repeated alarm searches and call the handler for each device with an alarm.

11. isConnected(deviceAddress) can be used to determine if a certain sensor is still available. It will return quickly when it is not but transfer a full sensor reading in case it is still available. The library currently does not support a case where parallel search is used to determine if known devices are still present.

12. The search is deterministic, it seems, so as long as you don't change sensors, the indices stay the same. If you add and remove a sensor, existing sensor might get new indices. So it seems actually not to be safe to use *ByIndex() functions.

13. getDeviceCount() gives you the number of all devices, getDS18Count() the number of all supported DS18 sensors. But no function gives you the list of indices or addresses of all supported DS18 sensors.

validFamily(deviceAddress) lets you check by address if a device is supported. Supported are DS18S20MODEL (also DS1820), DS18B20MODEL (also MAX31820), DS1822MODEL, DS1825MODEL, and DS28EA00MODEL.

getAddress() just checks whether the address is valid (using validAddress(deviceAddress)) but not whether the device is actually known. As getAddress() already calls validAddress() for you, there should be no need to ever call validAddress() from user code. If you simply request a temperature from every index up to getDeviceCount() - 1, you'll also send requests to unsupported devices.

In conclusion, this seems to be the best approach to setup all devices:

  1. Call getDS18Count() once to determine whether there are any supported temperature sensors at all.
  2. Iterate over all devices, that is, from index "0" up to "getDeviceCount() - 1".
  3. Call getAddress() for each index (this will also check validAddress())
  4. and then call validFamily() for the address.
  5. If validFamily() returns true, store the address for later temperature readings.
  6. This is also a good time to call setResolution(), since by default each device keeps its individual resolution, which may differ if you have sensors of different kinds. Either call setResolution(newResolution) once to set all devices in parallel, or call setResolution(address, newResolution) in the loop right after each call to validFamily() to set up individual resolutions.

To read sensor values:

  1. Call requestTemperatures() to request all sensors to do new readings in parallel,
  2. then iterate over the stored list of DS18 addresses and
  3. call getTempC(address), getTempF(address), or getTemp(address) for each address and
  4. check for error return values (see Finding 9.).

Note: getTempC() and getTempF() call getTemp() internally, and getTemp() in turn uses isConnected(). So there should be no need to call isConnected() from user code if you check for the error return values of these functions (see Finding 9).
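Putting the setup and read lists together, a minimal Arduino sketch following this approach might look as follows (the data pin, buffer size, and 12-bit resolution are assumptions, not from the findings above):

```cpp
#include <OneWire.h>
#include <DallasTemperature.h>

#define MAX_SENSORS 8                // an assumption; size for your bus

OneWire oneWire(4);                  // data pin 4 is an assumption
DallasTemperature sensors(&oneWire);

DeviceAddress ds18[MAX_SENSORS];     // addresses of supported DS18 sensors
uint8_t ds18Count = 0;

void setup() {
    Serial.begin(115200);
    sensors.begin();
    if (sensors.getDS18Count() == 0) return;       // no supported sensors at all
    for (uint8_t i = 0; i < sensors.getDeviceCount() && ds18Count < MAX_SENSORS; i++) {
        DeviceAddress addr;
        // getAddress() also checks validAddress(); validFamily() filters DS18s.
        if (sensors.getAddress(addr, i) && sensors.validFamily(addr)) {
            memcpy(ds18[ds18Count++], addr, sizeof(DeviceAddress));
            sensors.setResolution(addr, 12);       // or setResolution(12) once for all
        }
    }
}

void loop() {
    sensors.requestTemperatures();                 // all sensors convert in parallel
    for (uint8_t i = 0; i < ds18Count; i++) {
        float c = sensors.getTempC(ds18[i]);
        if (c == DEVICE_DISCONNECTED_C) continue;  // error value, see Finding 9
        Serial.println(c);
    }
    delay(1000);
}
```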

This is the last thing I promised to release in my previous post of February 15, 2021. Documentation takes time! But I programmed new features last Thursday so stay tuned.

See also "Curl/Wget for uLisp", time via NTP, lispstring without escaping and more space, flash support, muting of the speaker and backlight control and uLisp on M5Stack (ESP32).

Read the whole article.

Planet Lisp | 27-Mar-2021 21:18

Alexander Artemenko: litterae

This is yet another Common Lisp documentation builder.

It renders beautiful one-page docs. Sadly, Litterae itself has no documentation. I've used it to document Teddy and also created a template project for you:

Why Litterae can be interesting for you:

  • It uses as documentation source.
  • Litterae builds API reference from docstrings.
  • It provides four color themes.
  • It uses LSX for templating and probably there is a way to create a custom HTML.

However, there are many problems with Litterae. It does not seem mature enough to use in my projects. One feature I miss is cross-referencing; that is the main reason why I won't use it for my libraries.

Planet Lisp | 25-Mar-2021 22:55

Didier Verna: Clon 1.0b25 is out

Today, I'm releasing the next beta version of Clon, my command-line options management library.

The previous official release occurred 6 years ago. Since then, a number of changes had been quietly sleeping in the trunk but never made their way into Quicklisp. More recently, I have also applied a number of changes that are worth mentioning here.

First of all, a large part of the infrastructure has been updated, following the evolution of the 8 supported compilers, and that of ASDF and CFFI as well. This should normally be transparent to the user though, provided that one uses a reasonably recent compiler / ASDF version ("reasonably" intentionally left undefined). Other than that...

  • The constraints on termio support auto-detection had become slightly too restrictive, so they have been relaxed.
  • The exit function has been deprecated in favor of uiop:quit.
  • The support for running in scripts rather than in dumped executables has been improved, notably by offering the possibility to provide an alternate program name when argv0 is not satisfactory.
  • Clon is now compatible with executables dumped via ASDF's program-op operation, or dumped natively. The demonstration programs in the distribution have been updated to illustrate both dumping methods (ASDF, and Clon's dump function).
  • The documentation on application delivery has been largely rewritten, and has become a full chapter rather than a thin appendix.

There are also a few bug fixes in this release.

  • Several custom readtable problems have been fixed for CCL, CLISP, and ECL (thanks to Thomas Fitzsimmons). Note that Clon depends on named-readtables now.
  • Clon now compiles its termio support correctly with a C++ based ECL (thanks to Pritam Baral).
  • One problem in the conversion protocol for path options has been corrected (thanks to Olivier Certner).

All entry points are on Clon's web page.


Planet Lisp | 24-Mar-2021 01:00

Marco Antoniotti: HEΛPing ASDF

 ... more fixing and, ça va sans dire, more creeping features.

I got prodded to integrate HEΛP with other tools; mostly, of course, ASDF.  A simple solution was to define a document-op for a system.  After jumping through a few hoops, the solution was to use the :properties of a system to pile up arguments for the main HEΛP document function (well, only one for the time being).  Bottom line, suppose you have:

(asdf:defsystem "foosys" :pathname #P"D:/Common Lisp/Systems/foosys/")

now you just issue

(asdf:operate 'hlp:document-op "foosys")

and the documentation for the system "foosys" will appear in the "docs/html/" subfolder.

If you want to pass a title to the document function, you set up your system as:

(asdf:defsystem "foosys" :properties (:documentation-title "The FOO Omnipotent Tool" ) :pathname #P"D:/Common Lisp/Systems/foosys/")

and the parameter will be used (instead of the bare system name).

It works! 

Some more fixing and more extensions may be needed (hlp:document takes a lot of parameters) but it is already usable.

All the necessary bits and pieces are in the HEΛP repository, and they should get into Quicklisp in the next release.



Planet Lisp | 20-Mar-2021 12:02

Marco Antoniotti: Need more HEΛP?

Just a quick note for people following these... parentheses.

I have carved out some time to do some more Lisp hacking and this lead me to look at the very nice usocket library (I want to do some network programming).  The usocket library documentation page has a bit of an "old" and "handcrafted" look and feel to it, so I tried to produce a version of the documentation with help from my HEΛP library.

Well, it turns out that usocket has some more than legitimate code within it that my HEΛP library was not handling; even worse, it unearthed a bug in the Lambda List parsing routines.

As an example, usocket uses the following idiom to set some of the documentation strings.

(setf (documentation 'fun 'function) "Ain't this fun?")

This is perfectly fine, but it needed some extra twist to get HEΛP do what is, IMHO, the right thing: in this case it meant ensuring that the lambda list of the function was properly rendered in the final documentation.

Apart from that, a few not so nice buglets were exposed in the code parsing lambda lists.  The result is that now the logic of that piece of code is simpler and somewhat cleaner.

So, if you want to get HEΛP to document your Common Lisp code, give it a spin.


Planet Lisp | 16-Mar-2021 19:40

Nicolas Hafner: Going Underground - March Kandria Update

I can't believe it's been two months already since the year started. Time moves extremely quickly these days. Anyway, we have some solid progress to show, and some important announcements to make this month, so strap in!

Overall progress

Last month was a big update with a lot of new content, particularly all the custom buildings Fred and I had put together to build the surface camp. This month involved a lot more of that, but for the first underground region. This region is still very close to the surface, so it'll be composed out of a mix of ruins of modern corporate architecture, and natural caves.

As before, figuring out a fitting style was very challenging, even disregarding the fact that it has to be in ruins as well. Still, I think what we put together, especially combined with Kandria's lighting system, creates a great amount of atmosphere and evokes that feeling of eerie wonder that I've always wanted to hit.

Mushrooms are a big part of the ecosystem in Kandria, being the primary food source for the underground dwellers, so I couldn't resist adding giant mushrooms to the caves.

On the coding side there's been a bunch of bugfixing and general improvement going on. The movement AI can now traverse the deep underground regions seemingly without problem. Game startup speed is massively improved thanks to some caching of the movement data, and NPCs can now climb ropes and use teleporters when navigating.

We've also spent some time working on the combat again, adding some extra bits that, while seemingly small, change the feel quite a lot. Attacks now have a cooldown that forces you to consider the timing, and inputs are no longer buffered for the entire duration of an animation, which eliminates the feeling of lag that was prevalent before. Fred also tuned some of the player's attack animations some more and while I couldn't tell you what exactly changed, when I first tried it out I immediately noticed that it felt a lot better!

All of this just further reaffirms my belief that making a good combat system involves a ton of extremely subtle changes that you wouldn't notice at all unless you did a frame-by-frame analysis. It all lies in the intuition the system builds up within you, which makes it hard to tune. I'm sure we'll need to do more rounds of tuning like that as we progress.

Then I've also reinstated the wolf enemy that I first worked on close to a year ago. The AI is a lot simpler now, but it also actually works a lot better. It's still a bit weird though, especially when interacting with slopes and obstacles, but it does make for a nice change of pace compared to the zombie enemy. We'll have to see how things turn out when they're placed in the context of actual exploration and quests, though.

Another feature I resurrected and finally got to work right is the ability to save and load regions from zip files. This makes it easy to exchange custom levels. The editor used for the game is shipped with the game and always available at the press of a button, so we're hoping to use that in combination with the zip capability to organise a small level design contest within the community. We'll probably launch that in April, once the new demo hits. If you like building or playing levels, keep an eye out!

The biggest chunk of work this month went into doing level design. I've been putting that off for ages and ages, because it's one of those things that I'm not very familiar with myself, so it seems very daunting. I don't really know where to start or how to effectively break down all the constraints and requirements and actually start building a level around them, let alone a level that's also fun to traverse and interesting to look at! It was so daunting to me in fact, that I couldn't work on anything at all for one day because I was just stuck in a sort of stupor.

Whatever the case though, the only way to break this mould and get experience, and thus some confidence and ability in making levels, is to actually do it. I've put together the first part of the first region now, though it's all still very rough and needs a ton more detail and playtesting.

The part above is the surface settlement, with the city ruins to the right. Below the camp lies the central hub of the first region, which links up to a variety of different rooms - an office, a market, an apartment complex, and several natural caves that formed during the calamity. The sections below the city ruins don't belong to the slice, but will be part of the full "first act demo" that we plan to release some months after the slice.

Even with all the tooling I built that allows you to easily drag out geometry and automatically tile a large chunk of it, it still takes a ton of time to place all the little details like chairs, doors, railings, machines, plants, broken rubble, background elements, to vary the elements and break up repetition, etc. It also takes a lot of extra effort to ensure that the tiles work correctly in this pseudo-isometric view we have going on for the rooms. Still, the rooms do look a lot better like this than they did with my initial head-on view, so I think we'll stick with it even if it costs us more time to build.


I've finally gotten around to documenting the dialogue system I've developed for Kandria. I've given it the name Speechless, since it's based on Markless. It's designed to be engine-independent, so if you have your own game in Lisp and need a capable dialogue logic system, you should be able to make use of it. If you do, please tell me about it, I'd be all ears!

I'd also be interested to hear from other narrative designers on what they think of it. I can't say I'm familiar with the tools that are used in other engines - a lot of it seems to be in-house, and frequently based around flow-charts from what I can tell. Having things completely in text does remove some of the visual clarity, but I think it also makes it a lot quicker to put things together.

Now, I know that the Lisp scene is very small, and the games scene within it even smaller, so I don't think Speechless will gain much traction, but even if it itself won't, I hope that seeing something like it will at least inspire some to build similar systems, as I think this text based workflow can be extremely effective.

Hiring a musician

I'm hiring again! Now that Kandria's world is properly coming together it's time to look at a composer to start with a soundtrack to really bring the world to life. Music is extremely important to me, so I wanted to wait until we had enough of the visuals together to properly inform the mood and atmosphere. I'm still having a lot of trouble imagining what the world should actually sound like, and there's a broad range of music I like, so I hope that I can find someone that can not only produce a quality score, but also help figure out the exact sound aesthetics to go for.

If you are a musician, or know musicians that are looking for work, have a look at the listing!

Tim's recount

Quests quests and quests! I've got the core gameplay scripting done for most of the vertical slice quests now. The last couple are still using placeholder dialogue, but for the others I've done several drafts in the voice of the characters, sprinkling in player choices here and there and yeah - it feels like it's coming together. Hopefully it's familiarising the player with the characters, their unique voices, and their motivations, whilst keeping the gameplay and plot momentum moving forwards. I've now written in anger for all of the main hub characters, and feel like I'm getting into their headspace.

Some of the scripting functionality has been more complex than I anticipated - but with help from Nick creating new convenience functions, and showing me the best way to structure things, I feel like I've gotten most of the design patterns down now that I'm going to need going forwards.

The rest of the month will involve rounding out these quests, iterating on feedback, and transposing the triggers (which are still using debug locations) into the main region layout.

Fred's recount

Quite a lot of character anims in! It'll be exciting to see the camp characters come to life in the game and not just in my animation software. 

This month feels like an important milestone in making Kandria's world more immersive. There's still more work to do on the buildings and on getting convincing yet fun-to-explore ruins, but overall it feels like a lot of stuff is coming together.

The future

This is the last month we had in our plan for the vertical slice. Unfortunately it turns out that we had way underestimated the amount of time it would take to create the required tilesets and design the levels. Still, it seems much more important to avoid crunch, and to deliver a quality slice, so we're looking to extend the deadline.

We'll still try to release an early slice for our testers by the end of this month, but then we'll take two additional weeks for bugfixing and polish, so the updated public demo should be out mid-April. We'll be sure to make an announcement when it comes out or if there's other problems that'll further delay it. Please bear with us!

The remainder of April though we're planning to completely switch gears away from Kandria and catch a mental breather. We'll instead work on a new, very small jam project, that we hope to build and release within the two weeks. We're not entirely certain yet what exactly we'll do, but it should be a lot of fun to do a jam again one of these days.

As always, thank you very much for reading and in general for your interest in Kandria! Starting from scratch like we are (in multiple ways at that!) isn't easy, and it's been really nice to see people respond and support the project.

If you'd like to support us, it would help a lot to wishlist Kandria on Steam, and to join the Discord! There's also a lot of additional information on the development and our current thoughts in the weekly mailing list updates and my Twitter.

Planet Lisp | 06-Mar-2021 14:33

Quicklisp news: February 2021 Quicklisp dist update now available

 New projects

  • audio-tag — tool to deal with audio tags. read and write — BSD-2-Clause License
  • canonicalized-initargs — Provides a :canonicalize slot option accepting an initarg canonicalization function. — Unlicense
  • cl-debug-print — A reader-macro for debug print — MIT
  • cl-json-schema — Describe cl-json-schema here — Specify license here
  • cl-ses4 — AWS SES email sender using Signature Version 4 of Amazon's API — Public Domain
  • cl-telebot — Common Lisp Telegram Bot API — MIT
  • consfigurator — Lisp declarative configuration management system — GPL-3+
  • cricket — A library for generating and manipulating coherent noise — MIT
  • cubic-bezier — A library for constructing and evaluating cubic Bézier curve paths. — MIT
  • defconfig — A configuration system for user exposed variables — GPLv3
  • enhanced-defclass — Provides a truly extensible version of DEFCLASS that can accurately control the expansion according to the metaclass and automatically detect the suitable metaclass by analyzing the DEFCLASS form. — Unlicense
  • freesound — A client for — MIT
  • mnas-graph — Defines basic functions for creating a graph data structure and displaying it via graphviz. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • mnas-hash-table — Defines some functions for working with hash tables. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • nyaml — Native YAML Parser — MIT
  • pvars — easily define persistent variables — MIT
  • random-uuid — Create and parse RFC-4122 UUID version 4 identifiers. — MIT
  • sanity-clause — Sanity clause is a data contract and validation library. — LGPLv3
  • seedable-rng — A seedable random number generator. — MIT
  • slot-extra-options — Extra options for slots using MOP. — LGPL-3.0-or-later
  • tailrec — Guaranteed tail call optimization. — LLGPL
  • tfeb-lisp-tools — TFEB.ORG Lisp tools — MIT

Updated projects: algae, april, async-process, black-tie, cepl, cl+ssl, cl-ana, cl-async, cl-change-case, cl-coveralls, cl-data-structures, cl-dbi, cl-fxml, cl-grip, cl-gserver, cl-html-readme, cl-ipfs-api2, cl-kraken, cl-liballegro-nuklear, cl-libusb, cl-patterns, cl-pdf, cl-prevalence, cl-reexport, cl-shlex, cl-smtp, cl-string-generator, cl-threadpool, cl-typesetting, cl-unicode, cl-utils, cl-webkit, cl-yesql, clog, closer-mop, clsql, cmd, common-lisp-jupyter, core, cover, croatoan, datum-comments, defenum, dexador, easy-audio, eclector, fast-websocket, feeder, file-select, flare, float-features, freebsd-sysctl, functional-trees, fxml, geco, gendl, gtirb-capstone, gtirb-functions, gtwiwtg, harmony, hu.dwim.bluez, hu.dwim.common-lisp, hu.dwim.defclass-star, hu.dwim.logger, hu.dwim.quasi-quote, hu.dwim.reiterate, hu.dwim.sdl, hu.dwim.walker, hu.dwim.zlib, hunchenissr, iterate, jingoh, lass, lichat-protocol, linear-programming, lisp-chat, lmdb, magicl, maiden, mailgun, mcclim, mgl-pax, mito, monomyth, named-read-macros, nodgui, num-utils, numcl, open-location-code, origin, orizuru-orm, osicat, periods, petalisp, plump-sexp, portal, py4cl, py4cl2, qlot, quri, read-as-string, repl-utilities, rpcq, rutils, s-sysdeps, sel, select, serapeum, shared-preferences, sly, spinneret, studio-client, stumpwm, ten, trivia, trivial-clipboard, trivial-features, ttt, uax-15, ucons, umlisp, uncursed, utm-ups, with-contexts, zacl, zippy.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Planet Lisp | 28-Feb-2021 23:36

Eric Timmons: Static Executables with SBCL v2

It's taken me much longer than I hoped, but I finally have a second version of my patches to build static executables tested and ready to go! This set of patches vastly improves upon the first by reducing the amount of compilation needed at the cost of sacrificing a little purity. Additionally I have created a system that automates the process of building a static executable, along with other release related tasks.

At a Glance
  • The new patch set can be found on the static-executable-v2 branch of my SBCL fork or at$VERSION/static-executable-support-v2.patch with a detached signature available at$VERSION/static-executable-support-v2.patch.asc signed with GPG key 0x9ACF6934.
  • You'll definitely want to build SBCL with the :sb-prelink-linkage-table feature (newly added by the patch). You'll probably also want the :sb-linkable-runtime feature (already exists, but the patch also enables it on arm/arm64).
  • The new patch lets you build a static executable with less compilation of Lisp code.
  • The asdf-release-ops system automates the process of building a static executable by tying it into ASDF.
What's New?

If you need a refresher about what static executables are or what use cases they're good for, see my previous post on this topic.

With my previous patch, the only way you could create a static executable was to perform the following steps:

  1. Determine the foreign symbols needed by your code. The easiest way to do this is to compile all your Lisp code and then dump the information from the image.
  2. From that list of foreign symbols, create a C file that fills an array with references to those symbols.
  3. Recompile the SBCL core and runtime with this new file, additionally disabling libdl support and linking against your foreign libraries.
  4. (Re)compile all your Lisp code with the new runtime (if you made an image in step 1 it will not be compatible with the new runtime due to feature and build ID mismatches).
  5. Dump the executable.

In the most general case, this involved compiling your entire Lisp image twice. After some #lisp discussions, I realized there was a better way of doing this. While the previous process still works, the new recommended process now looks like:

  1. Build the image you would like to make into a static executable and save it.
  2. Dump the foreign symbol info from this image and write the C file that SBCL can use to prelink itself.
  3. Compile that C file and link it into an existing sbcl.o file to make a new runtime. sbcl.o is the SBCL runtime in object form, created when building with the :sb-linkable-runtime feature.
  4. Load the image from step 1 into your new runtime. It will be compatible because the build ID and feature set are the same!
  5. Dump your now static executable.

This new process can significantly reduce the amount of time needed to make an executable. Plus it lets you take more advantage of image based development. It's fairly trivial to build an image exactly like you want, dump it, and then pair it with a custom static runtime to make a static executable.

There were two primary challenges that needed to be overcome for this version of the patch set.

First, the SBCL core had to be made robust to every libdl function unconditionally returning an error. Since we want the feature set to remain constant, we can't recompile the runtime with #-os-provides-dlopen. Instead, we take advantage of the fact that Musl libc lets you link static executables against libdl, but all those functions are noops. This is the "purity" sacrifice I alluded to above.

Second, since we are reusing an image, the prelink info table (the generated C file) needed to order the symbols exactly as the image expects them to be ordered. The tricky bit here is that some libraries (like cl-plus-ssl) add symbols to the linkage table that will always be undefined. cl-plus-ssl does this in order to support a wide range of openssl versions. The previous patch set unconditionally filtered out undefined symbols, which horribly broke things in the new approach.

More Documentation

As before, after applying the patch you'll find a README.static-executable file in the root of the repo. You'll also find a Dockerfile and an example of how to use it in the README.static-executable.

You can also check out the tests and documentation in the asdf-release-ops system.

Known Issues
  • The :sb-prelink-linkage-table feature does not work on 32-bit ARM + Musl libc >= 1.2. Musl switched to 64-bit time under the hood while still maintaining compatibility with everything compiled for 32-bit time.

The issue is how they maintained backwards compatibility. Every time-related symbol still exists and implements everything on top of the 32-bit time interface. However, if you include the standard header file where the symbol is defined, or you look up the symbol via dlsym, you actually get a pointer to the 64-bit time version of the symbol. We can't use dlsym (it doesn't work in static executables), and the generated C file doesn't include any headers.

This could be fixed if someone is motivated enough to create or find a complete, easy-to-use map between libc symbols and the headers that define them and integrate it into the prelink info generator.

  • The :sb-prelink-linkage-table feature works on Windows but causes test failures. The root issue is that mingw64 has implemented its own libm. Its trig functions are fast, but use inaccurate instructions (like FSIN) under the hood. When prelinking, these inaccurate implementations are used instead of the more accurate ones (from msvcrt.dll?) found when using dlsym to look up the symbol.
Next Steps
  1. I would love to get feedback on this approach and any ideas on how to improve it! Please drop me a line (etimmons on Freenode or daewok on Github/Gitlab) if you have suggestions.

  2. I've already incorporated static executables into CLPM and will be distributing them starting with v0.4.0! I'm going to continue rolling out static executables in my other projects.

  3. Pieces of the patch set are now solid enough that I think they can be submitted for upstream consideration. I'll start sending them after the current 2.1.2 freeze.

Planet Lisp | 24-Feb-2021 13:50

Max-Gerd Retzlaff: "Curl/Wget for uLisp"
Or: An HTTP(s) get/post/put function for uLisp

Oh, I forgot to continue posting… I just published a quite comprehensive HTTP function supporting put, post, get, auth, HTTP and HTTPS, and more for uLisp at ulisp-esp-m5stack.

Activate #define enable_http and #define enable_http_keywords to get it; the keywords used by the http function are to be enabled separately as they might be used more generally and not just by this function.

Note that you need to connect to the internet first. Usually with WIFI-CONNECT.

Here is the full documentation with example calls:

Syntax: http url &key verbose (https t) auth (user default_username) (password default_password) accept content-type (method :get) data => result-string

Arguments and values:
   verbose---t, or nil (the default); also affects debug output of the argument decoding itself and should be put in first position in a call for full effect.

   https---t (the default), nil, or a certificate as string; uses the default certificate in C string root_ca if true; url needs to match: "https://..." for true and "http://..." for false.

   auth---t, or nil (the default).

   user---a string, or nil (the default); uses default value in C string default_username if nil; only used if :auth t.

   password---a string, or nil (the default); uses default value in C string default_password if nil; only used if :auth t.

   accept---nil (the default), or a string.

   content-type---nil (the default), or a string.

   method---:get (the default), :put, or :post.

   data---nil (the default), or a string; only necessary in case of :method :put or :method :post; an error for :method :get.

Examples:

;; HTTP GET:
(http "" :https nil)

;; HTTP PUT:
(http "" :https nil
      :accept "application/n-quads" :content-type "application/n-quads"
      :auth t :user "foo" :password "bar"
      :method :put :data (format nil " \"~a\" .~%" (get-time)))

It can be tested with a minimal HTTP server simulation using bash and netcat:

while true; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l -p 2342 -q 1; done

(To test with HTTPS in a similar fashion you can use openssl s_server, as explained, for example, in the article Create a simple HTTPS server with OPENSSL S_SERVER by Joris Visscher on July 22, 2015, but then you need to use certificates.)

See also Again more features for uLisp on M5Stack (ESP32): time via NTP, lispstring without escaping and more space; More features for uLisp on M5Stack (ESP32): flash support, muting of the speaker and backlight control; and uLisp on M5Stack (ESP32).

Read the whole article.

Planet Lisp | 23-Feb-2021 12:37

Max-Gerd Retzlaff: Again more features for uLisp on M5Stack (ESP32)

I just pushed three small things to ulisp-esp-m5stack: getting time via NTP, an optional escape parameter for the function lispstring, and increased WORKSPACESIZE and SYMBOLTABLESIZE for M5Stack.

Getting time via NTP

Enable #define enable_ntptime to get time via NTP. New functions: INIT-NTP and GET-TIME. Note that you need to connect to the internet first, usually with WIFI-CONNECT.
Syntax: INIT-NTP -> nil

Initializes and configures NTP.

Syntax: GET-TIME -> timestamp

Returns a timestamp in the format of xsd:dateTime.

Add optional escape parameter to function lispstring

I have changed the function lispstring to take an optional escape parameter that switches off the default behavior of handling the backslash escape character. The default behavior is unchanged.

The C function lispstring takes a C char* string and returns a uLisp string object. When parsing data in the n-triples format retrieved via HTTP, I noticed that the data got modified already by lispstring, which broke my parser implemented in uLisp.

As lispstring might be used in other contexts that expect this behavior, I just added the option to switch the un-escaping off.

Increased WORKSPACESIZE and SYMBOLTABLESIZE for M5Stack

The M5Stack ESP32 has 320 kB of usable DRAM in total, although with a lot of restrictions (see next section). I increased WORKSPACESIZE to 9000 cells, which equals 72,000 bytes, and SYMBOLTABLESIZE to 2048 bytes. These sizes seem to still work safely even with bigger applications and a lot of consing.

Warning: You cannot load images created with different settings!

The SRAM of the M5Stack ESP32

In total the M5Stack ESP32 comes with 520 kB of SRAM. The catch is that the ESP32 is based on the Harvard architecture and 192 kB is in the SRAM0 block, intended(!) for instructions (IRAM). There is another 128 kB block, SRAM1, which can be used either for instructions or data (DRAM). The third block, SRAM2, has a size of 200 kB and is for data only. But 8 kB of SRAM2 is lost for ROM mappings.

The ESP-IDF and thus also the Arduino environment use only SRAM0 for instructions and SRAM1 and SRAM2 for data, which is fine for uLisp as it is an interpreter and therefore more RAM for data is perfect. SRAM0 will just hold the machine code of the uLisp implementation but no code written in the language uLisp.

Of the remaining 320 kB, another 54 kB in the SRAM2 block is reserved for Bluetooth if Bluetooth is enabled in the ESP-IDF (which it is by default; #define CONFIG_BT_RESERVE_DRAM 0xdb5c). And if trace memory is enabled, another 32 kB of SRAM1 is reserved (by default it is disabled; #define CONFIG_TRACEMEM_RESERVE_DRAM 0x0).

So, by default, with Bluetooth enabled and trace memory disabled, 266 kB are left. At the bottom of SRAM2, right after the 62 kB used for Bluetooth and ROM, are the application's data and BSS segments. Sadly, around the border between SRAM1 and SRAM2 there seem to be two further small reserved regions of a bit more than 1 kB each, limiting statically allocated memory.

Thus, the "shared data RAM" segment dram0_0_seg in the linker script memory layout is configured to have a default length of 0x2c200 - CONFIG_BT_RESERVE_DRAM. That is, 176.5 kB (= 180,736 bytes) without Bluetooth and 121.66 kB (= 124,580 bytes) with Bluetooth enabled.
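The arithmetic is easy to verify at any Lisp REPL (CONFIG_BT_RESERVE_DRAM is 0xdb5c = 56,156 bytes):

```lisp
;; dram0_0_seg length without and with the Bluetooth reservation
(- #x2c200 0)      ; => 180736 bytes = 176.5 kB (no Bluetooth)
(- #x2c200 #xdb5c) ; => 124580 bytes ≈ 121.66 kB (Bluetooth enabled)
```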

But actually I have already written more than I intended for this blog post, and the rest of my notes, calculations and experiments will have to wait for a future article. For now, I just increased the size of the statically allocated uLisp workspace to make more use of the available memory of the ESP32 in the M5Stack.

See also More features for uLisp on M5Stack (ESP32) and uLisp on M5Stack (ESP32).

References

Espressif Systems, ESP32 Technical Reference Manual, Shanghai, 2020, section 2.3.2 Embedded Memory.

Read the whole article.

Planet Lisp | 16-Feb-2021 20:41

Tycho Garen : Programming in the Common Lisp Ecosystem

I've been writing more and more Common Lisp recently, and I reflected on the experience in a recent post, which I then followed up on.

Why Ecosystems Matter

Most of my thinking and analysis of CL comes down to the ecosystem: the language itself has some really compelling (and fun!) features, so the question is really about the ecosystem. There are two main reasons to care about ecosystems in programming languages:

  • a vibrant ecosystem cuts down the time that an individual developer or team has to spend on infrastructural work to get started. Ecosystems provide everything from libraries for common tasks to conventions and established patterns for the big fundamental application choices, not to mention easily discoverable answers to common problems.

    The time between "I have an idea" and "I have running (proof-of-concept quality) code" matters so much. Everything is possible to a point, but friction between "idea" and "working prototype" can be a big problem.

  • a bigger and more vibrant ecosystem makes it more tenable for companies/sponsors (of all sizes) to choose to use Common Lisp for various projects, and there's a little bit of a chicken and egg problem here, admittedly. Companies and sponsors want to be confident that they'll be able to efficiently replace engineers if needed, integrate lisp components into larger ecosystems, or get support when problems arise. These concerns are all somewhat intangible (and reasonable!), and the larger and more vibrant the ecosystem, the less risk there is.

    In many ways, recent developments in technology more broadly make lisp slightly more viable, as a result of making it easier to build applications that use multiple languages and tools. Things like microservices, better generic deployment orchestration tools, and greater adoption of IDLs (including Swagger, Thrift, and gRPC) all make language choice less monolithic at the organization level.

Great Things

I've really enjoyed working with a few projects and tools. I'll probably write more about these individually in the near future, but in brief:

  • chanl provides channel-based concurrency primitives in the style of CSP. As a current/recovering Go programmer, this library is very familiar and great to have. In some ways, the API provides a bit more introspection, and flexibility that I've always wanted in Go.
  • lake is a buildsystem tool in the tradition of make, but with a few additional great features, like target namespacing, a clear distinction between "file targets" and "task targets," as well as support for SSH operations, which makes it a reasonable replacement for things like fabric and other basic deployment tools.
  • cl-docutils provides the basis for a document processing system. I'm particularly partial because I've been using the python (reference) implementation for years, but the implementation is really quite good and quite easy to extend.
  • roswell is really great for getting started with CL, and also for making it possible to test library code against different implementations and versions of the language. I'm a touch iffy on using it to install packages into its own directory, but it's pretty great.
  • ASDF is the "buildsystem" component of CL, comparable to setuptools in python, and it (particularly in the latest versions) is really great. I like the ability to produce binaries directly from asdf, and the "package-inferred" system style is a great addition (basically giving python-style automatic package discovery).
  • There's a full Apache Thrift implementation. While I'm not presently working on anything that would require a legit RPC protocol, having the option to integrate CL components into larger ecosystems is useful.
  • Hunchensocket adds websockets! Web sockets are a weird little corner of any stack, but it's nice to have the option of doing this kind of programming. Also, CL seems like a really good platform for it.
  • make-hash makes constructing hash tables easier, which is sort of needlessly gawky otherwise.
  • ceramic provides bridges between CL and Electron for delivering desktop applications based on web technologies in CL.
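To give a flavor of why chanl feels familiar coming from Go, here is a tiny sketch. The API names used (the channel class, send, recv, and the pexec macro) are quoted from memory, so treat them as assumptions to check against chanl's README:

```lisp
;; A goroutine-style ping over a channel, sketched from memory of
;; chanl's API; verify the exact symbols against the library docs.
(defvar *ch* (make-instance 'chanl:channel))

(chanl:pexec ()
  (chanl:send *ch* :ping))  ; runs in its own thread

(chanl:recv *ch*)  ; blocks until the value arrives, like <-ch in Go
```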

I kept thinking that there wouldn't be good examples of various things (there's a Kafka driver! there's support for various other Apache ecosystem components!), but there are, and that's great. There are gaps, of course, but fewer, I think, than you'd expect.

The Dark Underbelly

The biggest problem in CL is probably discoverability: lots of folks are building great tools and it's hard to really know about the projects.

I thought about phrasing this as a kind of list of things that would be good for supporting bounties or something of the like. Also if I've missed something, please let me know! I've tried to look for a lot of things, but discovery is hard.

  • rove doesn't seem to handle multi-threaded test results effectively. It's listed in the readme, but I was able to write really trivial tests that crashed the test harness.
  • Chanl would be super lovely with some kind of concept of cancellation (like contexts in Go,) and while it's nice to have a bit more thread introspection, given that the threads are somewhat heavier weight, being able to avoid resource leaks seems like a good plan.
  • There doesn't seem to be any library capable of producing YAML-formatted data. I don't have a specific need, but it'd be nice.
  • it would be nice to have some way of configuring the quicklisp client to prefer quicklisp (stable) but fall back to ultralisp (or another source) if that's available.
  • Putting the capacity in asdf to produce binaries easily is great, and the only thing missing from buildapp/cl-launch is multi-entry binaries. That'd be swell. It might also be easier, as an alternative, to have support for git-style sub-commands in a commandline parser (which doesn't easily exist at the moment), but one-command-per-binary seems difficult to manage.
  • there are no available implementations of a multi-reader single-writer mutex, which seems like an oversight, and yet, here we are.
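Since that last point bemoans the lack of a multi-reader single-writer mutex, here is a minimal sketch of one built on bordeaux-threads. This is an illustration, not a hardened implementation: condition-notify wakes only a single waiter, so a production version would need a broadcast to release multiple queued readers at once.

```lisp
;; A minimal read-write lock sketch on top of bordeaux-threads.
(defstruct (rwlock (:constructor make-rwlock ()))
  (lock (bt:make-lock))
  (cond (bt:make-condition-variable))
  (readers 0)      ; number of active readers
  (writer nil))    ; t while a writer holds the lock

(defun read-lock (rw)
  (bt:with-lock-held ((rwlock-lock rw))
    (loop :while (rwlock-writer rw)
          :do (bt:condition-wait (rwlock-cond rw) (rwlock-lock rw)))
    (incf (rwlock-readers rw))))

(defun read-unlock (rw)
  (bt:with-lock-held ((rwlock-lock rw))
    (when (zerop (decf (rwlock-readers rw)))
      (bt:condition-notify (rwlock-cond rw)))))

(defun write-lock (rw)
  (bt:with-lock-held ((rwlock-lock rw))
    (loop :while (or (rwlock-writer rw)
                     (plusp (rwlock-readers rw)))
          :do (bt:condition-wait (rwlock-cond rw) (rwlock-lock rw)))
    (setf (rwlock-writer rw) t)))

(defun write-unlock (rw)
  (bt:with-lock-held ((rwlock-lock rw))
    (setf (rwlock-writer rw) nil)
    ;; wakes one waiter; a real implementation would broadcast
    (bt:condition-notify (rwlock-cond rw))))
```
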
Bigger Projects
  • There are no encoders/decoders for data formats like Apache Parquet, and the protocol buffers implementation doesn't support proto3. Neither of these is a particular deal breaker, but having good tools for dealing with common developments lowers the cost and risk of using CL in more applications.
  • No support for HTTP/2, and therefore no gRPC. Having the ability to write software in CL with the knowledge that it'll be able to integrate with other components is good for the ecosystem.
  • There is no great modern MongoDB driver. There were a couple of early implementations, but there have since been important changes to the MongoDB protocol. A clearer interface for producing BSON might be useful too.
  • I've looked for libraries and tools to integrate and manage aspects of things like systemd, docker, and k8s. k8s seems easiest to close, as things like cube can be generated from updated swagger definitions, but there's less for the others.
  • Application delivery remains a bit of an open problem. I'm particularly interested in being able to produce binaries that target other platforms/systems (cross compilation), but there's also a class of problems related to being able to ship tools once built.
  • I'm eagerly waiting for, and concerned about, the plight of the current implementations around Darwin's move to ARM in the intermediate term. My sense is that the transition won't be super difficult, but it seems like a thing.

Planet Lisp | 16-Feb-2021 01:00

Max-Gerd Retzlaff: More features for uLisp on M5Stack (ESP32)

I finished the IoT sensor device prototype and shipped it last Thursday. It just has a stub bootstrap system in the flash via uLisp's Lisp Library and downloads the actual application in a second boot phase via HTTPS. More on that later.

To make it happen I've added a bunch of things to ulisp-esp-m5stack: flash support, fixes for some quirks of the M5Stack, time via NTP, an HTTP function supporting the methods PUT, POST, and GET, basic auth, HTTP and HTTPS, temperature sensors via one-wire, and more. I plan to publish all these features in the coming days.

Today you get: flash support, muting of the builtin speaker and control of the LED backlight of the builtin display.

Read the whole article.

Planet Lisp | 15-Feb-2021 22:36

Quicklisp news: Newer Quicklisp client available

 I had to revert the change that allows slashes in dist names for Ultralisp. If your Quicklisp directory has a lot of files and subdirectories (which is normal), the wild-inferiors file search for dist info is unacceptably slow. 

You can get an updated client with the feature reverted with (ql:update-client).

Planet Lisp | 14-Feb-2021 02:59

Quicklisp news: New Quicklisp client available

 I've just published a new version of the Quicklisp client. You can get it with (ql:update-client).

This version updates the fallback ASDF from 2.26 to 3.2.1. (This will not have any effect on any implementation except CLISP, which does not come with ASDF of any version.)

It also includes support for dists with slashes in the name, as published by Ultralisp.

Thanks to those who contributed pull requests incorporated in this update.

Planet Lisp | 12-Feb-2021 00:06

Nicolas Hafner: Setting Up Camp - February Kandria Update

I hope you've all started well into the new year! We're well into production now, with the vertical slice slowly taking shape. Much of the work in January has been on concept and background work, which is now done, so we are moving forward on the implementation of the new features, assets, and writing. This entry will have a lot of pictures and videos to gander at, so be warned!

The vertical slice will include three areas - the central camp, or hub location, the first underground area, and the desert ruins. We're now mostly done implementing the central camp. Doing so was a lot of work, since it requires a lot of unique assets. It still requires a good amount of polish before it can be called well done, but for the vertical slice I think we're good at the point we are now.

The camp is where all the main cast are (Fi, Jack, Catherine, and Alex), and where you'll return to after most missions. As such, it's important that it looks nice, since this is where you'll spend a lot of your time. It also has to look believable and reasonable for the cast to try and live here, so we spent a good amount of time thinking about what buildings there would be, what purpose they should fulfil, and so forth.

We also spent a good deal of time figuring out the visual look. Since Kandria is set quite far into the future, with that future also having undergone a calamity, the buildings both have to look suitably modern for a future society to have built, but at the same time ruined and destroyed, to fit the calamity event.

I also finished the character redesign for Fi. Her previous design no longer really fit with her current character, so I really wanted to get that done.

On the gameplay side the movement AI has been revised to be able to deal with far more complicated scenarios. Characters can now follow you along, move to various points on the map independently, or lead the player to a destination.

Quests now also automatically track your time to completion, which allows us both to do some nice tracking for score and speedrun purposes, but also to implement a 'race' quest. We have a few ideas on those, and it should serve as a nice challenge to try and traverse the various areas as quickly as possible.

We're also thinking of setting up leaderboards or replays for this, but that's gonna have to wait until after the vertical slice.

For look and feel there's also been a bunch of changes. First, there's now a dedicated particle system for effects like explosions, sparks, and so forth. Adding such details really enhances the feel of the combat, and gives a nice, crunchy, oomph to your actions. I still have a few more ideas for additional effects to pile on top, and I'll see that I can get to those in due time.

Also on the combat side, there's now a quick-use menu so you can access your healing items and so forth easily during combat. It even has a nice slow-mo effect!

Since we're not making a procedural game, we do have to have a way of gating off far areas in a way that feels at least somewhat natural. To do this I've implemented a shader effect that renders a sandstorm on top of everything. The strength of the effect can be fine-tuned, so we could also use it for certain setpieces or events.

The effect looks a lot better in-game. Video compression does not take kindly to very noisy and detailed effects like this. Having the sand howl around really adds a lot to the feel of the game. In a similar vein, there's also grass and other foliage that can be placed now, which reacts to the wind and characters stepping on it. You can see that in action in this quick run-down of the camp area:

There's a bunch of other things we can't show off quite yet, especially a bunch of excellent animations by Fred. I haven't had the time to integrate all of those yet!

We've also been thinking more about how to handle the marketing side of things. I'm now doing a weekly screenshotsaturday thing on Twitter, and semi-regularly post quick progress gifs and images as well. Give me a follow if you haven't yet!

Then I took advantage of Rami Ismail's excellent consulting service and had a talk with him about what we should do to improve the first impressions for Kandria and how to handle the general strategy. He gave some really excellent advice, though I wish I had had more time to ask other questions, too! I'll probably schedule a consultancy hour with him later this year to catch up with all of that.

Anyway, I think a lot of the advice he gave us isn't necessarily specific to Kandria, so I thought it would be good to share it here, in case you're a fellow developer, or just interested in marketing in general:

  • Make sure to keep a consistent tone throughout your paragraph or trailer. This means that you want to avoid going back and forth between advertising game features or narrative elements, for instance. In Kandria's case we had a lot of back and forth in our press kit and steam page texts, which we've now gone over and revised to be more consistent.
  • Marketing is as much about attracting as many people as possible as it is about pushing people away. You want to be as efficient as possible at advertising to your target group. This also means being as up-front as possible about what your game is and who it is for, so you immediately pull in the people that would care about it, and push away the people that would not.
  • You need to figure out which part of your game best appeals to your core audience, and how you need to put it to make it attractive. Having an advertisement platform that gives you plenty of statistics and targeting features is tremendously helpful for this. Rami specifically suggested using short Facebook ads, since those can be targeted towards very specific groups. Do many small ads using different copy texts and trailers to see which work the best at attracting people to your Steam page.
  • Always use a call to action at the end of your top of the funnel (exposure) marketing. In fact, don't just use one link, use one for every way people have to interact with your game, if you have several. For us in specific this means I'll now include a link to our mailing list, our discord, and our steam page in our material.
  • Only use community/marketing platforms that you're actually comfortable with engaging with yourself. This means don't force yourself to make a Discord or whatever if you're not going to really engage with it. I'm fairly comfortable with where we are now, though I'm considering also branching out to imgur for more top of the funnel marketing. We'll see.
  • Two years is plenty of time to get marketing going. Generally you want to really up the hype train about three months before release. The wishlist peak about one month before release should give you a rough idea of whether the game is going to be successful or not - 5-10k is good, 15-20k should be very good.
  • Three weeks before release is when you want to start contacting press - write emails to people that have reviewed the games that inspired yours and seem to generally fit the niche you're targeting. Let them know you'll send a final build a week before release.
  • Actually do that exactly a week before release. Ideally your game will be done and you won't fudge with it until after release.
  • On the day before release, log onto and submit your game. Actual journalists don't tend to look there it seems, since they already get way more than enough mail, but third parties and independent people might!

And that's about what we managed to discuss in the 20 minutes we had. As mentioned, I'll probably schedule another consultancy later in the year. I'll be sure to let you know how it went!

Alright, I've run my mouth for long enough now, here's some words from Tim about his experience for January:

It's been a documentation-heavy month for me: designing the vertical slice quests on paper (which will become the first act of the narrative), making some tweaks to the characters and plots to fit the game's pillars, and also tweaking the press kit and marketing copy from Rami's feedback.

The last two weeks I've also started implementing the first quest, reminding myself how to use the scripting language and editor (it's amazing how much you forget after a couple of weeks away from it). This has also involved familiarising myself with the "proper" quest structure, using the hierarchy of quest > task > trigger (for the demo quest it was more like task > trigger, trigger, trigger, etc. you get the idea). What's been most fun though is getting into the headspace for Jack and Catherine, writing their initial dialogues, and threading in some player choice. Catherine is quickly becoming my favourite character.

It's also been great to see the level design and art coming along - Nick's sketched layouts, and now the pixel art for the ruined buildings which he and Fred have been working on. Oh, and seeing the AI in action, with Catherine bounding along after The Stranger blew my mind.

Well, that's about it for this month. It's been exciting to finally see a change in the visuals, and I'm excited to start tackling the first underground area. I see a lot more pixel work ahead of us...

Anyway, in the meantime until the next monthly update, do consider checking out the mailing list if you want more in-depth, weekly updates on things. We cover a lot of stuff there that never makes it into the monthlies, too! If you want to get involved in discussions and feedback around the game, hop onto the discord. We're slowly building a community of fans there, and are trying to post more actively about the process. For a more casual thing, there's also my twitter with plenty of gifs and images. Finally, please do wishlist Kandria on Steam! It might seem like it isn't much, but it really does help out a lot!

Thanks for reading, and see you next time!

Planet Lisp | 08-Feb-2021 16:27

Vsevolod Dyomkin: "Programming Algorithms in Lisp" Is Out!

The updated version of my book "Programming Algorithms" has been released by Apress recently. It has undergone a number of changes that I want to elaborate on in this post.

But first, I'd like to thank all the people who contributed to the book or supported my work on it in other ways. It was an honor for me to be invited to Apress, as "Practical Common Lisp", published by them a decade ago, was my one-way ticket to the wonderful world of Lisp. Writing "Programming Algorithms" was, in a way, an attempt to give something back. Also, I was very curious to see how the cooperation with the publisher would go. And I can say that they have done a very professional job and helped significantly improve the book through the review process. That 5-10% change contributed by the editors, although it may seem insignificant, is very important for bringing any book to the high standard that avoids annoying too many people. Unfortunately, I am not a person who can produce a flawless result at once, so help with correcting those flaws is very valuable. Part of the gratitude for that also, surely, goes to the many readers who have sent their suggestions.

I was very pleased that Michał "phoe" Herda has agreed to become the technical reviewer. He has found a number of bugs and suggested lots of improvements, of which I could implement, maybe, just a third. Perhaps, the rest will go into the second edition :)

Now, let's speak about some of those additions to Programming Algorithms in Lisp.

Curious Fixes

First of all, all the executable code from the book was published in a github repo (and also republished to the official Apress repo). As suggested by Michał, I have added automated tests to ensure (for now, partially, but we plan to make the test suite all-encompassing) that everything compiles and runs correctly. Needless to say, some typos and other issues were found in the process, especially ones connected with handling different corner cases. So, if you have trouble running some code from the book, you can use the github version. Funny enough, I got into a similar situation recently, when I tried to utilize the dynamic programming example in writing a small tool for aligning outputs of different ASR systems and found a bug in it. The bug was in the matrix initialization code:
-    (dotimes (k (1+ (length s1))) (setf (aref ld k 0) 0))
-    (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) 0)))
+    (dotimes (k (1+ (length s1))) (setf (aref ld k 0) k))
+    (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) k)))
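The fix matters because row 0 and column 0 of the distance matrix represent distances from the empty string: transforming a k-character prefix into nothing (or vice versa) costs exactly k edits, so the borders must count up rather than stay zero. A minimal self-contained sketch of the computation (a simplified stand-in, not the book's exact code) shows the corrected initialization in context:

```lisp
(defun levenshtein (s1 s2)
  (let ((ld (make-array (list (1+ (length s1)) (1+ (length s2)))
                        :initial-element 0)))
    ;; borders = distance from the empty string: k deletions/insertions
    (dotimes (k (1+ (length s1))) (setf (aref ld k 0) k))
    (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) k))
    (dotimes (i (length s1))
      (dotimes (j (length s2))
        (setf (aref ld (1+ i) (1+ j))
              (min (1+ (aref ld i (1+ j)))       ; deletion
                   (1+ (aref ld (1+ i) j))       ; insertion
                   (+ (aref ld i j)              ; match or substitution
                      (if (char= (char s1 i) (char s2 j)) 0 1))))))
    (aref ld (length s1) (length s2))))

(levenshtein "kitten" "sitting")  ; => 3
```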

Another important fix that originated from the review process touched not only the book but also the implementation of the slice function in RUTILS! It turned out that I had naively assumed that displaced arrays would automatically point recursively into the original array, and thus, inadvertently, created the possibility of O(n) slice performance instead of O(1). That explains the strange performance of the array sorting algorithms at the end of Chapter 5. After fixing slice, the measurements started to perfectly resemble the theoretical expectations! And the performance also improved by an order of magnitude :D
CL-USER> (let ((vec (random-vec 10000)))
           (print-sort-timings "Insertion " 'insertion-sort vec)
           (print-sort-timings "Quick" 'quicksort vec)
           (print-sort-timings "Prod" 'prod-sort vec))
= Insertion sort of random vector (length=10000) =
Evaluation took:
  0.632 seconds of real time
= Insertion sort of sorted vector (length=10000) =
Evaluation took:
  0.000 seconds of real time
= Insertion sort of reverse sorted vector (length=10000) =
Evaluation took:
  1.300 seconds of real time
= Quicksort of random vector (length=10000) =
Evaluation took:
  0.039 seconds of real time
= Quicksort of sorted vector (length=10000) =
Evaluation took:
  1.328 seconds of real time
= Quicksort of reverse sorted vector (length=10000) =
Evaluation took:
  1.128 seconds of real time
= Prodsort of random vector (length=10000) =
Evaluation took:
  0.011 seconds of real time
= Prodsort of sorted vector (length=10000) =
Evaluation took:
  0.011 seconds of real time
= Prodsort of reverse sorted vector (length=10000) =
Evaluation took:
  0.021 seconds of real time

Also, there were some missing or excess closing parens in a few code blocks. This, probably, resulted from incorrectly copying the code from the REPL after finishing experimenting with it. :)

New Additions

I have also added more code to complete the full picture, so to say, in several parts where it was lacking, from the reviewers' point of view. Most new additions went into expanding "In Action" sections where it was possible. Still, unfortunately, some parts remain on the level of general explanation of the solution as it was not possible to include whole libraries of code into the book. You can see a couple of snippets below:

Binary Search in Action: a Fast Specialized In-Memory DB

We can outline the operation of such a datastore with the following key structures and functions.

A dictionary *dict* will be used to map words to numeric codes. (We'll discuss hash-tables that are employed for such dictionaries several chapters later. For now, it will be sufficient to say that we can get the index of a word in our dictionary with (rtl:? *dict* word)). The number of entries in the dictionary will be around 1 million.

All the ngrams will be stored alphabetically sorted in 2-gigabyte files with the following naming scheme: ngram-rank-i.bin. rank is the ngram word count (we were specifically using ngrams of ranks from 1 to 5) and i is the sequence number of the file. The contents of the files will constitute the alternating ngram indices and their frequencies. The index for each ngram will be a vector of 32-bit integers with the length equal to the rank of an ngram. Each element of this vector will represent the index of the word in *dict*. The frequency will also be a 32-bit integer.

All these files will be read into memory. As the structure of the file is regular — each ngram corresponds to a block of (1+ rank) 32-bit integers — it can be treated as a large vector.
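Because each record in a rank-r file is a fixed-size block of (1+ r) 32-bit words (r word codes followed by one frequency), locating a record is pure index arithmetic, which is what makes treating the file as one large vector work. A tiny helper illustrates the layout (record-offset is a hypothetical name for illustration, not from the book):

```lisp
;; 32-bit-word offset of the i-th record in a rank-RANK ngram file,
;; where each record is RANK codes followed by one frequency.
(defun record-offset (i rank)
  (* i (1+ rank)))

(record-offset 0 3)  ; => 0: first 3-gram starts at word 0
(record-offset 2 3)  ; => 8: third 3-gram starts at word 8
```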

For each file, we know the codes of the first and last ngrams. Based on this, the top-level index will be created to facilitate efficiently locating the file that contains a particular ngram.

Next, binary search will be performed directly on the contents of the selected file. The only difference with regular binary search is that the comparisons need to be performed rank times: for each 32-bit code.

A simplified version of the main function get-freq intended to retrieve the ngram frequency for ranks 2-5 will look something like this:
(defun get-freq (ngram)
  (rtl:with ((rank (length ngram))
             (codes (ngram-codes ngram))
             (vec index found?
                  (bin-search codes
                              (ngrams-vec rank codes)
                              :less 'codes<
                              :test 'ngram=)))
    (if found?
        (aref vec (+ index rank))
        0)))

(defun ngram-codes (ngram)
  (map-vec (lambda (word) (rtl:? *dict* word))
           ngram))

(defun ngrams-vec (rank codes)
  (loop :for ((codes1 codes2) ngrams-vec) :across *ngrams-index*
        :when (and (codes< codes1 codes :when= t)
                   (codes< codes codes2 :when= t))
        :do (return ngrams-vec)))

(defun codes< (codes1 codes2 &key when=)
  (dotimes (i (length codes1)
              ;; this will be returned when all
              ;; corresponding elements of codes are equal
              when=)
    (cond ((< (aref codes1 i)
              (aref codes2 i))
           (return t))
          ((> (aref codes1 i)
              (aref codes2 i))
           (return nil)))))

(defun ngram= (block1 block2)
  (let ((rank (1- (length block1))))
    (every '= (rtl:slice block1 0 rank)
              (rtl:slice block2 0 rank))))

We assume that the *ngrams-index* array, containing for each file a pair of code pairs for its first and last ngram together with the ngram data from the file itself, was already initialized. This array should be sorted by the codes of the first ngram in the pair. A significant drawback of the original version of this program was that it took quite some time to read all the files (tens of gigabytes) from disk. During this operation, which took several dozen minutes, the application was not responsive. This created a serious bottleneck in the system as a whole and complicated updates, as well as put normal operation at additional risk. The solution we utilized to counteract this issue was a common one for such cases: switching to lazy loading using the Unix mmap facility. With this approach, the bounding ngram codes for each file should be precalculated and stored as metadata, to initialize the *ngrams-index* before loading the data itself.

Pagerank MapReduce Explanation
;; this function will be executed by mapper workers
(defun pr1 (node n p &key (d 0.85))
  (let ((pr (make-array n :initial-element 0))
        (m (hash-table-count (node-children node))))
    (rtl:dokv (j child (node-children node))
      (setf (aref pr j) (* d (/ p m))))
    pr))

(defun pagerank-mr (g &key (d 0.85) (repeat 100))
  (rtl:with ((n (length (nodes g)))
             (pr (make-array n :initial-element (/ 1 n))))
    (loop :repeat repeat :do
      (setf pr (map 'vector (lambda (x)
                              (+ (/ (- 1 d) n) x))
                    (reduce 'vec+ (map 'vector (lambda (node p)
                                                 (pr1 node n p :d d))
                                       (nodes g)
                                       pr)))))
    pr))
Here, we have used the standard Lisp map and reduce functions, but a map-reduce framework will provide replacement functions which, behind the scenes, will orchestrate parallel execution of the provided code. We will talk a bit more about map-reduce and see such a framework in the last chapter of this book.

One more thing to note is that the latter approach differs from the original version in that each mapper operates independently on an isolated version of the pr vector, and thus the execution of Pagerank on the subsequent nodes during a single iteration will see an older input value p. However, since the algorithm is stochastic and the order of calculations is not deterministic, this is acceptable: it may impact only the speed of convergence (and hence the number of iterations needed) but not the final result.

Other Significant Changes

My decision to heavily rely on syntactic utilities from my RUTILS library was a controversial one, from the start. And, surely, I understood it. But my motivation, in this regard, always was and still remains not self-promotion but a desire to present Lisp code so that it didn't seem cumbersome, old-fashioned, or cryptic (and, thankfully, the language provides all possibilities to tune its surface look to your preferences). However, as it bugged so many people, including the reviewers, for the new edition we have come to a compromise: all RUTILS code is used only qualified with the rtl prefix so that it is apparent. Besides, I have changed some of the minor, purely convenience abbreviations to their standard counterparts (like returning to funcall instead of call).

Finally, the change that I regret the most, but understand was inevitable, is the change of title and the new cover, which is in the standard Apress style. However, they have preserved the Draco tree in the top right corner, and it's like a window through which you can glance at the original book :)

So, that is an update on the status of the book.

For those who were waiting for the Apress release to come out, it's your chance to get it. The price is quite affordable. Basically, the same as the one I asked for (individual shipping via post is a huge expense).

And for those who have already gotten the original version of the book, all the major changes and fixes are listed in the post. Please, take notice if you had any issues.

I hope the book turns out to be useful to the Lisp community and serves both Lisp old-timers and newcomers.

Planet Lisp | 08-Feb-2021 11:48

Max-Gerd Retzlaff: StumpWM: vsplit-three

A good ten months ago I switched away from a full desktop environment, having finally grown tired of user software that gains ever more features and tries to anticipate more and more of what I might want, while in the end my own computer never actually does what I want, and only that. PulseAudio is the most dreaded example: a piece of code that accumulates more and more magic and complexity, until it never does what you actually want, while telling it what to do has become completely impossible because of the many layers of abstraction and magic. PulseAudio has a lot of "rules", "profiles", "device intended roles", "autodetecting", "automatic setup and routing" and "other housekeeping actions". Look at the article PulseAudio under the hood by Victor Gaydov (which is also the source of the terms I just quoted): it has 174 occurrences of words starting with "auto-": automatically – 106, automatic – 27, autoload – 16, autospawn – 14, autodetect – 4, autoexit – 2, automate – 2, auto timing – 1, auto switch – 1, and once "magically", when it is even too much for the author.

So: more control and less clutter instead. After years I am back to just a good old window manager and individual programs, and I got rid of PulseAudio.

I switched to StumpWM, which is written in Common Lisp. It is easy to modify and to try stuff in, while it's running. I have it run Slime so that I can connect to it from Emacs and hack whatever is missing. From time to time I got StumpWM hanging while hacking, so I added a handler for the POSIX signal SIGHUP that forces a hard StumpWM restart. (There is a new version of that signal handler without the CFFI dependency, but that pull request is not merged yet.) When I have done something stupid, I switch to a console and fire a killall -HUP stumpwm to have it reset hard. Since then I haven't lost an X11 session, even while changing quite a bit.
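Such a handler can be sketched with CFFI roughly as follows. This is an illustration, not the actual patch: the callback name is made up, stumpwm::restart-hard is StumpWM's internal hard-restart entry point, and calling into Lisp from an asynchronous signal handler is generally unsafe, which is acceptable here only because the whole point is to recover a wedged session.

```lisp
;; Rough sketch, assuming CFFI: install a SIGHUP handler that forces
;; a hard StumpWM restart.  SIGHUP is signal number 1 on Linux.
(cffi:defcallback sighup-handler :void ((signo :int))
  (declare (ignore signo))
  (stumpwm::restart-hard))

;; Register the callback via the C signal(2) function.
(cffi:foreign-funcall "signal"
                      :int 1                          ; SIGHUP
                      :pointer (cffi:callback sighup-handler)
                      :pointer)
```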

Read the whole article.

Planet Lisp | 07-Feb-2021 01:19

Jonathan Godbout: Proto Cache: Flags and Hooks
Today’s Updates

Last week we made our Pub/Sub application use protocol buffer objects for most of its internal state. This week we'll take advantage of that change by setting startup and shutdown hooks to load state and save state respectively. We will add flags so someone starting up our application can set the load and save files on the command line. We will then package our application into an executable with a new asdf command.

Code Changes Proto-cache.lisp Defpackage Updates:

We will use ace.core.hook to implement our load and exit hooks; in the code below we will show how to define methods that run at load and exit time. In the defpackage we give it the nickname hook. The library is available in the ace.core repository.

We use ace.flag as our command line flag parsing library. This is a command line flag library used extensively at Google for our lisp executables. The library can be found in the ace.flag repository.

Flag definitions:

We define four command line flags:

  • flag::*load-file*
  • flag::*save-file*
  • flag::*new-subscriber* 
    • This flag is used for testing purposes. It should be removed in the future.
  • flag::*help*

The definitions all look the same; we will look at flag::*load-file* as an example:

(flag:define flag::*load-file* ""
  "Specifies the file from which to load the PROTO-CACHE on start up."
  :type string)
  • We use the flag:define macro to define a flag. Please see the code for complete documentation of this macro (update coming). We only use a small subset of the ace.flag package.
  • flag::*load-file*: This is the global where the parsed command line flag will be stored.
  • The documentation string for the flag. If flag:print-help is called, this documentation will be printed:

    --load-file (Determines the file to load PROTO-CACHE from on startup)

     Type: STRING

  • :type: The type of the flag. Here it is a string.

By default, the command line name is derived from the symbol name of the global: lowercased, with the '*' earmuffs trimmed.

For example:

  1. flag::*load-file* becomes --load-file
  2. flag::*load_file* becomes --load_file

The :name or :names key in the flag:define macro will let users select their own names for the command line input instead of this default.
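For instance, a hypothetical definition with an explicit name might look like this (the name "input" is made up for illustration, and we assume :name accepts a string):

```lisp
;; Hypothetical: expose the same global under an explicit command line
;; name, --input, instead of the derived --load-file.
(flag:define flag::*load-file* ""
  "Specifies the file from which to load the PROTO-CACHE on start up."
  :type string
  :name "input")
```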

Main definition:

We want to create a binary for our application. Since we have no way to add publishers and subscribers outside of the REPL, we define a dummy main that adds them for us:

(defun main ()
  (register-publisher "pika" "chu")
  (register-subscriber "pika" flag::*new-subscriber*)
  (update-publisher-any "pika" "chu" (google:make-any :type-url "a"))
  ;; Sleep to make sure running threads exit.
  (sleep 2))

After running the application we can check for a new subscriber URL in the saved proto-cache application state file. I will show this shortly.

Load/Exit hooks:

We have several pre-made hooks defined in ace.core.hook. Two useful functions are ace.core.hook:at-restart and ace.core.hook:at-exit. As one can imagine, at-restart runs when the lisp image starts up, and at-exit runs when the lisp image is about to exit.

The first thing we do when we start our application is parse our command line:

(defmethod hook::at-restart parse-command-line ()
  "Parse the command line flags."
  (flag:parse-command-line)
  (when flag::*help*
    (flag:print-help)))

You MUST call flag:parse-command-line for the defined command line flags to have non-default values.

We also print a help menu if --help was passed in.

Then we can load our proto if the load-file flag was passed in:

(defmethod hook::at-restart load-proto-cache :after parse-command-line ()
  "Load the command line specified file at startup."
  (when (string/= flag::*load-file* "")
    (load-state-from-file :filename flag::*load-file*)))

We see an :after clause in our defmethod. We want the load-proto-cache method called during start-up, but only after we have parsed the command line, so that flag::*load-file* has been properly set.

Note: The defmethod here uses a special defmethod syntax added in ace.core.hook. Please see the hook-method documentation for complete details.

Finally we save our image state at exit:

(defmethod hook::at-exit save-proto-cache ()
  "Save the command line specified file at exit."
  (when (string/= flag::*save-file* "")
    (save-state-to-file :filename flag::*save-file*)))

The attentive reader will notice that our main function never explicitly calls any of these hook functions: the hook library arranges for them to run automatically when the image starts up and exits.


Building the executable:

We add code to build an executable using asdf:

(asdf:defsystem :proto-cache
  ...
  :build-operation "program-op"
  :build-pathname "proto-cache"
  :entry-point "proto-cache:main")

This uses ASDF's program-op. The build pathname is relative: we save the binary as "proto-cache" in the same directory as our proto-cache code. The entry point function is proto-cache:main.

We may then call: 

sbcl --eval "(asdf:operate :build-op :proto-cache)"

at the command line to create our binary.

Running our binary:

With our binary built we can call:

./proto-cache --save-file /tmp/first.pb --new-subscriber

Trying cat /tmp/first.pb:

pika' a?pika"chujg

These are serialized values, so one shouldn't try to read too much into the output. Still, we can see that "", "pika", and "chu" are all saved.


./proto-cache   --load-file /tmp/first.pb --save-file /tmp/first.pb --new-subscriber

And then cat /tmp/first.pb:

I pikaA ? a?pika"chujg "

Finally, calling ./proto-cache --help

We get:

Flags from ace.flag:

    --lisp-global-flags (When provided, allows specifying global and special variables as a flag on the command line. The values are NIL - for none, :external - for package external, and T - for all flags.)

     Type: ACE.FLAG::GLOBAL-FLAGS

    --help (Whether to print help)

     Type: BOOLEAN Value: T

    --load-file (Determines the file to load PROTO-CACHE from on startup)

     Type: STRING Value: ""

    --new-subscriber (URL for a new subscriber, just for testing)

     Type: STRING Value: ""

    --lisp-normalize-flags (When non-nil the parsed flags will be transformed into a normalized form. The normalized form contains hyphens in place of underscores, trims '*' characters, and puts the name into lower case for flags names longer than one character.)

     Type: BOOLEAN

    --save-file (Determines the file to save PROTO-CACHE from on shutdown)

     Type: STRING Value: ""

This shows our provided documentation of the command line flags as expected.


Today we added command line flags, load and exit hooks, and made our application buildable as an executable. We can build our executable and distribute it as we see fit. We can direct it to load and save the application state to user specified files without updating the code. There is still much to do before it’s done but this is slowly becoming a usable application.

There are a few additions I would like to make, but I have a second child coming soon. This may (or may not) be my last technical blog post for quite some time. I hope this sequence of Proto Cache posts has been useful thus far, and I hope to have more in the future.

Thanks to Ron Gut and Carl Gay for copious edits and comments.

Planet Lisp | 03-Feb-2021 19:30

ECL News: ECL 21.2.1 release

Dear Community,

We are announcing a new stable ECL release which fixes a number of bugs from the previous release. Changes made include, amongst others:

  • working generational and precise garbage collector modes
  • support for using precompiled headers to improve compilation speed
  • the bytecompiler correctly implements the ANSI specification for load time forms of literal objects in compiled files
  • fixes for encoding issues when reading in the output of the MSVC compiler
  • issues preventing ECL from compiling on Xcode 12 and running on ARM64 versions of Mac OS have been rectified

More detailed information can be obtained from the CHANGELOG file and git commit logs. We'd like to thank all people who contributed to this release. Some of them are listed here (without any particular order): Paul Ruetz, Karsten Poeck, Eric Timmons, Vladimir Sedach, Dima Pasechnik, Matthias Köppe, Yuri Lensky, Tobias Hansen, Pritam Baral, Marius Gerbershagen and Daniel Kochmański.

This release is available for download in the form of a source code archive (we do not ship prebuilt binaries):

Happy Hacking,
The ECL Developers

Planet Lisp | 01-Feb-2021 01:00

RSS and Atom feeds and forum posts belong to their respective owners.