
Jonathan Godbout: Proto Cache: Saving State
Today's Updates:

In our last post we implemented a basic Pub Sub application that stores an Any protocol buffer message and a list of subscribers. When the Any protocol buffer message gets updated, we send the new Any message in the body of an HTTP request to all of the subscribers in the subscribe-list.

Today we will update our service to save all of the state in a protocol buffer message. We will also add functionality to save and load the state of the Proto Cache application. 

Note: Reading the previous post is highly recommended!

Code Updates:

Note: We use red to denote removed code and green to denote added code; where colors are unavailable, each pair of snippets shows the removed line first and the added line second.

pub-sub-details.proto

syntax = "proto3";

We will use proto3 syntax. I've yet to find a great reason to choose proto3 over proto2, but I've also yet to find a great reason to choose proto2 over proto3. The biggest reason to choose proto3 over proto2 is that most people use proto3, but the Any proto will store proto2 or proto3 messages regardless.

import "any.proto";

Our users are publishing Any messages to their clients, so we must store them in our application state. This requires us to include the any.proto file in our proto file.

message PubSubDetails

This contains (almost) all of the state needed for the publish-subscribe service for one user:

  • repeated string subscriber_list
  • google.protobuf.Any current_message
    • This is the latest Any message that the publisher has stored in the Proto Cache.
  • string username
  • string password
    • For any kind of production use this should be salted and hashed. 

message PubSubDetailsCache

This message contains one entry, a map from a string (which will be a username for a publisher) to a PubSubDetails instance. The attentive reader will notice that we save the username twice, once in the PubSubDetails message and once in the PubSubDetailsCache map as the key. This will be explained when we discuss changes to the proto-cache.lisp file.
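Putting the pieces together, the two messages in pub-sub-details.proto presumably look roughly like the following sketch (the field names and types come from the lists above; the field numbers and the pub_sub_cache map field name are assumptions, the latter inferred from the pub-sub-cache accessors used later):

syntax = "proto3";

import "any.proto";

message PubSubDetails {
  repeated string subscriber_list = 1;
  google.protobuf.Any current_message = 2;
  string username = 3;
  string password = 4;
}

message PubSubDetailsCache {
  map<string, PubSubDetails> pub_sub_cache = 1;
}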

proto-cache.asd

The only difference in proto-cache.asd from all of the other asd files we've seen using protocol buffers is the use of a protocol buffer message in a package different from our current package. That is, any.proto resides in the cl-protobufs package but we are including it in the pub-sub-details.proto file in proto-cache.

To allow the protoc compiler to find the any.proto file we give it a :proto-search-path containing the path to the any.proto file. 

... :components
((:protobuf-source-file "pub-sub-details"
  :proto-pathname "pub-sub-details.proto"
  :proto-search-path ("../cl-protobufs/google/protobuf/"))
 ...

Note: We use a relative path, "../cl-protobufs/google/protobuf/", which may not work for you. Please adjust it to reflect your set-up.

We don't need a component in our defsystem to load the any.proto file into our Lisp image since it's already loaded by cl-protobufs. We might want to add one anyway, just to make the direct dependency on the any.proto file explicit.

proto-cache.lisp

Defpackage updates:

We are adding new user-invokable functionality, so we export:

  • save-state-to-file
  • load-state-from-file

local-nicknames:

  • cl-protobufs.pub-sub-details as psd
    • This is merely to save typing. cl-protobufs.pub-sub-details is the package that contains the functionality derived from pub-sub-details.proto.

Globals:

*cache*: This will be a protocol buffer message containing a hash table with string keys and pub-sub-details messages. 

(defvar *cache* (make-hash-table :test 'equal))
(defvar *cache* (psd:make-pub-sub-details-cache))

*mutex-for-pub-sub-details*: Protocol buffer messages can't store lisp mutexes. Instead, we store the mutex for a pub-sub-details in a new hash-table with string (username) keys.
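The post doesn't show this definition, but a plausible sketch, mirroring the original *cache* hash-table, is:

(defvar *mutex-for-pub-sub-details* (make-hash-table :test 'equal)
  "Maps a username string to the fr-mutex protecting that user's
pub-sub-details message.")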

make-pub-sub-details:

This function makes a psd:pub-sub-details protocol buffer message. It's almost the same as the previous iteration of pub-sub-details except for the addition of username.

...
(make-instance 'pub-sub-details :password password))
(psd:make-pub-sub-details :username username
                          :password password
                          :current-any (google:make-any))
...

(defmethod (setf psd:current-any) (new-value (psd psd:pub-sub-details))

This is really a family of functions:

  • :around: When someone tries to set the current-message value on a pub-sub-details struct we want to write-protect the pub-sub-details entry. We use an around method which activates before any call to the psd:current-any setter. Here we take the username from the pub-sub-details message and write-hold the corresponding mutex in the *mutex-for-pub-sub-details* global hash-table. Then we call call-next-method which will call the main (setf current-any) method.
(defmethod (setf current-any) (new-value (psd pub-sub-details))
(defmethod (setf psd:current-any) :around (new-value (psd psd:pub-sub-details))
  • (setf psd:current-any): This is the actual defmethod defined in cl-protobufs.pub-sub-details. It sets the current-message slot on the message struct.
  • :after: This occurs after the current-any setter is called. We send an HTTP call to all of the subscribers on the pub-sub-details subscriber list. Aside from adding the psd package prefix to the pub-sub-details accessor functions, this function wasn't changed.

register-publisher:

The main differences between the last iteration of proto-cache and this one are:

  1. This *-gethash method is exported by cl-protobufs.pub-sub-details so the user can call gethash on the hash-table in a map field of a protocol buffer message.
    • (gethash username *cache*)
    • (psd:pub-sub-cache-gethash username *cache*)
  2. We add a mutex to the *mutex-for-pub-sub-details* hash-table with the key being the username string sent to register-publisher.
  3. We return t if the new user was registered successfully, nil otherwise.
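Putting those three changes together, register-publisher plausibly looks like the following sketch (not the verbatim repository code; in particular, the setf-ability of the generated pub-sub-cache-gethash accessor is an assumption):

(defun register-publisher (username password)
  "Register a new publisher with USERNAME and PASSWORD.
Returns T on success, NIL if USERNAME is already taken."
  (act:with-frmutex-write (*cache-mutex*)
    (unless (psd:pub-sub-cache-gethash username *cache*)
      ;; Store the new pub-sub-details message in the cache...
      (setf (psd:pub-sub-cache-gethash username *cache*)
            (make-pub-sub-details username password))
      ;; ...and give it a mutex in the side table.
      (setf (gethash username *mutex-for-pub-sub-details*)
            (act:make-frmutex))
      t)))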

register-subscriber and update-publisher-any:
  1. The main difference here is:
    1. (gethash publisher *cache*)
    2. (psd:pub-sub-cache-gethash publisher *cache*)
  2. We have to add the psd package prefix to all of the pub-sub-details accessors.
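Combining those changes with the previous post's version of this function, update-publisher-any presumably becomes something like this sketch (not the verbatim repository code):

(defun update-publisher-any (username password any)
  "Update the stored Any message for USERNAME after checking PASSWORD."
  (ace:clet ((ps-msg (act:with-frmutex-read (*cache-mutex*)
                       (psd:pub-sub-cache-gethash username *cache*)))
             (correct-password (string= (psd:password ps-msg) password)))
    (declare (ignore correct-password))
    ;; The subscriber notifications happen in a separate thread.
    (act:make-thread (lambda (ps-msg)
                       (setf (psd:current-any ps-msg) any))
                     :arguments (list ps-msg))
    t))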

save-state-to-file:

(defun save-state-to-file (&key (filename "/tmp/proto-cache.txt"))
  "Save the current state of the proto cache in the *cache* global to
FILENAME as a serialized protocol buffer message."
  (act:with-frmutex-read (*cache-mutex*)
    (with-open-file (stream filename
                            :direction :output
                            :element-type '(unsigned-byte 8))
      (cl-protobufs:serialize-to-stream stream *cache*))))

This is a function that accepts a filename as a string, opens the file for output, and calls cl-protobufs:serialize-to-stream. This is all we need to do to save the state of our application!

load-state-from-file:

We need to do three things:

  1. Open a file for reading and deserialize the Proto Cache state saved by save-state-to-file.
  2. Create a new map containing the mutexes for each username.
  3. Set the new state into the *cache* global and the new mutex hash-table in *mutex-for-pub-sub-details*.
    1. We do write-hold the *cache-mutex* but I would suggest only loading the saved state when Proto Cache is started.
(defun load-state-from-file (&key (filename "/tmp/proto-cache.txt"))
  "Load the saved *cache* global from FILENAME. Also creates all of the
fr-mutexes that should be in *mutex-for-pub-sub-details*."
  (let ((new-cache
          (with-open-file (stream filename :element-type '(unsigned-byte 8))
            (cl-protobufs:deserialize-from-stream
             'psd:pub-sub-details-cache :stream stream)))
        (new-mutex-for-pub-sub-details (make-hash-table :test 'equal)))
    (loop for key being the hash-keys of (psd:pub-sub-cache new-cache)
          do (setf (gethash key new-mutex-for-pub-sub-details)
                   (act:make-frmutex)))
    (act:with-frmutex-write (*cache-mutex*)
      (setf *mutex-for-pub-sub-details* new-mutex-for-pub-sub-details
            *cache* new-cache))))

Conclusion:

The main update we made today was defining pub-sub-details in a .proto file instead of as a Common Lisp defclass form. The biggest downside is the requirement to save the pub-sub-details mutex in a separate hash-table. For this cost, we:

  1. Gained the ability to save our application state with one call to cl-protobufs:serialize-to-stream.
  2. Gained the ability to load our application state with little more than one call to cl-protobufs:deserialize-from-stream.

We were also able to utilize the setf methods defined in cl-protobufs to create :around and :after methods.

Note: Nearly all services will be amenable to storing their state in protocol buffer messages.

I hope the reader has gained some insight into how they can use cl-protobufs in their application even if their application doesn't make HTTP requests. Being able to save the state of a running program and load it for later use is very important in most applications, and protocol buffers make this task simple.

Thank you for reading!

Thanks to Ron, Carl, and Ben for edits!


Planet Lisp | 26-Jan-2021 04:57

Quicklisp news: January 2021 Quicklisp dist update now available

 New projects

  • astonish — A small library for querying and manipulating Lisp ASTs — GPLv3
  • cl-dejavu — Repack of DejaVu fonts for Common Lisp — CC0-1.0 (fonts have a separate license)
  • cl-fxml — cl-fxml: Common Lisp - Finally eXtended Markup Language. — MIT
  • cl-zstd — Zstandard (de)compression using bindings to libzstd — GPL-3
  • clog — The Common Lisp Omnificent GUI — BSD
  • core — Make Interactive-Server-Side-Rendered web pages with declarative and recursive programming. This core functionality is reusable for all server modules. — LLGPL
  • definer — A DEF macro for Common Lisp. The DEFINER library adds a simple macro DEF to Common Lisp that replaces the various 'def*' forms in the language. It is a simple hack, but it adds some elegance to the language. Of course, it comes with its own way to be extended. — BSD
  • hunchenissr — Make Interactive-Server-Side-Rendered web pages with declarative and recursive programming. — LLGPL
  • inheriting-readers — Provides a simple yet powerful value inheritance scheme. — Unlicense
  • mailgun — A thin wrapper to post HTML emails through mailgun.com — Unlicense
  • portal — Portable websockets. — LLGPL
  • quicklisp-stats — Fetches and operates on Quicklisp download statistics. — MIT
  • shared-preferences — Notably allows flexible specification of package-local preferences. — Unlicense
  • tclcs-code — Companion code for "The Common Lisp Condition System" — MIT
  • unicly — UUID Generation per RFC 4122 — MIT
  • wallstreetflets — Wall Street FLETs: A library for calculating Options Greeks — GPL v3

Updated projects: access, algae, anaphora, anypool, april, architecture.builder-protocol, atomics, bp, cepl, chanl, cl+ssl, cl-ansi-text, cl-collider, cl-data-structures, cl-fad, cl-fastcgi, cl-gobject-introspection, cl-gserver, cl-ipfs-api2, cl-kraken, cl-liballegro-nuklear, cl-marklogic, cl-mixed, cl-mssql, cl-ssdb, cl-str, cl-telegram-bot, cl-unicode, cl-utils, cl-wave-file-writer, cl-webkit, clack-pretend, clast, clath, closer-mop, cmd, coleslaw, common-lisp-jupyter, concrete-syntax-tree, conium, croatoan, cytoscape-clj, damn-fast-priority-queue, data-lens, djula, easy-audio, fast-generic-functions, flac-metadata, functional-trees, fuzzy-match, garbage-pools, gendl, golden-utils, graph, gtwiwtg, harmony, helambdap, house, ironclad, kekule-clj, lichat-protocol, local-time, markup, math, mcclim, mgl-pax, mutility, named-readtables, nibbles, numpy-file-format, ook, opticl, origin, parachute, paren6, parsley, patchwork, petalisp, phoe-toolbox, picl, plump, pngload, portable-condition-system, postmodern, qlot, read-number, rpcq, sb-cga, sb-fastcgi, sel, select, serapeum, shadow, sheeple, sly, stripe, terminfo, trace-db, trivia, trivial-gray-streams, trivial-mmap, trucler, uax-15, ucons, umbra, uncursed, unix-opts, varjo, vgplot, with-contexts, xhtmlambda, zpb-exif, zpb-ttf.

Removed projects: flac-parser, gamebox-dgen, gamebox-ecs, gamebox-frame-manager, genie, shorty, simple-logger.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!


Planet Lisp | 25-Jan-2021 01:35

Eric Timmons: Common Lisp Docker Images

As alluded to in my previous post, I've been working on a set of Docker images for Common Lisp. The latest version of that effort is now finally live! Check it out at https://common-lisp.net/project/cl-docker-images/. Many thanks are also due to the Common Lisp Foundation for hosting the images on their Docker Hub org!

I've been building Docker images for Common Lisp going on five years now. They were originally hosted on my personal account (daewok) on both Github and Docker Hub. If you use the daewok images, please migrate over to these new ones at your earliest convenience. I plan to stop updating the daewok images later this year.

My original images were... not super good. The images were too big, used the entrypoint incorrectly, had out of date pieces, etc. But over the years I finally grokked Docker best practices and (IMO) they are now a set of high quality images.

I use them regularly for CI/CD purposes. They work very well with Gitlab CI and I also use them locally to make repeatable builds of CL software. I also know that they're in use by others in the community (largely for CI from what I can tell).

There are several other CL related Docker images out there. But, as far as I can tell, none of them are updated as regularly as these, support as many implementations, nor run on as many CPU architectures.

As part of this project, there are currently two classes of images published: "implementation specific" and "development".

The "implementation specific" images are meant to include only a single CL implementation. The default variant for these images is "fat" and has many commonly used OS packages and programs. They are mostly built off the buildpack-deps image (a common base image for programming lanugagues). There are also "slim" variants that include only the CL implementation.

The "development" image is geared toward local interactive development. It is a kitchen-sink image that includes every implementation the project builds an implementation specific image for, as well as all the OS packages needed to load ~every system in Quicklisp, the latest version of ASDF, etc.

The last piece of this project is slime-docker, an Emacs package that automates the starting of CL processes in Docker containers for use with SLIME.

If you'd like to see improvements or additions, would like another implementation to be supported, or have an idea for another category of image, please join in on this project!


Planet Lisp | 24-Jan-2021 05:00

Jonathan Godbout: Proto Cache: Implementing Basic Pub Sub
Today's Updates

In our last post we saw some of the features of the ace.core.defun and ace.core.thread libraries by creating a thread-safe cache of the Any protocol buffer object. Today we are going to update the proto-cache repository to implement publisher/subscriber features. This will allow a publisher to publish a feed of Any messages and a subscriber to subscribe to such a feed.

It is expected (but not required) that the reader has read the previous post Proto Cache: A Caching Story. That post details some of the functions and objects you will see in today’s code.

Note: This is a basic implementation, not one ready for production use. This will serve as our working project going forward.

Code Updates

Proto-cache.asd

We want subscribers to be able to get new versions of an Any protocol buffer message. On the web, the usual way to receive messages is over HTTP. We use the Drakma HTTP client. You can see we added :drakma to the depends-on list in the defsystem.

Proto-cache.lisp

There are three major regions to this code. The first region is the global objects that make up the cache. The second is the definition of a new class, pub-sub-details. Finally the actual publisher-subscriber functions are at the bottom of the page.

Global objects:

The global objects section looks much like it did in our previous post. We update the *cache* hash-table to use equal as its test function and we are going to make the keys to this cache be username strings.

Pub-sub-details class:

The pub-sub-details class contains the data we need to keep track of the publisher and subscriber features:

  • subscriber-list: This will be a list of the HTTP endpoints to send the Any messages to after the Any message is updated. Currently, we only allow for an HTTP message string. Future implementations should allow for security functionality on those endpoints.
  • current-any: The current Any message that the publisher has supplied.
  • mutex: A fr-mutex to protect the current-any slot. This should be read-held to get the current-any and write-held to set a new current-any message.
  • password: The password for the publisher, held as a string.

We shouldn't be saving the password as a plain string in the pub-sub-details class. At a minimum we should be salting and hashing this value. In the future we should implement an account system for readers and subscribers giving access to reading and updating the pub-sub-details. As this is only instructional and not production-ready code, I feel okay leaving it as is for the moment.

We create a make-pub-sub-details function that will create a pub-sub-details object with a given password. The register function doesn't allow the user to set an Any message at creation time, and none of the other slots are useful to the publisher.

We create an accessor method to set the any-message value slot. We also create an :after method to send the Any message to any listening subscribers by iterating through the subscriber list and calling a drakma:http-request. We wrap this in unwind-protect so an IO failure doesn't stop other subscribers from getting the message.
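A sketch of that :after method, reconstructed from the description (cl-protobufs:serialize-to-bytes and the exact drakma arguments are assumptions; the post mentions unwind-protect, while this sketch uses ignore-errors to convey the same intent of not letting one failed endpoint stop the rest):

(defmethod (setf current-any) :after (new-value (psd pub-sub-details))
  "Send the new Any message to every subscriber of PSD."
  (dolist (address (subscriber-list psd))
    ;; Guard each request so one failing endpoint doesn't stop the others.
    (ignore-errors
      (drakma:http-request address
                           :method :post
                           :content (cl-protobufs:serialize-to-bytes new-value)))))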

Finally we add a setter function for the subscriber list.

Function definitions

Register-publisher:

This function is the registration point for a new publisher. It is almost the same as set-in-cache from our previous post except it checks that an entry in the cache for the soon-to-be-registered publisher doesn't already exist. It would be bad to let a new publisher overwrite an existing publisher.

Register-subscriber:

Here we use a new macro, ace.core.etc:clet from the etc package in ace.core.

(defun register-subscriber (publisher address)
  "Register a new subscriber to a publisher."
  (ace:clet ((ps-struct (act:with-frmutex-read (*cache-mutex*)
                          (gethash publisher *cache*)))
             (ps-mutex (mutex ps-struct)))
    (act:with-frmutex-write (ps-mutex)
      (push address (subscriber-list ps-struct)))))

In the code above we search the cache for a publisher entry; if the entry is found then ps-struct will be non-nil and we can evaluate the body, adding the subscriber to the list. If the publisher is not found we return nil.

Update-publisher-any:

(defun update-publisher-any (username password any)
  "Updates the google:any message for a publisher with a specified
username and password. The actual subscriber calls happen in a separate
thread but 'T is returned to the user to indicate the any was truly updated."
  (ace:clet ((ps-class (act:with-frmutex-read (*cache-mutex*)
                         (gethash username *cache*)))
             (correct-password (string= (password ps-class) password)))
    (declare (ignore correct-password))
    (act:make-thread (lambda (ps-class)
                       (setf (current-any ps-class) any))
                     :arguments (list ps-class))
    t))

In the update-publisher-any code we use clet to verify that the publisher exists and that the password is correct. We ignore the correct-password entry though.

We don't want the publisher to be thread-blocked while we send the new message to all of the subscribers, so we update the current-any in a separate thread. To do this we use the ace.core.thread function make-thread. A keen reader will see that on SBCL this calls SBCL's make-thread function, and otherwise it calls the bordeaux-threads make-thread function.

If we are able to find a publisher with the correct password we return T to show success.

Conclusion

In today’s post we have made a basic publisher-subscriber library that will send an Any protocol buffer message to a list of subscribers. We have detailed some new functions that we used in ace.core. We have also listed some of the problems with this library. The code has evolved substantially from the previous post but it still has a long way to go before being production-ready.

Thank you for reading!

Ron Gut, Carl Gay, and Ben Kuehnert gave comments and edits to this post.


Planet Lisp | 20-Jan-2021 17:26

Tycho Garen : Learning Common Lisp Again

In a recent post I spoke about abandoning a previous project that had gone off the rails. Since then I've been doing more work in Common Lisp, and I wanted to report a bit more on some recent developments. There's a lot of writing about learning to program for the first time, and a fair amount of writing about Lisp itself; neither is particularly relevant to me, and I suspect there may be others who find themselves in a similar position in the future.

My Starting Point

I already know how to program, and have a decent understanding of how to build and connect software components. I've been writing a lot of Go (Lang) for the last 4 years, and wrote rather a lot of Python before that. I'm an emacs user, and I use a Common Lisp window manager, so I've always found myself writing little bits of lisp here and there, but it never quite felt like I could do anything of consequence in Lisp, despite thinking that Lisp is really cool and that I wanted to write more.

My goals and rationale are reasonably simple:

  • I'm always building little tools to support the way that I use computers. Nothing is particularly complex, but I'd enjoy being able to do this in CL rather than in other languages, mostly because I think it'd be nice to not do that in the same languages that I work in professionally. [1]
  • Common Lisp is really cool, and I think it'd be good if it were more widely used; writing more of it, and writing posts like this, is probably the best way to make that happen.
  • Learning new things is always good, and I think having a personal project to learn something new will be a good way of stretching my self as a developer. Most of my development as a programmer has focused on
  • Common Lisp has a bunch of features that I really like in a programming language: real threads, easy to run/produce static binaries, (almost) reasonable encapsulation/isolation features.
On Learning

Knowing how to program makes learning how to program easier: broadly speaking programming languages are similar to each other, and if you have a good model for the kinds of constructs and abstractions that are common in software, then learning a new language is just about learning the new syntax and learning a bit more about new idioms and figuring out how different language features can make it easier to solve problems that have been difficult in other languages.

In a lot of ways, if you already feel confident and fluent in a programming language, learning a second language is really about teaching yourself how to learn a new language, which you can then apply to all future languages as needed.

Except realistically, "third languages" aren't super common: it's hard to get to the same level of fluency that you have with earlier languages, and often "third-and-later" languages are learned in the context of some existing code base or project, so it's hard to generalize our familiarity outside of that context.

It's also the case that it's often pretty easy to learn a language well enough to perform common or familiar tasks, but fluency is hard, particularly in different idioms. I'm using CL as an excuse to do kinds of programming that I have more limited experience with: web programming, GUI programming, and using different kinds of databases.

My usual method for learning a new programming language is to write a program of moderate complexity and size but in a problem space that I know pretty well. This makes it possible to gain familiarity, and map concepts that I understand to new concepts, while working on a well understood project. In short, I'm left to focus exclusively on "how do I do this?" type-problems and not "is this possible," or "what should I do?" type-problems.

Conclusion

The more I think about it, the more I realize that "knowing a programming language" is inevitably linked to a specific kind of programming: the kind of Lisp that I've been writing has skewed toward the object-oriented end of the Lisp spectrum, with fewer functional bits than perhaps average. I'm also still a bit green when it comes to macros.

There are kinds of programs that I don't really have much experience writing:

  • GUI things,
  • the front-half of the web stack, [2]
  • processing/working with ASTs, (lint tools, etc.)
  • lower-level kind of runtime implementation.

There's lots of new things to learn, and new areas to explore!

Notes

[1] There are a few reasons for this. Mostly, I think in a lot of cases it's right to choose programming languages that are well known (Python, Java+JVM friends, and JavaScript), easy to learn (Go), and fit in with existing ecosystems (which vary a bit by domain), so while it might be the right choice it's a bit limiting. It's also the case that putting some boundaries/context switching between personal projects and work projects could be helpful in improving quality of life.

[2] Because it's 2020, I've done a lot of work on "web apps," but most of my work has been focused on areas of applications including the data layer, application architecture, core business logic, and reliability/observability, and less on anything material to rendering web pages. Most projects have a lot of work to be done, and I have no real regrets, but it does mean there's plenty to learn. I wrote an earlier post about the problems of the concept of "full-stack engineering" which feels relevant.

Planet Lisp | 18-Jan-2021 01:00

Alexander Artemenko: declt

This is the documentation builder behind the Quickref site. It is good for generating API references for third-party libraries.

Most interesting features of Declt are:

  • Declt uses Texinfo file format for intermediate document store. This makes it possible to generate not only HTML but also PDF and other output formats.
  • It can automatically include license text into the documentation. But this works only for a number of popular licenses like MIT, BSD, GPL, LGPL and BOOST.

As always, I've created a template project, ready to be used:

https://github.com/cl-doc-systems/declt

Here is how it is rendered in HTML:

https://cl-doc-systems.github.io/declt/

And in PDF:

https://cl-doc-systems.github.io/declt/index.pdf

Sadly, Declt does not support markup in docstrings and cross-referencing does not work there.

Some other pros and cons are listed on example site.

Remember, all example projects from https://github.com/cl-doc-systems include a build script and GitHub Action to update documentation on every commit!


Planet Lisp | 17-Jan-2021 14:58

Jonathan Godbout: Proto Cache: A Caching Story
What is Proto-Cache?

I've been working internally at Google to open source several libraries including cl-protobufs and a series of utility libraries we call "ace". I wrote several blog posts making an HTTP server that takes in either protocol buffers or JSON strings and responds in kind. I think I have worked enough on Mortgage Server and wish to work on a different project.

Proto-cache will grow up to be a pub-sub system that takes in google.protobuf:any protos and sends them to users over HTTP. I'm developing it to showcase the ace.core library and the Any proto well-known type. In this post we create a cache system which stores google.protobuf.any messages in a hash-table keyed off of a symbol.

The current incarnation of Proto Cache:

The code can be found here: https://github.com/Slids/proto-cache

Proto-cache.asd:

This is remarkable inasmuch as cl-protobufs isn't required for the defsystem! It's not required at all, but we do require the cl-protobufs.google.protobuf:any protocol buffer message object. Right now we are only adding and getting it from the cache. This allows us to store a protocol buffer message object that any user system can parse by calling unpack-any. We never have to understand the message inside.

Proto-cache.lisp:

The actual implementation. We give three different functions:

  • get-from-cache
  • set-in-cache
  • remove-from-cache

We also have a:

  • fast-read mutex
  • hash-table

Note: The ace.core library can be found at: https://github.com/cybersurf/ace.core

Fast-read mutex (fr-mutex):

The first interesting thing to note is the fast-read mutex. This can be found in the ace.core.thread package included in the ace.core utility library. It allows for mutex-free reads of a protected region of code. One has to call:

  • (with-frmutex-read (fr-mutex) body)
  • (with-frmutex-write (fr-mutex) body)

If the body of with-frmutex-read finishes without anyone calling with-frmutex-write, then the value is returned. If someone calls with-frmutex-write while another thread is in with-frmutex-read, then the body of with-frmutex-read has to be re-run. One should be careful not to modify state in the with-frmutex-read body.

Discussion About the Individual Functions

get-from-cache:

(acd:defun* get-from-cache (key)
  "Get the any message from cache with KEY."
  (declare (acd:self (symbol) google:any))
  (act:with-frmutex-read (cache-mutex)
    (gethash key cache)))


This function uses the defun* form from ace.core.defun. It looks the same as a standard defun except that it has a new declare statement. The declare statement takes the form:

(declare (acd:self (lambda-list-type-declarations) output-declaration))

In this function we state that the input KEY must be a symbol and the return value is going to be a google:any protobuf message. The output declaration is optional. For all of the options please see the macro definition for ace.core.defun:defun*.

The with-frmutex-read macro is also being used.

Note that in the macro's body we only do a simple accessor call into a hash-table. Safety is not guaranteed, only consistency.

set-in-cache:

(acd:defun* set-in-cache (key any)
  "Set the ANY message in cache with KEY."
  (declare (acd:self (symbol google:any) google:any))
  (act:with-frmutex-write (cache-mutex)
    (setf (gethash key cache) any)))

We see that the new defun* call is used. In this case we have two inputs: KEY will be a symbol and ANY will be a google:any proto message. We also see that we will return a google:any proto message.

The with-frmutex-write macro is being used. The only thing done in the body is setting a cache value. If we try to get a message from the cache while setting a message into the cache, it is possible a reader will have to read multiple times. In systems where readers are more common than writers, fr-mutexes and spinlocking are much faster than having readers lock a mutex for every read.

remove-from-cache:

We omit this function in this write-up for brevity.
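For reference, a sketch of what it presumably looks like, mirroring set-in-cache (assumed, since the post omits it; remhash returns a generalized boolean):

(acd:defun* remove-from-cache (key)
  "Remove the any message stored under KEY from the cache."
  (declare (acd:self (symbol) boolean))
  (act:with-frmutex-write (cache-mutex)
    (remhash key cache)))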

Conclusion:

Fast-read mutexes like the one found in ace.core.thread are incredibly useful tools. Having to access a mutex can be slow even in cases where that mutex is never locked. I believe this is one of the more useful additions in the ace.core library.

The new defun* macro found in ace.core.defun for creating function definitions is more mixed. I find a lack of clarity in mapping the lambda list s-expression in the defun statement to the s-expression in the declaration. Others may find it provides nicer syntax and the clarity is more obvious.

Future posts will show the use of the any protocol buffer message.

As usual Carl Gay gave copious edits and suggestions.


Planet Lisp | 12-Jan-2021 22:22

Eric Timmons: Static Executables with SBCL

Common Lisp is an amazing language with many great implementations. The image based development paradigm vastly increases developer productivity and enjoyment. However, there frequently comes a time in a program's life cycle where development pauses and a version must be delivered for use by non-developers. There are many tools available to build an executable in Common Lisp, most of which follow the theme of "construct a Lisp image in memory, then dump it to disk for later reloading". That being said, none of the existing methods fit 100% of my use cases, so this post is dedicated to documenting how I filled the gap by convincing SBCL to generate completely static executables.

Background

There are a variety of reasons to want static executables, but the most common ones I run into personally are:

  1. I want to archive my executables. I want to have a version of my executables saved that I can dig up at any point in the future, long after I've upgraded my OS (multiple times), and run for benchmarking purposes, to test if old versions exhibited specific behavior, etc. without needing to recompile.
  2. I want to enable someone to reproduce my results exactly. This is important for reproducibility in academic contexts. Also, some computing contests that conferences organize prefer static executables so they can run tests on their hardware without needing to set up a complicated run time environment.
  3. I want to make it trivial for someone to install my software. With a static executable, all anyone running on Linux needs to do is download a single file, chmod +x it, and copy it onto their path (preferably after verifying its integrity, but, let's be honest, fewer people do that than should).

There certainly are issues with static executables/linking in general. If you are unaware of what they are, I highly encourage you to read up on the subject before deciding that static executables are the be-all-end-all of application delivery. Static executables are just another tool in a developer's toolbox, to be pulled out only when the time is right.

I'll pause at the moment for a clarification: when I say static executable I mean a truly static executable. As in I want to be able to run ldd on it and have it output not a dynamic executable and I do not want it to call any libdl functions (such as dlopen or dlsym at runtime). While some existing methods claim or imply that they make static executables with SBCL (such as CFFI's static-program-op or manually linking external libraries into the SBCL runtime while building it), they by and large mean they statically link foreign code into the runtime, but the runtime itself is not a static executable.

I have yet to find a publicly documented method of creating a fully static executable with SBCL and it's not too hard to understand why. Creating a static executable requires statically linking in libc and the most common libc implementation for Linux (glibc) does a half-assed job at statically linking itself. While it is possible, many functions will cause your "static" executable to dynamically load pieces of glibc behind your back. Except now you have the requirement that the runtime version must match the compiled version exactly. That defeats the whole point of having a static executable!

For that reason, musl libc is commonly used when creating a truly static executable is important. Unfortunately, musl is not 100% compatible with glibc and for a while SBCL would not work with it. There have been various efforts at patching SBCL to run with musl libc throughout the years, but the assorted (minor!) changes finally got merged upstream in SBCL 2.0.5. This laid the groundwork necessary for truly static executables with SBCL.

Patches

Enough with the blabber, show me the code!

I am maintaining a fork of SBCL that contains the necessary patches. There is a static-executable branch which will always contain the latest version. I plan to rebase this branch on new SBCL releases or on top of upstream's master branch if it looks like I'm going to need to do some extra legwork for an upcoming release. There will also be a series of branches named static-executable-$VERSION which have my patches applied on top of the named version, starting with SBCL 2.1.0.

The patch for any SBCL release is also located at https://www.timmons.dev/static/patches/sbcl/$VERSION/static-executable-support.patch. There is a detached signature available at https://www.timmons.dev/static/patches/sbcl/$VERSION/static-executable-support.patch.asc signed with GPG key 0x9ACF6934.

I would love to get these patches upstreamed, but they didn't get much traction the last time I submitted them to sbcl-devel. Admittedly, they were an early, less elegant version that hadn't seen much use in the real-world. My hope is that other people who desire this capability from SBCL will collaborate to test and refine these patches over time for eventual upstreaming.

Quickstart

Given that most people aren't using musl libc on their development computer, the quickest, easiest way to get a static executable is to build one with Docker. After getting the patchset, simply run the following set of commands in the root of the SBCL repo. This will use the clfoundation/sbcl:alpine3.12 Docker image (another project of mine for a future post) to build a static executable and then copy it out of the image to your host's file system.

docker build -t sbcl-static-executable -f tools-for-build/Dockerfile.static-executable-example .
docker create --name sbcl-static-executable-extractor sbcl-static-executable
docker cp sbcl-static-executable-extractor:/tmp/sb-gmp-tester /tmp/sb-gmp-tester
docker rm sbcl-static-executable-extractor

You should now be able to examine /tmp/sb-gmp-tester to see that it is a static executable:

$ ldd /tmp/sb-gmp-tester
        not a dynamic executable

If all goes well, you should also be able to run it, see the sb-gmp contrib tests all pass (fingers crossed), and realize that this worked because libc, the SBCL runtime, and libgmp were all statically linked!

The file README.static-executable (after applying the patchset) has an example of building locally and a set of docker commands that doesn't require tagging images and naming containers.

How does it work??

This approach requires that the target image be built twice: once to record the necessary foreign symbols, and then again with the newly built static runtime. I can, however, envision ways around this for a sufficiently motivated person.

One way could be to modify the (already in-tree) shrinkwrapping recipe to handle libdl not being available at runtime. I abandoned this approach largely because the shrinkwrapping code is written for x86-64 and does a lot of things with assembly (which I do not know). It is important for me to have static executables on ARM as well. A second way could be to patch out or otherwise improve the check that the runtime version used to build the core matches the runtime version used to run it. I didn't take this approach as it would certainly lead to difficult-to-debug issues if used incorrectly, plus the Lisp code in the core would need to check the presence/usefulness of libdl functions at runtime.

So, how does this patchset work and why does it require two passes? Apologies to the SBCL devs if I completely butcher the explanation of SBCL internals, but here it goes anyways!

Lisp code routinely calls into C code, whether it is to a runtime provided function, a libc function, or another library the user has linked and defined using the sb-alien package or the portable counterparts in CFFI. In order to mediate these calls from the Lisp side, SBCL maintains a linkage table. This table has two components. First is a Lisp-side hash table that maps foreign names (and an indicator of if it is data or a function) to an integer. The second is a C-side vector that contains either the address of the symbol (in the case of data) or the opcodes necessary to call the function (e.g., by JMPing to its address).

The C-side vector is populated by looking up the symbol's address using dlsym. This lookup generally happens under two possible scenarios. First, when the Lisp code defines a foreign symbol it wants to be able to call or read. Second, every time the runtime starts, it populates the C-side entries for every symbol contained in the core's hash-table. This second case is how SBCL handles the dynamic linker changing the address of symbols in between core dumps.

This reliance on dlopen and dlsym is so baked into SBCL at this point that, even though the code is nominally conditioned on the internal feature :os-provides-dlopen, I was unable to build a working SBCL without it (before these patches, of course).

With these patches, you first build your Lisp image that you want to deliver like normal. Then, you load the file tools-for-build/dump-linkage-info.lisp into it. Next, you call sb-dump-linkage-info:dump-to-file to extract the Lisp side linkage table entries into a separate file (filtered to remove functions from libdl). Once you have this file, you rebuild SBCL, this time with the intention of creating a static runtime. To do this, you should provide the following:

  • The environment variable LINKFLAGS should contain -no-pie -static in order to build the static runtime.
  • Any additional libraries you need should be specified using the environment variable LDLIBS.
  • You probably want to set the environment variable IGNORE_CONTRIB_FAILURES to yes.
  • You need to pass the file containing the linkage table entries to make.sh using the --extra-linkage-table-entries argument.
  • Build without the :os-provides-dlopen and :os-provides-dladdr features. One way of doing this is to pass --without-os-provides-dlopen and --without-os-provides-dladdr to make.sh.
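Putting those pieces together, a build invocation might look roughly like the following (the linkage-entries file name and -lgmp are placeholders, and the exact argument syntax may differ):

LINKFLAGS="-no-pie -static" LDLIBS="-lgmp" IGNORE_CONTRIB_FAILURES=yes \
  sh make.sh --fancy \
     --extra-linkage-table-entries linkage-table-entries.sexp \
     --without-os-provides-dlopen --without-os-provides-dladdr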

During the build process, the contents of the --extra-linkage-table-entries file are inserted into the cold SBCL core during second genesis, and a C file is autogenerated containing a single function that populates the C side of the linkage table using the address of every symbol. This C file is then built into the runtime and called while the runtime boots, before it starts executing the core. This means that, if the runtime is a dynamic executable, the system linker will patch up all the references we need at runtime without SBCL needing to call dlsym explicitly. If the runtime is a static executable, then the symbols are statically linked for us and nothing needs to be done at runtime.

Issues

Given how new this approach is, you will certainly run into issues. Many systems that load foreign code will blindly assume that libraries can be linked in at runtime and will fail to work (silently or loudly) if that assumption is not met. Some libraries already have their own homebrew ways of dealing with this. For instance, if the feature :cl+ssl-foreign-libs-already-loaded is present, the cl+ssl system will not attempt to load the libraries. To deal with this issue in a more principled way, I strongly recommend patching systems to use CFFI's (relatively) new canary argument to define-foreign-library.

CFFI itself also has some issues with this arrangement because it dives into some sb-alien internals that simply aren't present on #-os-provides-dlopen. I currently fix this in a kludgy way by commenting out most of %close-foreign-library in src/cffi-sbcl.lisp, but if more people start building static executables, we'll need to come up with a better way of handling it.

Next Steps

I would love to get feedback on this approach and any ideas on how to improve it! I strongly believe that better support for building static executables with SBCL should be upstreamed and I doubt I am alone in that belief. Please drop me a line (etimmons on Freenode or daewok on Github/Gitlab) if you have suggestions.

Personally, I have used earlier iterations of these patches to build static executables for some of my grad school work. My next real deployment of these patches will likely be to build CLPM with them and providing static executables starting with v0.4.


Planet Lisp | 05-Jan-2021 05:00

Eric Timmons: Hello, World!

I've been meaning to start a technical blog for, oh, the last N years or so, but never got around to it. I finally decided that this was going to be the year for it to happen.

Don't be surprised if the look and feel changes significantly over the next couple of months. I was planning to release this at the end of the month, but a recent conversation on #lisp aligned with my planned first post so I decided to use that as a forcing function and get this out the door sooner rather than later!

I'll populate this site with some more of my projects over time. In a nutshell, I am a PhD student in the MIT/WHOI (Woods Hole Oceanographic Institution) Joint Program. My home department at MIT is EECS and I work on automated planning and execution, with an eye on deploying these technologies on Autonomous Underwater Vehicles. I use Common Lisp for a significant amount of my work.

My various handles are:

  • etimmons on Freenode
  • daewok on github.com
  • daewok on gitlab.com
  • etimmons on gitlab.common-lisp.net

Planet Lisp | 05-Jan-2021 04:00

Michał Herda: TIL that Common Lisp dynamic variables can be made locally unbound
;;; let's first define a global variable...
CL-USER> (defvar *foo* 42)
*FOO*

;;; ...and then make a binding without a value using PROGV
CL-USER> (progv '(*foo*) '() (print *foo*))

debugger invoked on a UNBOUND-VARIABLE in thread #<THREAD ...>:
  The variable *FOO* is unbound.

Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE   ] Retry using *FOO*.
  1: [USE-VALUE  ] Use specified value.
  2: [STORE-VALUE] Set specified value and use it.
  3: [ABORT      ] Exit debugger, returning to top level.

((LAMBDA ()))
   source: (PRINT *FOO*)
0] ; look ma, locally unbound!

Planet Lisp | 04-Jan-2021 12:08

Timofei Shatrov: Ichiran@home 2021: the ultimate guide

Recently I've been contacted by several people who wanted to use my Japanese text segmenter Ichiran in their own projects. This is not surprising since it's vastly superior to Mecab and similar software, and is occasionally updated with new vocabulary, unlike many other segmenters. Ichiran powers ichi.moe, which is a very cool webapp that has helped literally dozens of people learn Japanese.

A big obstacle towards the adoption of Ichiran is the fact that it's written in Common Lisp and people who want to use it are often unfamiliar with this language. To fix this issue, I'm now providing a way to build Ichiran as a command line utility, which could then be called as a subprocess by scripts in other languages.

This is a master post how to get Ichiran installed and how to use it for people who don't know any Common Lisp at all. I'm providing instructions for Linux (Ubuntu) and Windows, I haven't tested whether it works on other operating systems but it probably should.

PostgreSQL

Ichiran uses a PostgreSQL database as a source for its vocabulary and other things. On Linux install postgresql using your preferred package manager. On Windows use the official installer. You should remember the password for the postgres user, or create a new user if you know how to do it.

Download the latest release of Ichiran database. On the release page there are commands needed to restore the dump. On Windows they don’t really work, instead try to create database and restore the dump using pgAdmin (which is usually installed together with Postgres). Right-click on PostgreSQL/Databases/postgres and select “Query tool…”. Paste the following into Query editor and hit the Execute button.

CREATE DATABASE [database_name]
  WITH TEMPLATE = template0
       OWNER = postgres
       ENCODING = 'UTF8'
       LC_COLLATE = 'Japanese_Japan.932'
       LC_CTYPE = 'Japanese_Japan.932'
       TABLESPACE = pg_default
       CONNECTION LIMIT = -1;

Then refresh the Databases folder and you should see your new database. Right-click on it then select “Restore”, then choose the file that you downloaded (it wants “.backup” extension by default so choose “Format: All files” if you can’t find the file).

You might get a bunch of errors when restoring the dump saying that “user ichiran doesn’t exist”. Just ignore them.

SBCL

Ichiran uses SBCL to run its Common Lisp code. You can download Windows binaries for SBCL 2.0.0 from the official site, and on Linux you can use the package manager, or also use binaries from the official site although they might be incompatible with your operating system.

However you really want the latest version 2.1.0, especially on Windows for uh… reasons. There’s a workaround for Windows 10 though, so if you don’t mind turning on that option, you can stick with SBCL 2.0.0 really.

After installing some version of SBCL (SBCL requires SBCL to compile itself), download the source code of the latest version and let’s get to business.

On Linux it should be easy, just run

sh make.sh --fancy
sudo sh install.sh

in the source directory.

On Windows it’s somewhat harder. Install MSYS2, then run “MSYS2 MinGW 64-bit”.

pacman -S mingw-w64-x86_64-toolchain make
# for paths in MSYS2 replace drive prefix C:/ by /c/ and so on
cd [path_to_sbcl_source]
export PATH="$PATH:[directory_where_sbcl.exe_is_currently]"
# check that you can run sbcl from command line now
# type (sb-ext:quit) to quit sbcl
sh make.sh --fancy
unset SBCL_HOME
INSTALL_ROOT=/c/sbcl sh install.sh

Then edit Windows environment variables so that PATH contains c:\sbcl\bin and SBCL_HOME is c:\sbcl\lib\sbcl (replace c:\sbcl here and in INSTALL_ROOT with another directory if applicable). Check that you can run a normal Windows shell (cmd) and run sbcl from it.

Quicklisp

Quicklisp is a library manager for Common Lisp. You’ll need it to install the dependencies of Ichiran. Download quicklisp.lisp from the official site and run the following command:

sbcl --load /path/to/quicklisp.lisp

In SBCL shell execute the following commands:

(quicklisp-quickstart:install)
(ql:add-to-init-file)
(sb-ext:quit)

This will ensure quicklisp is loaded every time SBCL starts.

Ichiran

Find the directory ~/quicklisp/local-projects (%USERPROFILE%\quicklisp\local-projects on Windows) and git clone Ichiran source code into it. It is possible to place it into an arbitrary directory, but that requires configuring ASDF, while ~/quicklisp/local-projects/ should work out of the box, as should ~/common-lisp/ but I’m not sure about Windows equivalent for this one.

Ichiran wouldn’t load without settings.lisp file which you might notice is absent from the repository. Instead, there’s a settings.lisp.template file. Copy settings.lisp.template to settings.lisp and edit the following values in settings.lisp:

  • *connection* this is the main database connection. It is a list of at least 4 elements: database name, database user (usually “postgres”), database password and database host (“localhost”). It can be followed by options like :port 5434 if the database is running on a non-standard port.
  • *connections* is an optional parameter, if you want to switch between several databases. You can probably ignore it.
  • *jmdict-data* this should be a path to these files from JMdict project. They contain descriptions of parts of speech etc.
  • ignore all the other parameters, they’re only needed for creating the database from scratch

Run sbcl. You should now be able to load Ichiran with

(ql:quickload :ichiran)

On the first run, run the following command. It should also be run after downloading a new database dump and updating Ichiran code, as it fixes various issues with the original JMdict data.

(ichiran/mnt:add-errata)

Run the test suite with

(ichiran/test:run-all-tests)

If not all tests pass, you did something wrong! If none of the tests pass, check that you configured the database connection correctly. If all tests pass, you have a working installation of Ichiran. Congratulations!

Some commands that can be used in Ichiran:

  • (ichiran:romanize "一覧は最高だぞ" :with-info t) this is basically a text-only equivalent of ichi.moe, everyone’s favorite webapp based on Ichiran.
  • (ichiran/dict:simple-segment "一覧は最高だぞ") returns a list of WORD-INFO objects which contain a lot of interesting data which is available through "accessor functions". For example (mapcar 'ichiran/dict:word-info-text (ichiran/dict:simple-segment "一覧は最高だぞ")) will return a list of separate words in a sentence.
  • (ichiran/dict:dict-segment "一覧は最高だぞ" :limit 5) like simple-segment but returns top 5 segmentations.
  • (ichiran/dict:word-info-from-text "一覧") gets a WORD-INFO object for a specific word.
  • ichiran/dict:word-info-str converts a WORD-INFO object to a human-readable string.
  • ichiran/dict:word-info-gloss-json converts a WORD-INFO object into a “json” “object” containing dictionary information about a word, which is not really JSON but an equivalent Lisp representation of it. But, it can be converted into a real JSON string with jsown:to-json function. Putting it all together, the following code will convert the word 一覧 into a JSON string:
(jsown:to-json (ichiran/dict:word-info-gloss-json (ichiran/dict:word-info-from-text "一覧")))

Now, if you’re not familiar with Common Lisp all this stuff might seem confusing. Which is where ichiran-cli comes in, a brand new Command Line Interface to Ichiran.

ichiran-cli

ichiran-cli is just a simple command-line application that can be called by scripts just like mecab and its ilk. The main difference is that it must be built by the user, who has already done the previous steps of the Ichiran installation process. It needs to access the postgres database, and the connection settings from settings.lisp are currently "baked in" during the build. It also contains a cache of some database references, so modifying the database (i.e. updating to a newer database dump) without also rebuilding ichiran-cli is highly inadvisable.

The build process is very easy. Just run sbcl and execute the following commands:

(ql:quickload :ichiran/cli)
(ichiran/cli:build)

sbcl should exit at this point, and you’ll have a new ichiran-cli (ichiran-cli.exe on Windows) executable in ichiran source directory. If sbcl didn’t exit, try deleting the old ichiran-cli and do it again, it seems that on Linux sbcl sometimes can’t overwrite this file for some reason.

Use -h option to show how to use this tool. There will be more options in the future but at the time of this post, it prints out the following:

>ichiran-cli -h
Command line interface for Ichiran

Usage: ichiran-cli [-h|--help] [-e|--eval] [-i|--with-info] [-f|--full] [input]

Available options:
  -h, --help       print this help text
  -e, --eval       evaluate arbitrary expression and print the result
  -i, --with-info  print dictionary info
  -f, --full       full split info (as JSON)

By default calls ichiran:romanize, other options change this behavior

Here’s the example usage of these switches

  • ichiran-cli "一覧は最高だぞ" just prints out the romanization
  • ichiran-cli -i "一覧は最高だぞ" - equivalent of ichiran:romanize :with-info t above
  • ichiran-cli -f "一覧は最高だぞ" - outputs the full result of segmentation as JSON. This is the one you’ll probably want to use in scripts etc.
  • ichiran-cli -e "(+ 1 2 3)" - execute arbitrary Common Lisp code… yup that’s right. Since this is a new feature, I don’t know yet which commands people really want, so this option can be used to execute any command such as those listed in the previous section.

By the way, as I mentioned before, on Windows SBCL prior to 2.1.0 doesn’t parse non-ascii command line arguments correctly. Which is why I had to include a section about building a newer version of SBCL. However if you use Windows 10, there’s a workaround that avoids having to build SBCL 2.1.0. Open “Language Settings”, find a link to “Administrative language settings”, click on “Change system locale…”, and turn on “Beta: Use Unicode UTF-8 for worldwide language support”. Then reboot your computer. Voila, everything will work now. At least in regards to SBCL. I can’t guarantee that other command line apps which use locales will work after that.

That’s it for now, hope you enjoy playing around with Ichiran in this new year. よろしくおねがいします!


Planet Lisp | 04-Jan-2021 07:12

Alexander Artemenko: atdoc

This is yet another documentation builder for CL.

Its unique features are a special markup language and the ability to render not only HTML but also PDF and Info files.

As always, I've created an example project which can be used as a template for your own library. Here is how it is rendered in HTML:

https://cl-doc-systems.github.io/atdoc/

This is how it is rendered into PDF:

Beautiful, isn't it?

More pros and cons of ATDOC are listed in the repository's README:

https://github.com/cl-doc-systems/atdoc

Remember, all example projects from https://github.com/cl-doc-systems include a build script and GitHub Action to update documentation on every commit!


Planet Lisp | 31-Dec-2020 20:45

Nicolas Hafner: 2020 for Kandria in Review - Gamedev


Well, 2020 has certainly been a year. Given the amount of stuff that's happened, and especially the big changes in my life around Kandria, I thought it would be interesting to write up a review on the entire year. I'm not going to go month by month, but rather just give an overview on the many things that happened and how I feel about it all, so don't be surprised if I jump between things a little bit.

With that said, I want to start this out by thanking everyone for their support throughout the year. It's been really nice to see people interested in the project! I really hope that we can deliver on a good game, though it is going to take a long time still to get there. I hope you can wait for a couple more years!

A year ago Kandria still had its prototype name "Leaf", and I had just gotten done with a redesign of the main character, The Stranger. Much of the visual style of the game had already been defined by then, though, including the shadows. Most of the UI toolkit, Alloy, was also standing at that point. I think it was also then that I decided to do public monthly updates on the project.

I'm glad that I started on that pretty early, as I got a few eyes on the project pretty soon after I had posted things on Gamedev.net. There's a lot more that needs to be done in terms of outreach and marketing, though. Since the Steam launch we've been thinking a lot about how to get a bigger community together and foster active discussion surrounding the project. For now I'll keep doing the monthly summaries and weekly updates on the mailing list. I'll also try to be more active on Twitter and the Discord, but other than that we don't have a solid strategy yet.

The Steam launch and everything with Pro Helvetia leading up to that was a pretty stressful time all in all, when I was already running on fumes from everything else that had been going on. I'm really glad that I decided to afford myself these two weeks of holidays just to get away from it all. I didn't succeed entirely - I've been thinking about Kandria every day in at least some fashion - but I have been working on other projects at least, and been spending a lot of time just playing games, too, so I think I'm at least getting my mind cleared up enough to start fresh into the year next week.

On the topic of Pro Helvetia, the story there began in February, when the Swiss Game Hub had a little presentation on the organisation and its grant programme. With a little push from fellow local devs I decided to take the step and try to apply. This in turn forced a lot of changes as I decided to finally "properly go public". This meant finding a real name, creating a website and trailer, as well as a publicly playable demo, and mailing list to manage the marketing. And of course, polishing everything to actually run on other systems. I also got the Steam app at that point, with the idea of using it for testing distribution, but I only really got that sorted out after the grant submission deadline.

When I applied at Pro Helvetia I didn't expect to get the grant - and as expected, I didn't get it either. However, when we applied for the Swiss Games showcase in November, I did think we had a pretty good shot at it. Getting the message that we were, once again, rejected just two weeks before Christmas was pretty crushing, especially after all the work and rush that went into squeezing out a new trailer, new demo, Steam page, and press kit in time for it. Worst of all though, we weren't given any reason as to why others were selected over Kandria. I've tried contacting them the day after to ask for feedback, but have not heard back from them.

I've never been a confident person, so getting these rejections has been wearing down my already feeble remaining amounts of confidence, which hasn't been great for morale. While I'm not a confident person, I am however a very stubborn person, so despite everything I'm still determined to see this through to the end. Worst comes to worst I'll have to finish it on my own, but even if that came to pass I'd still do it. This is the best shot I've ever had at getting a real game made, and I'm not going to give up on it.

Moving on from these more rough sides of development, there has been a lot of progress this year, though a lot of it was in the innards of the game, and not necessarily on the visible side. That pains me a bit, since the screenshots from a year ago look very similar to the ones from today. I have to keep in mind that even without this, the progress made is necessary and valuable. Anyway, on to what I did do.

I reworked the SteamWorks library to work properly again. I rewrote the sound system stack almost entirely from scratch to allow for more complex effects and to work properly on all platforms. Large parts of the engine had to be rewritten to fix some big issues in how resources and rendering used to be organised. Not directly part of the game, but still important, I made custom mailing list and feedback systems. Hopefully there will be less things like that that I need to do next year, so there's more time for the actual game.

On the side of visible progress, most of it has been surrounding the combat system, and starting on upping the pizzazz by introducing fancy effects and post processing. There's still a lot more to do in that department though. Especially combat needs to have a lot more flair to it - explosions should kick and spray particles around, slashes need to connect visibly, getting hit has to really impact. I've looked at some other games and how they handle combat, and it really does seem that how the combat feels depends, to a much larger degree than one might think, on how many effects are piled on. Sparks, flashes, particles, and especially crunchy sound effects make an enormous difference.

Don't get me wrong though, the animations of the characters themselves are also very important. They have to be fluid and have visible weight that is being thrown around. I struggled tremendously with that when I started out with the combat in Spring and had to do the first animations myself. I'm very glad that I've recruited Fred to take care of that part, as he's done an amazing job at it. The new animations feel a lot more fun, fluid, and real.

Speaking of Fred, one of the biggest changes this year was that I finally decided to put not only my time, but also my money on the line and actually hire some people to expand the team. This is something that was a long time coming. I always knew when I started out that I'd have to eventually expand the team, simply because the scale would require it to get it done in a reasonable amount of time, and because I simply don't trust my own skills well enough to get a great product out of them. That's where the confidence thing comes in again.

The hiring process took an entire month of my time, mostly because there were way more applications than I ever thought there would be, and I wanted to do my due diligence and investigate everyone to a good degree. Ultimately finalising the selection was also difficult for me, and took me over a week of deliberation. I'm happy with the choices I made, but I still wish I had the funds to just hire more people.

Since the game is almost entirely built on a custom stack of software, engine and all, there's a lot of rough edges and corner case bugs that hinder development and cost us a lot of time. I really wish I had the funds to hire another skilled programmer to take care of those so I can focus more on directing the story, art, and general features and level design. Still, we're already on a tight budget that isn't going to last for the entire duration of development unless we can procure additional funding somehow. We've been talking about that a fair bit, too, but there's no clear decision yet.

So far the plan is still to complete a vertical slice in the coming months and then do another planning session to see how things hash out once we have a better idea of the development costs involved and how the overall plot and world will pan out. Then comes another application for the Pro Helvetia grant in September. If we get that, we'll have extended funds for another year, which should hopefully bridge the gap well enough to pull through to the end. If not... well, there's other possibilities that I don't want to really discuss yet as it's all still too uncertain.

As you may know, during most of the development of Kandria so far I was a Master's student at ETH. I've been a student for a long time, since my Bachelor's took me a long time to complete, largely due to not being able to take the stress of taking on too many subjects at once. Most of the classes I either didn't care for, or outright loathed having to work on, so it was not a very merry time. Still, I managed to persevere. Now, in the Master's programme for Computer Science at ETH there's a requirement to complete two of three "interdisciplinary laboratories". You have to complete these regardless of the focus you take, and so regardless of your interests or target skillset. I tried all three, and failed all three, the last two of which I failed this Summer. All three were very hard courses that required a ton of time investment. I did not expect to fail them all. Whatever the case, this, in addition to the strict term limits at ETH, meant that it was not guaranteed I'd be able to complete my Master's even if I did decide to try them again in a year. It would mean spending at least one and a half more years to complete my Master's, if I managed to pass these classes the second time.

I decided that these odds were no longer worth it. University made me miserable, and I was not sure how big of a benefit the degree would be anyway. So I made the big decision to work full time on Kandria, which I have now been doing since September.

Doing this also shifted the project quite a bit though, as now it is no longer a game project I just want to complete on the side; it's now something that has to prove not only possible, but also financially viable, in order to be able to keep doing this. Naturally this places a huge burden on me, and even if I don't want to think about it much, my subconscious still does anyway. This has led to a somewhat unhealthy work/life balance, where I couldn't justify working on other side projects like I used to all this time before, as the thought of "but shouldn't you be working on the game, instead?" always came creeping around the corner.

This has especially been a problem in November and the beginning of December, and is why I've run so badly out of steam. These two weeks of holidays have really been great to get away from that. Still, I'm going to have to figure out some better balance to make this sustainable in the long run. I can't be going on holidays every two months or so after all. At this point I don't yet know how exactly to do this, except that I know I need to weave different projects into my schedule somehow. That's something to figure out in the new year.

Tim and I have already been making some good progress discussing the characters, setting, world, and overall story in December, and I'm really eager to dive back into that and get started on planning out the first section of Kandria for the vertical slice. I also have a bunch of cool ideas for new features and effects to implement. I'm looking forward to diving back into all of that next week, but I'm also cautious about all the challenges we already know about. I really don't want to rush it and end up with something we have to throw away in the end.

This entry has gone on for long enough already, even if there's a lot of details and smaller developments I skipped, so I'll try to bring this to a close. As always, if you want to be kept up to date on the development, sign up for the mailing list!

Tim also wanted to write a little bit about his experience working on Kandria the past two months, so here goes:

It's been a whirlwind two months working on Kandria! I've already gotten heavily involved in writing marketing text, developing the lore, and making a demo quest to learn the dev tools. I'm looking forward to coming back after Christmas and keeping the momentum going for the vertical slice. I expect I'll be getting more hands on with the tools in particular, to write multiple quests for a hub-like area; now I've learned the basics and will have more time, I'll be looking to structure it better as well, using the quest system to its fullest, rather than brute-forcing it with task interactions alone. :)

With that, I think I'll call the yearly round-up done. I hope next year will be better than this one, and am currently being cautiously optimistic about that. I wish everyone out there, and especially you reading this, all the best in 2021!


Planet Lisp | 31-Dec-2020 18:31

Alexander Artemenko: cl-api

This is a small and simple documentation builder. It was removed from Quicklisp in 2014 because this project is SBCL-only, but I've added it to Ultralisp and you can test it after upgrading to the latest version.

CL-API is suitable for building a reference for third-party libraries if they don't have their own documentation. But the lack of ability to process handwritten chapters and to work with package-inferred systems makes it unusable for 40ants projects.

As always, I've created a template repository for you.

Here is an example project's documentation built with CL-API:

https://cl-doc-systems.github.io/cl-api/

Use this template if you are making a small library which needs autogenerated API reference.

Also, you'll find a "Pros & Cons" section in the README:

https://github.com/cl-doc-systems/cl-api

Here you will find template projects for other documentation systems.

Choose whatever best suits your needs:

https://github.com/cl-doc-systems


Planet Lisp | 26-Dec-2020 20:04

Quicklisp news: December 2020 Quicklisp dist update now available

 New projects

  • aether — A DSL for emulating an actor-based distributed system, housed on a family of emulated devices. — MIT (See LICENSE.md)
  • binding-arrows — An implementation of threading macros based on binding anonymous variables — MIT
  • bitfield — Efficiently represent several finite sets or small integers as a single non-negative integer. — MIT
  • cl-bloggy — A simple extendable blogging system to use with Hunchentoot — MIT
  • cl-data-structures — Data structures, ranges, ranges algorithms. — BSD simplified
  • cl-html-readme — A HTML Documentation Generator for Common Lisp projects. — MIT
  • cl-ini — INI file parser — MIT
  • cl-notebook — A notebook-style in-browser editor for Common Lisp — AGPL3
  • cl-unix-sockets — UNIX Domain socket — Apache License, Version 2.0
  • cmd — A utility for running external programs — MIT
  • cytoscape-clj — A cytoscape widget for Common Lisp Jupyter. — MIT
  • damn-fast-priority-queue — A heap-based priority queue whose first and foremost priority is speed. — MIT
  • dataloader — A universal loader library for various data formats for images/audio — LLGPL
  • ecclesia — Utilities for parsing Lisp code. — MIT
  • fuzzy-match — From a string input and a list of candidates, return the most relevant candidates first. — MIT
  • geco — GECO: Genetic Evolution through Combination of Objects A CLOS-based Framework for Prototyping Genetic Algorithms — GPL 2.0
  • gtwiwtg — Lazy-ish iterators — GPLv3
  • gute — Gene's personal kitchen sink library. — MIT
  • lense — Racket style lenses for the Common Lisp. — BSD-2
  • linear-programming-glpk — A backend for linear-programming using GLPK — GPL 3.0
  • mgrs — Convert coordinates between Latitude/Longitude and MGRS. — GPL-3
  • monomyth — A distributed data processing library for CL — MPL 2.0
  • neural-classifier — Classification of samples based on neural network. — 2-clause BSD
  • roan — A library to support change ringing applications — MIT
  • simple-neural-network — Simple neural network — GPL-3
  • stefil- — Unspecified — Unspecified
  • tree-search — Search recursively through trees of nested lists — ISC
  • ttt — A language for transparent modifications of s-expression based trees. — GPLv3
  • utm-ups — Convert coordinates between Latitude/Longitude and UTM or UPS. — GPL-3
  • with-contexts — The WITH-CONTEXT System. A system providing a WITH macro and 'context'ualized objects handled by a ENTER/HANDLE/EXIT protocol in the spirit of Python's WITH macro. Only better, or, at a minimum different, of course. — BSD

Updated projects: 3bmd, 3bz, 3d-matrices, 3d-vectors, adopt, algae, april, arc-compat, architecture.builder-protocol, array-utils, arrow-macros, aws-sign4, bdef, binpack, check-bnf, cl-ana, cl-ansi-text, cl-bunny, cl-catmull-rom-spline, cl-cffi-gtk, cl-collider, cl-conllu, cl-covid19, cl-custom-hash-table, cl-digraph, cl-environments, cl-gamepad, cl-gd, cl-glfw3, cl-gserver, cl-interpol, cl-kraken, cl-liballegro, cl-liballegro-nuklear, cl-libyaml, cl-lzlib, cl-markless, cl-maxminddb, cl-mime, cl-mixed, cl-mongo-id, cl-naive-store, cl-octet-streams, cl-pass, cl-patterns, cl-pdf, cl-portaudio, cl-prevalence, cl-randist, cl-rdkafka, cl-sdl2, cl-sdl2-mixer, cl-semver, cl-sendgrid, cl-setlocale, cl-skkserv, cl-steamworks, cl-str, cl-tcod, cl-telegram-bot, cl-unicode, cl-utils, cl-wavelets, cl-webkit, cl-yaml, clesh, clj, clml, closer-mop, clsql, clweb, colored, common-lisp-jupyter, concrete-syntax-tree, conduit-packages, consix, corona, croatoan, curry-compose-reader-macros, dartscltools, dartscluuid, data-lens, defclass-std, deploy, dexador, djula, docparser, doplus, easy-audio, easy-routes, eazy-documentation, eclector, esrap, file-select, flexichain, float-features, floating-point-contractions, functional-trees, gadgets, gendl, generic-cl, glacier, golden-utils, gtirb-capstone, harmony, helambdap, house, hunchentoot-multi-acceptor, hyperluminal-mem, imago, ironclad, jingoh, jpeg-turbo, jsonrpc, kekule-clj, linear-programming, linux-packaging, lisp-chat, lisp-critic, lisp-gflags, literate-lisp, lmdb, local-package-aliases, local-time, lquery, markup, math, mcclim, millet, mito, mmap, mutility, named-readtables, neo4cl, nibbles, num-utils, origin, orizuru-orm, parachute, pathname-utils, perceptual-hashes, petalisp, phoe-toolbox, physical-quantities, picl, pjlink, portable-condition-system, postmodern, prometheus.cl, protest, protobuf, py4cl, py4cl2, qt-libs, quilc, quri, rcl, read-number, reader, rpcq, rutils, s-graphviz, sc-extensions, secret-values, sel, select, serapeum, shadow, simple-parallel-tasks, slime, sly, snooze, static-dispatch, stmx, stumpwm, swank-client, swank-protocol, sxql, tesseract-capi, textery, tooter, trace-db, trivial-compress, trivial-do, trivial-pooled-database, trivial-string-template, uax-15, uncursed, verbose, vp-trees, weblocks-examples, weblocks-prototype-js.

Removed projects: cl-arrows, cl-generic-arithmetic, clcs-code, dyna, osmpbf, sanity-clause, unicly.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!


Planet Lisp | 21-Dec-2020 02:27

Michał Herda: Quicklisp Stats

Quicklisp statistics are now available as CSV files, and the Quicklisp Stats system that I've just submitted to Quicklisp is a little helper library for handling this dataset and accessing it from inside Lisp.

Examples:

;;; How many times was Alexandria downloaded in Nov 2020?
QUICKLISP-STATS> (system-downloads :alexandria 2020 11)
13731

;;; Get all systems that were downloaded
;;; more than 10000 times in Apr 2020
;;; and print them somewhat nicely
QUICKLISP-STATS> (loop with stats = (month 2020 4)
                       with filtered-stats = (remove-if-not
                                              (lambda (x) (< 10000 (cdr x)))
                                              stats)
                       for (system . count) in filtered-stats
                       do (format t ";; ~20A : ~5D~%" system count))
;; alexandria           : 19938
;; cl-ppcre             : 15636
;; bordeaux-threads     : 14974
;; trivial-features     : 14569
;; split-sequence       : 14510
;; closer-mop           : 14482
;; trivial-gray-streams : 14259
;; babel                : 14254
;; cffi                 : 12365
;; flexi-streams        : 11940
;; iterate              : 11924
;; named-readtables     : 11205
;; cl-fad               : 10996
;; usocket              : 10859
;; anaphora             : 10783
;; trivial-backtrace    : 10693
NIL

;;; How many downloads did Bordeaux Threads
;;; have over all of 2020?
QUICKLISP-STATS> (loop for ((year month) . data) in (all)
                       for result = (a:assoc-value data "bordeaux-threads"
                                                   :test #'equal)
                       do (format t ";; ~4,'0D-~2,'0D: ~D~%" year month result))
;; 2020-01: 16059
;; 2020-02: 12701
;; 2020-03: 17123
;; 2020-04: 14974
;; 2020-05: 14489
;; 2020-06: 13851
;; 2020-07: 14130
;; 2020-08: 10843
;; 2020-09: 13757
;; 2020-10: 13444
;; 2020-11: 15825
NIL

Planet Lisp | 20-Dec-2020 19:18

Alexander Artemenko: eazy-documentation

This is yet another documentation generator for Common Lisp, built by Masataro Asai.

Its unique feature is the documentation processor, which is able to extract docstrings from nonstandard Lisp forms. Also, it supports all markups supported by Pandoc, and can be used to generate documentation from any folder.

You'll find more pros and cons in the template repository I've prepared for you.

Despite many cool features, I have these stoppers for using Eazy Documentation for my own projects:

  • it is hard to control sections ordering;
  • there is no helper for cross-referencing symbols.

MGL-PAX, reviewed recently, is still my favourite.

But Eazy Documentation still can be useful when:

  • the system is small;
  • you just have a number of RST/Markdown/other files and want to make a site;
  • you want to build a doc for a third-party library. It can build a doc for any ASDF system.

Planet Lisp | 19-Dec-2020 22:59

sirherrbatka: Manardb
A few remarks about the manardb.
Planet Lisp | 19-Dec-2020 01:00

Alexander Artemenko: cl-gendoc

This is yet another CL documentation generator by Ryan Pavlik. Its interesting features are:

  • Markdown support out of the box.
  • New markups can be easily added.
  • Code in snippets can be linked to the CLHS.

As always, I've prepared an example project for you:

https://cl-doc-systems.github.io/cl-gendoc/

You'll find there a full list of cl-gendoc's pros and cons.

Here is a short example of a library documentation builder. It includes a few markdown sections and an autogenerated API spec for two packages:

(defun build ()
  (let ((output-filename "docs/build/index.html"))
    (ensure-directories-exist output-filename)
    (gendoc:gendoc (:output-filename output-filename
                    :css "simple.css")
      (:markdown-file "docs/source/index.md")
      (:markdown-file "docs/source/pros-and-cons.md")
      (:markdown-file "docs/source/handwritten.md")
      (:markdown-file "docs/source/reference.md")
      (:apiref :example/app :example/utils))))

cl-gendoc has an interesting macro, define-gendoc-load-op. It forces a documentation build after the asdf:load-system call. Here is how it can be used:

(defsystem example-docs
  :class :package-inferred-system
  :defsystem-depends-on ("cl-gendoc")
  :pathname "docs/scripts/"
  :depends-on ("example-docs/builder"))

(gendoc:define-gendoc-load-op :example-docs :example-docs/builder 'build)

The macro call will expand into:

(progn
  (defmethod asdf/action:perform :after ((o asdf/lisp-action:load-op)
                                         (c (eql (asdf/system:find-system :example-docs))))
    (let ((fn (find-symbol (symbol-name 'build)
                           (find-package :example-docs/builder))))
      (funcall fn)))
  ;; This method makes ASDF think that load-system wasn't successful,
  ;; so a subsequent call will build documentation again.
  (defmethod asdf/action:operation-done-p ((o asdf/lisp-action:load-op)
                                           (c (eql (asdf/system:find-system :example-docs))))
    nil))

Personally, I don't like this hack, but I'm sure there should be a more correct way to use asdf:make to build docs. If you know how to do this, please let me know in the comments.
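
One possible direction - a speculative sketch only, reusing the example-docs names from above and not tested against cl-gendoc - is to hang the documentation build on ASDF's build-op instead of load-op, so that (asdf:make "example-docs") builds the docs without lying to ASDF about load state:

(defsystem "example-docs"
  :depends-on ("example-docs/builder")
  ;; asdf:make performs build-op, which depends on load-op of this
  ;; system and then runs this custom perform method.
  :perform (build-op (o c)
             (uiop:symbol-call :example-docs/builder :build)))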


Planet Lisp | 12-Dec-2020 21:59

Marco Antoniotti: With what are we contextualizing?

Common Lisp programmers may write many with-something macros over their careers; the language specification itself is rife with such constructs: witness with-open-file. Many other libraries also introduce a slew of with- macros dealing with this or that case.

So, if this is the case, what prevents Common Lisp programmers from coming up with a generalized with macro?

It appears that the question has been answered, rather satisfactorily, in Python and Julia (at least). Python offers the with statement, alongside a library of "contexts" (Python introduced the with statement in 2005 with PEP 343) and Julia offers its do blocks.

In the following I will present WITH-CONTEXTS, a Common Lisp answer to the question. The library is patterned after the ideas embodied in the Python solution, but with several (common) "lispy" twists.

Here is the standard - underwhelming - example:

(with f = (open "foo.bar")
   do (do-something-with f))

That's it as far as syntax is concerned (the 'var =' being optional, obviously not in this example; the syntax was chosen to be loop-like, instead of using Python's as keyword). Things become more interesting when you look under the hood.

Traditional Common Lisp with- macros expand into variations of unwind-protect or handler-case (and friends). The example above, if written with with-open-file, would probably expand into something like the following:

(let ((f nil))
  (unwind-protect
      (progn
        (setq f (open "foo.bar"))
        (do-something-with f))
    (when f (close f))))

Python generalizes this scheme by introducing an enter/exit protocol that is invoked by the with statement. Please refer to the Python documentation on contexts and their __enter__ and __exit__ methods.

The "WITH" Macro in Common Lisp: Contexts and Protocol

In order to introduce a with macro in Common Lisp that mimics what Python programmers expect and what Common Lisp programmers are used to, some twists are necessary. To achieve this goal, a protocol of three generic functions is provided alongside a library of contexts.

The ENTER/HANDLE/EXIT Context Protocol

The WITH-CONTEXTS library provides three generic functions that are called at different times within the code resulting from the expansion of the invocation of the with macro.

  • enter: this generic function is invoked when the with macro "enters" the context; its main argument is the result of the expression that is the argument of the with macro.
  • handle: this generic function is called to take care of exceptional situations that may arise during the call to enter or during the execution of the body of the with macro.
  • exit: this generic function is called to "clean up" before exiting the context entered by means of the with macro.

Given the protocol (from now on referred to as the "EHE-C protocol"), the (underwhelming) "open file" example expands into the following:

(let ((f nil))
  (unwind-protect
      (progn
        (setq f (enter (open "contexts.lisp")))
        (handler-case
            (open-stream-p f)
          (error (#:ctcx-err-e-41883)
            (handle f #:ctcx-err-e-41883))))
    (exit f)))

Apart from the gensymmed variable the expansion is pretty straightforward. The function enter is called on the newly opened stream (and is essentially an identity function) and sets the variable. If some error happens while the body of the macro is executing then control is passed to the handle function (which, in its most basic form just re-signals the condition). Finally, the unwind-protect has a chance to clean up by calling exit (which, when passed an open stream, just closes it).

One unexpected behavior for Common Lisp programmers is that the variable (f in the case above) escapes the with construct. This is in line with what Python does, and it may have its uses. The file opening example thus has the following behavior:

CL-prompt > (with f = (open "contexts.lisp")
               do (open-stream-p f))
T

CL-prompt > (open-stream-p f)
NIL

To ensure that this behavior is reflected in the implementation, the actual macroexpansion of the with call becomes the following.

(let ((#:ctxt-esc-val-41882 nil))
  (multiple-value-prog1
      (let ((f nil))
        (unwind-protect
            (progn
              (setq f (enter (open "contexts.lisp")))
              (handler-case
                  (open-stream-p f)
                (error (#:ctcx-err-e-41883)
                  (handle f #:ctcx-err-e-41883))))
          (multiple-value-prog1
              (exit f)
            (setf #:ctxt-esc-val-41882 f))))
    (setf f #:ctxt-esc-val-41882)))

This "feature" will help in - possibly - porting some Python code to Common Lisp.

"Contexts"

Python attaches to the with statement the notion of contexts. In Common Lisp, as far as the with macro is concerned, anything that is passed as the expression to it must respect the enter/handle/exit protocol. The three generic functions enter, handle, exit have simple defaults that essentially let everything "pass through", but specialized context classes have been defined that parallel the Python context library classes.

First of all, the current library defines the EHE-C protocol for streams. This is the straightforward way to obtain the desired behavior for opening and closing files as with with-open-file.
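
Conceptually, the stream specialization could look something like the following sketch (for illustration only; package details are omitted and the library's actual methods may differ):

;; ENTER has nothing to acquire: the stream is already open.
(defmethod enter ((s stream)) s)

;; HANDLE's most basic behavior: just re-signal the condition.
(defmethod handle ((s stream) (c condition)) (error c))

;; EXIT cleans up, mirroring WITH-OPEN-FILE.
(defmethod exit ((s stream))
  (when (open-stream-p s)
    (close s)))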

Next, the library defines the following "contexts" (as Python does).

  • null-context:
    this is a full "pass through" context, just encapsulating the expression passed to it.
  • managed-resource-context:
    this is a first cut implementation of a "managed resource", which also implements an acquire/release protocol; of course, this would become more useful in the presence of multiprocessing (see the notes in Limitations).
  • redirect-context:
    this is a context that redirects output to a different stream.
  • suppress-context:
    this is a context that selectively handles some conditions, while ignoring other ones.
  • exit-stack-context:
    this is a context that essentially allows a programmer to manipulate the "state of the computation" within its body and combine other "contexts"; to achieve its design goal, it leverages a protocol comprising the functions enter-context, push-context, callback, pop-all and unwind (this is equivalent to the Python close() context method).

This should be a good enough base to start working with contexts in Common Lisp. It is unclear whether the Python decorator interface would provide some extra functionality in this Common Lisp implementation of contexts and the with macro.

Limitations

The current implementation has a semantics that is obviously not the same as the corresponding Python one, but it is hoped that it still provides useful functionality. There are some obvious limitations that should be taken into account.

The current implementation of the library does not take into consideration threading issues. It could, by providing a locking-context based on a portable multiprocessing API (e.g., bordeaux-threads).

The Python implementation of contexts relies heavily on the yield statement. Again, the current implementation does not provide similar functionality, although it could possibly be implemented using a delimited continuation library (e.g., cl-cont).

Disclaimer

The code associated to these documents is not completely tested and it is bound to contain errors and omissions. This documentation may contain errors and omissions as well. Moreover, some design choices are recognized as sub-optimal and may change in the future.

License

The file COPYING that accompanies the library contains a Berkeley-style license. You are advised to use the code at your own risk. No warranty whatsoever is provided, the author will not be held responsible for any effect generated by your use of the library, and you can put here the scariest extra disclaimer you can think of.

Repository and Downloads

The with-contexts library is available on Quicklisp (not yet).

The with-contexts library is hosted at common-lisp.net.

The git repository is available from the common-lisp.net GitLab instance, on the with-macro project page.

(cheers)


Planet Lisp | 12-Dec-2020 16:03

Neil Munro: New Site
Welcome to my new site!

Nothing impressive here yet, still setting everything up, dunno why I didn't use github pages before now!


Planet Lisp | 09-Dec-2020 22:33

Michael Fiano: Back To Work

Well, I am already slowly starting to get back into coding me some Lisp games. There just isn't much else to do in my free time in this current global health crisis.

For the last week, I have been mostly scribbling notes on my reMarkable about ways to fix the engine troubles discussed in the last couple of articles. I have a few solutions that look really good on paper, so I'm just starting to explore them in code.

While the problem itself isn't that difficult to solve, the difficulty is in retrofitting the existing engine -- that would be far too much work, both due to its size and complexity, and due to the code quality in general, with zero unit or integration tests.

For that reason, I am going to begin working on a new engine that will share a lot of ideas with the previous one, but will in fact be rewritten from the ground up, with a better architecture and proper tests every step of the way. I'm not going to say much about the new design or what's different until I am confident enough in it, but what I worked out was a way to use structure-objects and arrays in the performance-sensitive areas that were previously using standard-objects and hash tables.

As the project progresses into more than just an idea, I will publish the code on my GitHub as usual. I just wanted to mention that I'm happy to be back, although I am taking precautions as to not get so burnt out again.


Planet Lisp | 07-Dec-2020 17:06

Marco Antoniotti: Iron handling (with Emacs Lisp)

At the beginning of the pandemic I stumbled upon an article regarding the issues that the State of New Jersey was having in issuing relief checks and funding due to the lack of ... COBOL programmers.  At the time I followed a couple of links, landing on this "Hello World on z/OS" blog post.  I was curious and obviously looking for something other than my usual day job; plus, I swear, I had never written any COBOL code.

What follows is a report of the things I learned and how I solved them.  If you are easily bored, just jump to the end of this (long) post to check out the IRON MAIN Emacs Lisp package.

A Foray in the Big Iron Internet

Well, to make a long story short, I eventually installed the Hercules emulator (and other ones - more on this maybe later) in its SDL/Hyperion incarnation and installed the MVS on it; the versions I installed are TK4- and a "Jay Moseley" build (special thanks to Jay, who is one of the most gracious and patient people I interacted with over the Internet).  I also installed other "big iron" OSes, e.g., MTS, on the various emulators and experimented a bit (again, maybe I will report on this later).

It has been a lot of fun, and I discovered a very lively (if grizzled) community of enthusiasts, who mostly gathers around a few groups.io groups, e.g., H390-MVS.  The community is very helpful and, at this point, very similar, IMHO, to the "Lisp" communities out there, if you get my drift.

Anyway, Back to hacking

One way to interact with "the mainframe" (i.e., MVS running on Hercules) is to write your JCL in your host system (Linux, Windows, Mac OS) and then to submit it to a simulated card reader listening over a socket (port 3505, which is meaningful to the IBM mainframe crowd).  JCL code is interesting, as is the overall forma mentis that is required to interact with the mainframe, especially for somebody who was initially taught UNIX, saw some VMS and a few hours of Univac Exec 8. In any case, you can write your JCL, where you can embed whole Assembler, COBOL, Fortran, PL/I etc code, using some editor on Windows, Linux or Mac OS etc.

Of course, Lisp guys know that there is one Editor, with its church. So, what one does is to list-all-packages and install jcl-mo...  Wait...

To the best of my knowledge, as of December 2020, there is no jcl-mode to edit JCL code in Emacs.

It immediately became a categorical imperative to build one, which I did, while learning a bit of Emacs Lisp - that is, all the intricacies of writing modes - and eventually posting it on MELPA.

Writing the IRON MAIN Emacs Lisp Package

Writing a major mode for Emacs in 2020 is simple in principle, but tricky in practice, especially if, like me, you start with only a basic knowledge of the system as a user.

One starts with define-derived-mode and, in theory, things should be relatively easy from there on.  The first thing you want to do is to get your font-lock-mode specifications right.  Next you want to add some other nice visual tools to your mode.  Finally you want to package your code to play nice with the Emacs ecosystem.

Font Lock

Font Lock mode (a minor mode) does have some quirks that make it a bit difficult to understand without in-depth reading of the manual and of the (sparse) examples one finds over the Internet.  Of course, one never does enough RTFM, but I believe a few key points should be reported here.

Font Lock mode does two "fontification" operations/passes.  At least this seems to be the way to interpret them.

  1. A search based one: where "keywords" are "searched" and "highlighted" (read: they are rendered according to the face declared for them).
  2. A syntax table one: where fontification is performed based on properties set for a given character in a syntax table.

To interact with Font Lock, a mode must eventually set the variable font-lock-defaults.  The specification of the object contained in this variable is complicated.  This variable is eventually a list with at least one element (the "keywords"); the optional second one controls whether the syntax table pass (2) is performed or not. I found that the interaction between the first two elements must be carefully planned.  Essentially you must decide whether you want only the search based ("keyword") fontification or the syntax table based (2) fontification too.

If you do not want the syntax table based (2) fontification then you want to have the second element of font-lock-defaults set to non-NIL.

The first element of font-lock-defaults is where most of the action is.  Eventually it becomes the value of the variable font-lock-keywords that Font Lock uses to perform search based fontification (1).  The full range of values that font-lock-keywords may assume is quite rich; eventually its structure is just a list of "fontificators". There are two things to note however, which I found very useful.

First, Font Lock applies each element of font-lock-keywords (i.e., (first font-lock-defaults)) in order.  This means that a certain chunk of text may be fontified more than once.  Which brings us to the second bit of useful information.

Each element that eventually ends up in font-lock-keywords may have the form

(matcher . subexp-highlighter)
where subexp-highlighter = (subexp facespec [override [laxmatch]])

(see the full documentation for more details).

Fontification is not applied to chunks of text that have already been fontified, unless override is set to non-NIL.  In this case the current fontification is applied.  This is very important for things like strings and comments, which may interact in unexpected ways, unless you are careful with the order of font-lock-keywords.
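
To make this concrete, here is a toy example in Emacs Lisp (the regexps and rules are made up for illustration; they are not the actual jcl-mode specifications):

(defconst my-jcl-font-lock-keywords
  '(;; Statement names: group 2 gets the keyword face.
    ("^//\\([A-Z0-9]+\\)?[ \t]+\\(JOB\\|EXEC\\|DD\\)"
     (2 font-lock-keyword-face))
    ;; Comment cards: OVERRIDE is t, so this rule re-fontifies text
    ;; even if an earlier rule already touched it.
    ("^//\\*.*$" (0 font-lock-comment-face t))))

(define-derived-mode my-jcl-mode fundamental-mode "JCL"
  "A toy mode illustrating `font-lock-defaults'."
  ;; Second element non-NIL: search-based fontification only, no
  ;; syntax-table pass.
  (setq font-lock-defaults '(my-jcl-font-lock-keywords t)))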

I suggest you download and use the wonderful library font-lock-studio by Anders Lindgren to debug your Font Lock specifications.

Ruler mode

When you write lines, pardon, cards for MVS or z/OS it is nice to have a ruler to count on that tells you at what column you are (and remember that once you hit column 72 you'd better... continue).  Emacs has a built in nice little utility that does just that: a minor mode named ruler-mode, which shows a ruler in the top row of your buffer.

There is a snag.

Emacs counts columns from 0.  MVS, z/OS and friends count columns from 1.  Popping up the ruler of ruler-mode in a buffer containing JCL (or COBOL, or Fortran) shows that you are "one off": not nice.

Almost luckily, in Emacs 27.x (which is what I am using) you can control this behavior using the variable column-number-indicator-zero-based, which is available when you turn on the minor mode column-number-mode. Its default is t, but if you set it to nil, the columns in the buffer will start at 1, which is "mainframe friendly".  Alas, this change does not percolate (yet - it needs to be fixed in Emacs) to ruler-mode, which insists on counting from 0.
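
For reference, the one-based mode line display is just (stock Emacs 27 variables):

;; Show 1-based columns in the mode line (Emacs 26.1+).
(column-number-mode 1)
(setq column-number-indicator-zero-based nil)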

End of story: some - very minor - hacking was needed to fix the rather long "ruler function" to convince it to count columns from 1.

Packaging

Is there a good way to do this?

It appears that most Emacs "packages" are one-file affairs.  The package I wrote needs to be split up in a few files, but it is unclear (remember that I never do enough RTFM) how to keep things together for distribution, e.g., on MELPA or, more simply, in your Emacs load-path.

What I would like to achieve is to just do a load (or a load-library) of a single file that causes the loading of the other bits and pieces.  It appears that Emacs Lisp does not have an ASDF or a MK:DEFSYSTEM as you have in Common Lisp (I will be glad to be proven wrong), so, as my package is rather small after all, I resorted to writing a main file that is named after the library and which can thus be referenced in the -pkg.el file that Emacs packaging requires.  I could have used use-package, but its intent appears to be dealing with packages that are already "installed" in your Emacs environment.

MELPA comes with its recipe format to register your package; it is a description of your folder structure and it is useful, but it is something you need to submit separately to the main site - let me add, in a rather cumbersome way. Quicklisp is far friendlier.

One other rant I have with the Emacs package distribution sites (e.g., MELPA and El-Get) is that eventually they assume you are on UN*X (Linux) and require you to have installed bits and pieces of the traditional UN*X toolchain (read: make) or worse.  I am running on W10 these days and there must be a better way.

Bottom line: I created a top file (iron-main.el) which just sets up a few things and requires and/or loads the other files that are part of or needed by the package.  One of the files contains the definition of a minor mode called iron-main-mode (in an eponymous .el file).

I am wondering whether this is the best way of doing things in Emacs Lisp.  Please tell me in the comments section.

The IRON MAIN Emacs Lisp Package

At the end of the story, here is the link to the GitHub repository for the IRON MAIN Emacs package to interact with the mainframe.

As you see the package is rather simple.

It is essentially three files plus the "main" one and a few ancillary ones.

  • iron-main.el: the main "loader" file.
  • iron-main-mode.el: the minor mode invoked by the other major modes defined below.
  • jcl-mode.el: a major mode to handle JCL files (pardon, datasets).
  • asmibm-mode.el: a major mode to handle IBM Assemblers.

One of the nice things I was able to include in jcl-mode is the ability to submit the buffer content (or another .jcl file, pardon, dataset) to the mainframe card reader listening on port 3505 (by default, assuming such a card reader has been configured).

This turns out to be useful, because it allows you to avoid using netcat, nc.exe or nc64.exe, which, at least on W10, always trigger Windows Defender.  Plus everything remains integrated with Emacs.  Remember: there's an Emacs command for that!
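
The core of such a submit command can be quite small; here is a rough Emacs Lisp sketch (names and defaults are illustrative, not the actual jcl-mode code):

(defun my-jcl-submit (&optional host port)
  "Send the current buffer to a simulated card reader at HOST:PORT."
  (interactive)
  (let ((proc (open-network-stream "jcl-submit" nil
                                   (or host "localhost")
                                   (or port 3505))))
    (process-send-region proc (point-min) (point-max))
    (process-send-eof proc)))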

To conclude, here are two screenshots (one "light", one "dark") of a test JCL included in the release. Submitting it from Emacs to TK4- and to a "Jay Moseley" build seems to work pretty well. Just select the Submit menu under JCL OS or invoke the submit function via M-x.



What's next?  A few things apart from cleaning up, like exploring polymode; after all, embedding code in JCL is not unheard of.

That's it.  It has been fun and I literally learned a lot of new things.  Possibly useful.

If you are a mainframe person, do jump on the Emacs bandwagon.  Hey, you may want to write a ISPF editor emulator for it :)

 

(cheers)

MA


Planet Lisp | 06-Dec-2020 20:10

Nicolas Hafner: Kandria is now on Steam! - December Update


Kandria now has a Steam page! Please visit and wishlist! It would help us a lot to get the game promoted on Steam. As a result of the Steam page and other unexpected changes, this month was pretty hectic, too, so there's a lot to talk about.

Nick - Marketing, Bugfixing, Video Editing, Artworking, Tweaking, Many-things-ing

Very early in the month, we found out that the Pro Helvetia interactive media grant was not going to happen in March of next year like we expected, but rather in September. This was some troubling news, as we had planned our production around that date, and more seriously, I had planned the funding around that as well. It's not like the grant moving would mean Kandria can't be finished - I'm determined to see that through to the end - it's more that with my initial budget allocation the grant timing would have been ideal to keep Fred and Tim on the project continuously if we got the grant.

Now that things have moved around, I'll have to scrounge up some more money to keep them employed on my own dime. It'll be fine, but it was just a bit of a shocking reveal that threw me for a loop. Alongside that revelation though was the announcement of the "Swiss Games Showcase" (https://swissgames.ch/swissgames-showcase/), a newer project that they have which offers mentorship from industry experts for a select number of Swiss game projects. The deadline for application to the showcase was 30th of November, which only gave us a few weeks to scrounge everything together.

The application required a press kit and a pitch video, which we got to work on almost immediately. However, in addition to this I felt it would be best if we still tried to land the new 0.0.4 demo release that had been announced, and got a Steam page published with the new material. The Steam page especially would let us start on gathering wishlists already, and garner some more visibility on another platform.

Publishing a Steam page requires quite a few things though: a trailer video and screenshots, capsule images for the store and the Steam library, and a captivating description text. While Tim started work on the press kit and Fred got going on a first enemy design, I got to work on the Steam artwork:

The style in these drawings is quite different from what I usually do, so it was really challenging work for me. All things considered though, I'm pretty happy with how they turned out! Having proper artwork like this also really helps with promotional material, which is an added bonus for sure.

Next I had to scrounge together a pitch video. The video had to include several points about the game's marketing strategy, financing, and so forth. To help with this I also spent an afternoon doing some market research into similar games on Steam and their general performance. What I found out in doing that is that the combination of platformer and hack & slash is a rather rare one, especially one that includes actual precision platforming, rather than platforming as a necessary byproduct of being side-scrolling. This indicates that we're aiming for a market niche, which is a good thing for smaller indie titles like Kandria.

The video also required narration and gameplay footage, all of which had to be recorded and cut together in a pleasing manner. I'm pretty happy with the end result, though I don't think we can use it for any public marketing material. If you're interested anyway, you can see the video here.

In between all of this there were bugs in the game and especially its tools that needed to be fixed. The tools especially have been giving me some grief. Everything being custom made is nice when it works, not so nice when it doesn't since you know it's all your fault. Being in a rush is also always a good way to make the most annoying bugs surface, because that's just how these things go. I'll probably spend some time intermittently in the next few weeks fixing the most egregious problems in the tooling.

The Swiss Games Showcase application then finally went out last week. We haven't heard back from them yet, but hopefully we should know whether we've been accepted before we go in for the holiday break. Fingers crossed!

After all that I got started on a new tileset for an area we already know is going to be important: the desert. Given the rather eccentric purple look of the tundra area, I thought it would be a good idea to keep that sort of thing up for all the remaining areas as well. Gives the game a more interesting and unique look, for sure.

The tileset barely covers the essentials at the moment, but it's already looking pretty decent, and it's really nice to get a break from the tundra environment I've had to look at for over a year now.

Since we were running out of time I re-used a lot of the footage from the mentorship video and interspersed it with new stuff from the desert environment to craft the trailer for the Steam page. This all only got done in the past week, so I've really been scrambling to get things done!

Finally, leading up to all of this I've also been trying to be more active in promoting the game and posting stuff to my Twitter. I'm not reaching any high numbers or anything yet, but I do think it'll help to do this stuff more often. After all, posting more frequently, even with fewer shares, still means more opportunities for other people to see it!

In any case, it's been a bunch of really stressful weeks for me and it has been taking its toll, too. I'm really glad the Steam page is finally out. It feels like a big step forward, but at the same time there's also so, so much work left to be done, it's kind of surreal for me to think about it. I'm really looking forward to being able to wind down a bit in the coming weeks, and especially to being able to take my mind off of things during the holiday break. I heard there was a game coming out soon, what was it again? Cyber... something? Might want to check that out then.

As usual, you can get the new demo release from the mailing list! If you're already subscribed, you should have gotten a reminder email with the download link as well.

Oh, and I just noticed that it's now been a full year of monthly updates! Hoorah.

Fred - Animation Tweaks, Sfx, Enemy Design

Most of the work I did this month has been on designing and implementing the new, first enemy type, adding player animations, and tweaking the combat framing.

There's still a lot left to do there to get the feeling of it right. I think a large part of that is the effects and stuff though, so I also got started on that. I had a lot of fun doing explosions, as it's an effect I've done a bunch and it's always fun to kinda re-explore it. Next I'm trying to figure out the other effects' design style for things like the hard fall, sword slashes, and so on.

Tim - Copywriting and Questing

It's been a beneficial couple of weeks for getting a handle on the game's marketing tone and quest tools. I've worked on the Steam page copy, which went through several drafts with Nick, and which I'm really happy with. I thought I knew about writing Steam pages, but I've learned lots from reading helpful online guides and tips from other devs, as well as studying the pages of games in a similar vein.

Some of this content has been retconned into the presskit too, so it reads its best for the Pro Helvetia application. To that end, I also fed back to Nick on the video script for the app, and I think we've ended up with a really cool summary of what the game is and where it's going.

On the game side I've been having great fun in the level editor. I am now much more confident navigating it, and I even made a new room or "chunk" for the demo quest, mapping the basic layout and painting down the tileset. Ah, maybe one day I'll be an artist... (No I won't).

The quest itself is somewhat of a "my first Kandria quest" scenario, though I'm quite pleased with how it's turned out. I basically took the framework Nick had already scripted in the Markless language, and then changed the structure and content to suit the design I'd planned on paper. It generally fits within the constraints of what was already there, but there's nothing like constraints to get you being creative! I'm finding Lisp quite an unusual syntax to get used to, so I banged my head against the wall a little bit, but Nick was there to save the day.

I'm really pleased with the end result though. The characters are showing glimmers of life; I had fun writing The Stranger's scene-investigation lines, as well as snappy back and forths with Fi. The quest even attempts an emotional punch - anyone playing the demo can let me know if that worked for you or not. The toolset is also great for rapid testing and iteration, which is vital for such a non-linear approach to questing as Kandria has. Only once you play do you go "Ahh, that line doesn't make sense anymore if you read that other line first...". So in short: good tools are your friend :)

The Plan

With the Pro Helvetia application out, the Steam page done, and the 0.0.4 demo released we've checked off all the points on our previous roadmap. For the remaining two work weeks of December, we're going to look at planning and conceptualising things. This means we'll work out major story beats, world building, gameplay areas, and side characters. After that there'll be a well-deserved break for two weeks, during which I'll try my best to clean out my head so that I can start fresh into the new year, ready to work on Kandria with a lot more energy.

January, February, and possibly March will be spent working on the vertical slice, so there won't be any further demo updates until that's done. Doing so will give us a lot of insight into the production process - we should have a much better idea of the scope of the game itself, and how much time it takes us to actually produce the necessary content, as well. This will be vital in shaping the future production scheduling. It should also serve as a good testing ground for all the mechanics and base features, our team work, and the testing feedback.

Until then, I hope you'll have a good holiday season, stay safe, and see you again in the new year! Or, if you're on the mailing list, in the next weekly!

If you haven't done so yet, check out our Steam page and wishlist Kandria! It would really help us out a lot.


Planet Lisp | 06-Dec-2020 18:58

Jonathan Godbout: Mortgage Server on a Raspberry Pi

In the last post we discussed creating a server to calculate an amortization schedule that takes and returns both protocol buffer messages and JSON. In this post we will discuss hosting this server on a Raspberry Pi. There are some pitfalls, and the story isn't complete, but it's still fairly compelling.

What We Will Use:

Hardware:

We will use a Raspberry Pi 3 model B as our server. We will use the stock operating system Raspbian. This SOC has a quad-core 64-bit processor with floating point on chip. The operating system itself is 32-bit, which makes the processor run in 32-bit mode.

Software:

We will be using SBCL as our Common Lisp, CL-PROTOBUFS as our protocol buffer and JSON library, and Hunchentoot as our web server.

Problems

1. SBCL on Raspbian

When trying to run the mortgage-info server on Raspbian the first error I got was an inability to load the lisp file generated by protoc. On contacting Doug Katzman he noted I was running an old version of SBCL. The Raspbian apt-get repository has an old version of SBCL. If someone desires to run SBCL on a Raspberry Pi they should follow the binary installation instructions here: http://www.sbcl.org/getting.html.

2. CL-Protobufs on a 32-Bit OS

The cl-protobufs library has been optimized to run on a 64-bit x86 platform. The Raspberry Pi environment is 32-bit ARM. As noted before, the 32-bit ARM environment is supported by SBCL. I don't think anyone has attempted to run cl-protobufs on the 32-bit ARM environment running SBCL. After modifying cl-protobufs.asd to have float-bits.lisp loaded on SBCL when not running in 64-bit mode, we could quickload mortgage-info into a repl.

3. Bugs in the mortgage-info repo  

There were several bugs I fixed in my very limited testing of the mortgage-info repo, as well as some bugs that still exist.

  1. When trying to set numbers in the proto message structs I had to coerce them to double-float. I'm not sure why… This works on SBCL running on the x86-64 without the coercions.
  2. A division by 0 bug if the entered interest rate is 0 (see the sketch after this list).
  3. The possibility of having 0 as the number of repayment periods. I added an assertion so we will return a 500 stating the assertion was hit. We should have a more graceful error message than a stack trace, but this is currently only a proof of concept.
  4. The mortgage.proto file had interest as an integer, but interest is usually a float divisible by .125. 
  5. We have rounding problems if the interest rate is too high (say 99%): we only ever pay interest and the amount owed never goes down, at least with a 300-payment period. This is most likely due to rounding, since we do not accept fractional pennies. That is okay; if the national interest rate ever got anywhere near 99% we would have BIG problems.
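
For item 1, the workaround looked something like this hypothetical snippet (the accessor follows the mf package nickname used later in this post; the exact field doesn't matter):

;; On 32-bit ARM SBCL, setting a double-float proto field from an integer
;; signaled a type error, so an explicit coercion was needed:
(setf (mf:interest request)
      (coerce interest-rate 'double-float))
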
CL-protobufs on the Pi

I have cl-protobufs running on SBCL on the Raspberry Pi, but some of the tests don't pass. I'm not sure if it would work on a 64-bit OS on the Raspberry Pi; I don't have the inclination to get a 64-bit OS for my Pi. If you do, please tell me what happens!

I wasn't able to get CCL on 32-bit ARM to load cl-protobufs. It gives an error saying it doesn't have ASDF 3.1. Quickloading ASDF, I get an undefined function error for version.

Trying to run ABCL led me to yet another bug: https://github.com/armedbear/abcl/issues/359

Running Server

My Raspberry Pi is running at: http://65.96.161.53:4242/mortgage-info

Feel free to send either JSON or protobuf messages to the server.

Example JSON:

{
  "interest": 3,
  "loan_amount": 380000,
  "num_periods": 300
}

I don't know how long I will keep it running. If it goes down and you are interested in sending it messages please send me an email.

Ron, Carl, and Ben edited this post (as usual). Doug provided a great deal of help with SBCL on ARM 32.


Planet Lisp | 05-Dec-2020 22:25

Alexander Artemenko: codex

This is a documentation generator by Fernando Boretti.

It is built for extensibility and uses CommonDoc as an internal format for document representation and manipulation.

Another interesting feature is the custom markup language Scriba (reviewed in post #0178). Actually, Codex supports any number of markup languages; you just need to write a parser from your markup into CommonDoc.

The sad part is that there is no support for cross-referencing and there are a number of bugs :(

As always, I've created a template project for you, demonstrating how to use Codex to document a simple Common Lisp library:

https://github.com/cl-doc-systems/codex

Read about other pros and cons on this page:

https://cl-doc-systems.github.io/codex/pros-&-cons-of-codex.html


Planet Lisp | 03-Dec-2020 20:02

Michael Fiano: Follow-up to Gamedev, Sleep, Repeat

After the last hastily constructed stream-of-consciousness post, I feel like I didn't explain some things very well.

I mentioned that I have been failing for about 10 years. This isn't completely accurate, as I have both learned a lot and been able to re-use that knowledge, along with a lot of mathematics and code, in later attempts. Writing a game engine as part of a small team is difficult, and this is expected.

I have been writing games and game engines for 25 years. Why? Because it's fun, and an endless journey of knowledge. I am less interested in making games, and more interested in the design of game engines. A game engine is interesting to me because it requires discipline in many fields of study, and each implementation is different. The thing is, a game engine is a piece of software that manages the data flow for a particular game, or a particular category of games. It is nothing more than a set of choices someone made for you in order to write games in a particular way. Any given game engine could be productive or counter-productive in creating your game. Even using a general-purpose game engine like Unity or Unreal is a trade-off, and for a significant game, you'll find you still have to work around or reimplement core engine features at the 11th hour to get your game shipped.

I mentioned I work in a small team writing game engine code. Yes, there are three other developers working with me to write a game engine in Common Lisp. It is a different project than the engine mentioned in the previous post, and serves different game developer needs. Half of the people on this team are currently part of a games studio that professionally uses Unity and has released real games, both as part of the studio and individually, and the reason they want to make an engine in Common Lisp is the numerous shortcomings of that engine. Even with millions of dollars and an endless army of developers, a particular game engine still may not work for you, and could be more of a hindrance than starting from scratch, especially if you already have experience in the black arts.

As mentioned, I enjoy working out the math and architectural decisions involved in a complicated piece of software such as a game engine far more than making games. It is why I have made several (perhaps a dozen) game engines in Common Lisp -- that's what I find fun. Occasionally, I get a good idea for a game, and I stop to try using one of these engines to execute that idea. This latest attempt was trying to use an engine designed for one particular type of game for another, so it is no wonder it wasn't suited to the performance (and some feature) characteristics required.

Even if one has an engine particularly suited for the type of game they want to make, making games is hard, and requires lots of discipline in many different fields, not just maths and computer science. Content is king, and asset creation accounts for a lot of the work, in addition to all the game logic and making it all well-balanced. This can only come after a seemingly never-ending tweak, play-test, tweak feedback loop in most situations, for a moderately sized game. This large cost in writing a game is one reason why lots of people reach for a ready-made engine instead of doing things themselves, and there is no harm in that.

While there are some promising engines and tools that can be built upon for Common Lisp, none of them have been battle-tested, or they are otherwise not very usable out of the box for a sizable game idea. This has led me and a few others to try changing that, slowly but surely. Common Lisp is an excellent platform for a game engine, despite what some may think. Common Lisp can be extremely performant, and ultimate runtime performance is not usually required for games these days anyway. Good game design is about finding a balance between the CPU and GPU, and with concurrency and the very little work most games have to do on the CPU relative to the GPU, it really isn't a problem, unless there is a complex, non-discrete physics system involved or tens of thousands of nodes in your scene. If it ever is, you can shift work between the two processors in a lot of cases.

Where Common Lisp really shines for making games is in the interactivity the whole language is designed around. Generic functions, while much slower than regular functions, are a cost we're usually willing to pay. Macros, and designing all of the DSLs a game requires for describing data, are dead-simple in Common Lisp. Hot code reloading is one of my favorite features. Being able to recompile individual functions, DSLs, etc., as a game is running, without requiring custom support from a game engine editor, is the biggest time-saver for me.

For example, I could write a DSL to describe an entity along with all of its properties, and any children and their properties, and so on. Then I could create an instance of this sub-tree and stick it somewhere in the game world. Maybe I'll add 100 instances, each at a different location. Then I could go back to the DSL and decide that I want them all to have an additional child node, so I add it, hit a button, and just like that they're all updated in the game world. Similarly, if one of their textures doesn't look quite right, I could recompile another DSL that describes the texture, to have all uses of it updated in the game world. This workflow is very welcome after coming from a language that forces you to stop the game, recompile everything, restart the game, and get back to where you were. After all that is said and done, it is very difficult to know if your changes made an impact for the better; often you are making small color adjustments, or other shader program adjustments that are hard to notice but better nonetheless.
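
To give a flavor of what such a DSL might look like, here is a made-up sketch; none of these names come from the actual engine, they are invented purely for illustration:

;; A hypothetical prefab DSL: an entity, its components with properties,
;; and a child entity. Recompiling this one form would update every
;; instance already placed in the game world.
(define-prefab torch ()
  (transform :translate (vec 12 4))
  (sprite :texture 'torch-sheet)
  (:children
   (flame
    (sprite :texture 'flame-sheet)
    (light :radius 3.5 :color (vec 1.0 0.6 0.2)))))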

These are some of the reasons why I love Common Lisp, and why I love making game engines and games in Common Lisp. Just because I got burnt out for a bit doesn't mean I'm done or have given up. This is a lifelong journey of mine, because I find it fun and a pleasure to work with such a dynamic, interactive, and (when it needs to be) fast language.

People like Nicolas Hafner (Shinmera), Chris Bagley (Baggers), and Pavel Korolev (borodust) are inspirations, who have also devoted themselves to game development in Common Lisp, with great success. I wish them all the best of luck, and my thanks for giving me the will to continue after so many years. I would also like to sincerely thank all of the people who have been supportive over the years, and special thanks to those few of you who have sponsored me. Thank you, everyone!


Planet Lisp | 02-Dec-2020 09:24

Jonathan Godbout: Lisp Mortgage Calculator Proto with JSON

I've finally found a house! Like many Googlers from Cambridge, I will be moving to Belmont, MA. With that being said, I have to get a mortgage. My wife noticed we don't know much about mortgages, so she decided to do some research. I, being a mathematician and a programmer, decided to make a basic mortgage calculator that will tell you how much you will pay on your mortgage per month and give you an approximate amortization schedule. Due to rounding, it's impossible to give an exact amortization schedule for every bank.

This post should explain three things:

  1. How to calculate your monthly payment given a fixed rate loan.
  2. How to create an amortization schedule.
  3. How to create an easy acceptor in Hunchentoot that takes either application/json or application/octet-stream.
Mathematical Finance

The actual formulas here come from the Pre Calculus for Economic Students course my wife teaches. The book is:

Applied Mathematics for the Managerial, Life, and Social Sciences, Soo T. Tan, Cengage Learning, Jan 1, 2015 – Mathematics – 1024 pages

With that out of the way we come to the Periodic Payment formula. We will assume you pay monthly and the interest rate is quoted for the year but calculated monthly, so for a yearly rate $i$ the monthly rate is $r = i/12$.

Example: interest rate of 3%, loan amount $100,000. First month's interest = $100,000 * (.03/12) = $100,000 * .0025 = $250.

For a loan amount $A$, monthly rate $r$, and $n$ monthly payments, the periodic payment $M$ is

$$M = \frac{A r}{1 - (1 + r)^{-n}}$$

I am not going to prove this, though the proof is not hard. I refer to the cited book, section 4.3.

With this we can compute the amortization schedule iteratively. Write $B_j$ for the balance remaining after month $j$, so that $B_0 = A$. The interest paid for the first month is

$$I_1 = r A$$

The payment toward principal for the first month is

$$Q_1 = M - I_1$$

The interest paid for month $j$ is

$$I_j = r B_{j-1}$$

The payment toward principal for month $j$ is

$$Q_j = M - I_j, \qquad B_j = B_{j-1} - Q_j$$

Since $I_j$ relies only on $B_{j-1}$, $Q_j$ only on $I_j$, and $B_0 = A$ is defined, we can compute them for any value of $j$ we wish!
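
Transcribed into Lisp, the payment formula looks like this (a quick sketch of mine, not code from the mortgage-info repo; the names are invented):

(defun monthly-payment (loan-amount annual-rate num-periods)
  "Fixed-rate payment M = A*r / (1 - (1+r)^-n), where r = ANNUAL-RATE/12
is the monthly rate and n = NUM-PERIODS."
  (let ((r (/ annual-rate 12)))
    (/ (* loan-amount r)
       (- 1 (expt (+ 1 r) (- num-periods))))))

;; (monthly-payment 380000d0 0.03d0 300) => roughly 1802.0 dollars per month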

Creating the Mortgage Calculator

We will be creating a Hunchentoot server that will receive either JSON or octet-stream Protocol Buffer messages and return either JSON or octet-stream Protocol Buffer messages. My previous posts discussed creating Hunchentoot acceptors and integrating Protocol Buffer messages into a Lisp application. For a refresher please visit my Proto Over HTTPS post.

mortgage.proto

When defining a system that sends and receives protocol buffers you must tell your consumers what those messages will be. We expect requests in the form of the mortgage_information_request message, and we will respond with a mortgage_information message.

Note: With the cl-protobufs.json package we can send JSON requests that look like the protocol buffer message. So sending in:

{ "interest":"3", "loan_amount":"380000", "num_periods":"300" }

We can parse a mortgage_information_request out of it. We will show how to do this shortly.

mortgage-info.lisp Server Code:

There are two main portions of this file, the server creation section and the mortgage calculator section. We will start by discussing the server creation section by looking at the define-easy-handler macro.

We get the post body by calling (raw-post-data). The body can be either JSON or serialized protocol buffer format, so we inspect the content-type HTTP header with

(cdr (assoc :content-type (headers-in *request*)))

If this header is “application/json” we turn the body into a string and call cl-protobufs.json:parse-json:

(let ((string-request (flexi-streams:octets-to-string request)))
  (cl-protobufs.json:parse-json
   'mf:mortgage-information-request
   :stream (make-string-input-stream string-request)))

Otherwise we assume it's a serialized protocol buffer message and we call cl-protobufs:deserialize-from-stream.

The application code is the same either way; we will briefly discuss this later.

Finally, if we received a JSON object we return a JSON object. This can be done by calling cl-protobufs.json:print-json on the response object:

(setf (hunchentoot:content-type*) "application/json")
(let ((out-stream (make-string-output-stream)))
  (cl-protobufs.json:print-json response :stream out-stream)
  (get-output-stream-string out-stream))

Otherwise we return the response serialized to an octet vector using cl-protobufs:serialize-to-bytes.
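
Putting the pieces together, the handler's skeleton looks roughly like this (a condensed sketch, not the literal mortgage-info.lisp; I use deserialize-from-bytes here since raw-post-data already gives us an octet vector, while the post uses deserialize-from-stream):

(hunchentoot:define-easy-handler (mortgage-info :uri "/mortgage-info") ()
  (let* ((json-p (equal (cdr (assoc :content-type
                                    (hunchentoot:headers-in hunchentoot:*request*)))
                        "application/json"))
         (request-bytes (hunchentoot:raw-post-data))
         ;; Parse the body according to its content type.
         (request
           (if json-p
               (cl-protobufs.json:parse-json
                'mf:mortgage-information-request
                :stream (make-string-input-stream
                         (flexi-streams:octets-to-string request-bytes)))
               (cl-protobufs:deserialize-from-bytes
                'mf:mortgage-information-request request-bytes)))
         ;; The application code is the same either way.
         (response (populate-mortgage-info (mf:loan-amount request)
                                           (mf:interest request)
                                           (mf:num-periods request))))
    ;; Answer in the same format the caller used.
    (if json-p
        (progn
          (setf (hunchentoot:content-type*) "application/json")
          (let ((out (make-string-output-stream)))
            (cl-protobufs.json:print-json response :stream out)
            (get-output-stream-string out)))
        (cl-protobufs:serialize-to-bytes response))))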

Application Code:

For the most part, the application code is just the formulas described in the mathematical finance section but written in Lisp. The only problem is that representing currency as double-precision floating point is terrible. We make two simplifying assumptions:

  1. The currency uses two digits after the decimal.
  2. We floor to two digits after the decimal (a sketch of this helper follows the list).
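
A sketch of that flooring helper (mine, not necessarily the repo's code):

(defun floor-to-cents (amount)
  ;; Floor AMOUNT, a double-float number of dollars, to whole cents.
  (/ (floor (* amount 100)) 100d0))

;; (floor-to-cents 1801.8634d0) => 1801.86d0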

When we make our final amortization line we pay off the remaining principal. This means the final payment may not match the payment amount of every other month, but it removes rounding errors. We may want to make a currency message for users to send us, one which specifies its own rounding and decimal places, or we could use Google's money proto, which is not one of the well-known types. The ins and outs of currency programming weren't part of this blog post, so please pardon the crudeness.

We create the mortgage_info message with the call to populate-mortgage-info:

(let (...
      (response (populate-mortgage-info
                 (mf:loan-amount request)
                 (mf:interest request)
                 (mf:num-periods request))))
  ...)

We showed in the previous section how we convert JSON text or a serialized protocol buffer message into a protocol buffer message in Lisp memory; that message is stored in the request variable. We also showed how the response variable is returned to the caller as either a JSON string or a serialized protocol buffer message.

The author would like to thank Ron Gut, Carl Gay, and Ben Kuehnert (as usual). Doug provided a great deal of help with SBCL on ARM 32.


Planet Lisp | 01-Dec-2020 21:54

Alexander Artemenko: mgl-pax

This is a very cool documentation generator for Common Lisp projects.

The most interesting features are:

  • Emacs/SLIME integration.
  • Ability to generate Markdown.
  • API to add new entity types.
  • Linking to the sources on GitHub.
  • Docstrings deindentation.
  • Generating docs for multiple ASDF systems with cross-referencing.
  • Auto-export of documented symbols.

Some cons:

  • The recommended way to mix documentation sections with code leads to a runtime dependency on PAX and all its dependencies. But you might define the documentation as a separate ASDF system.
  • It is inconvenient to write Markdown in docstrings. Is there any way to teach Emacs to use a markdown minor mode for documentation strings?

There are more pros and cons. All of them are listed in the example project. It is up and ready to be cloned. Use it as a template for your own Common Lisp library with great documentation!

You'll find all such templates in this GitHub organization:

https://github.com/cl-doc-systems

I have plans to review a few other documentation builders, but MGL-PAX is my favourite so far.


Planet Lisp | 29-Nov-2020 22:43

Michael Fiano: Gamedev, Sleep, Repeat

It's been several years since I last posted. There are several reasons, but most notable is the fact that I haven't been doing anything except write a game engine from scratch. For nearly 2 years, I would just crank out code, sleep, and repeat.

The good news is I was able to write a game engine usable (to an extent; more on that later) for the game ideas I had in mind. The bad news is, as previously mentioned, it has taken its toll on my mental health, knowing that I lost a lot of time I could have spent working on other projects on my back burner, or just having fun with random activities, such as playing games, going on a hike, etc.

About three months ago, I finished polishing up the engine and started planning and implementing the beginnings of my first game, after more than 10 years of failing -- which was mostly due to the general lack of good game development tooling and engines for Common Lisp. Things were looking good about two months into development and several thousand lines of game logic later. However, shortly thereafter, as my game was a real-time game with lots of game objects and physics calculated each frame, I quickly realized that the engine was not performant enough to pull off my game idea. After a week of profiling, improving suspect parts of the engine, and reiterating, I hadn't improved the performance much at all, and was finally stuck with SBCL's statistical profiler telling me that even for a small scene without too much going on, my CPU was spending about 50% of its time in CLOS -- Common Lisp's object system, which is very dynamic and relies on a lot of runtime dispatch for accessing slot values, calling generic function accessors, and so forth.

This was a pretty large disappointment, because I didn't anticipate it being this slow, even though I knew it was doing a lot of dynamic dispatching. The use of CLOS was a fundamental design decision I made on day one, utilizing the MOP (Meta-Object Protocol) to dynamically generate classes at runtime as behavioral components are added to or removed from game objects. Everything being a class meant that there was a lot of dynamic dispatch when accessing slots of objects, which in turn hold references to other objects, and so on.
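
The MOP technique described here looks roughly like the following sketch (illustrative only, using the closer-mop compatibility library; this is not the engine's actual code):

;; Rebuild an entity's class so that its direct superclasses are exactly
;; its current mixin components. C2MOP is closer-mop's package nickname.
(defun recompute-entity-class (name component-class-names)
  (c2mop:ensure-class
   name
   :direct-superclasses (mapcar #'find-class component-class-names)))

;; Attaching a component to all PLAYER entities then amounts to something like:
;; (recompute-entity-class 'player '(transform-mixin render-mixin))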

After a lot of thought about what to do, I ultimately decided that it would be best to rewrite the foundation of the engine using actual composition over inheritance rather than mixin classes. This meant completely decoupling components from game objects, and using static structure objects rather than CLOS standard objects.

This decoupling of components meant that another core piece of machinery also had to be rewritten: the component flow protocol, which is responsible for realizing game objects and their components, and ensuring everything happens in lock-step throughout a frame. This is actually a much harder problem than it sounds, considering one of the core design ideas was a declarative DSL for many types of game resources, with a notable prefab DSL for describing a subtree of game objects and components. Arbitrary nodes within a prefab description can reference or be referenced by other toplevel prefab definitions, and each toplevel form can be live-recompiled as the game is running to see changes happen in real time. Decoupling components from entities ruined this interactivity in many ways, and there just is not a clear solution to the problem. At the very least, it would require going back to the drawing board for several weeks and redesigning the engine with simplicity in mind.

Which brings me to my main point. Game engines are large systems consisting of many moving parts. Good software engineering requires simplicity -- it is what allows a system to remain secure, stable, and coherent throughout its evolution. Simplicity itself requires a lot of work at the start of a project to reduce the idea to its essence, and lots of discipline over the lifetime of the project to be able to distinguish worthwhile changes from pernicious ones. That is simply everything my game engine is not, because for such a complex piece of software as a game engine, it is not easy to know HOW all the pieces fit together; you have just some vague idea. Complexity arises through the iterative process of implementing and actually debugging problems with these features. Making a small change to get an engine feature to play nice with others can, and often does, adversely affect simplicity and elegance much later down the road during development.

The refactored engine, with structs over classes and components decoupled from game objects, is for the most part a failure, and I am abandoning that two-week effort. That leaves me with the previous, albeit slower-performing, attempt. It probably means that I have to either scrap my current game idea, or change it in major ways to be able to pull it off so that it is playable. It's either that, or just start over yet again, engine and all, in which case I would start to question my choice of language. Common Lisp seems like an excellent choice for interactive applications that require hot code reloading, such as games, but games also require very good performance over convenience and simplicity in a lot of areas.

I am honestly not sure what I will do yet, but I do know that, for the first time in about 2 years, I am going to take a much needed break to let all of this sink into my subconscious, and maybe the way forward will emerge. I will play games, go on hikes, read books, work on other projects that have been sitting on my back burner for far too long, and maybe even take up another programming language in the meantime. I just know that I need a serious break from all of this, as the mental toll it has taken on me is real. Sometimes I feel like I'm worthless, not a good programmer, etc., all because game development, and especially game engine development, is a lifelong journey requiring discipline and knowledge in many different fields of study.

That is all, and sorry for the rant. This post was not proof-read. I just needed to quickly get this out of my head to begin my hiatus.


Planet Lisp | 29-Nov-2020 11:35

Zach Beane: Jackson Lee Underwriting has a remote Common Lisp job open

Check it out!


Planet Lisp | 26-Nov-2020 18:45

Vsevolod Dyomkin: The Common Lisp Condition System Book

Several months ago I had the pleasure of being one of the reviewers of the book The Common Lisp Condition System (Beyond Exception Handling with Control Flow Mechanisms) by Michał Herda. I doubt that I contributed much to the book, but, at least, I can express my appreciation in the form of a reader review here.

My overall impression is that the book is very well-written and definitely worth reading. I always considered special variables, the condition system, and multiple return values to be the most underappreciated features of Common Lisp, although I had never imagined that a whole book could be written on these topics (and even on just two of them). So, I was pleasantly flabbergasted.

The book has a lot of things I value in good technical writing: a structured and logical exposition, detailed discussions of various nuances, a subtle sense of humor, and lots of Lisp. I should say that reading the stories of Tom, Kate, and Mark was so entertaining that I wished to learn more about their lives. I even daydreamt (to use the term often seen throughout the book) about a new semi-fiction genre: stories about people who behave like computer programs. I guess a book of short stories containing the two from this book and the story of Mac from "Practical Common Lisp" can already be initialized. "Anthropomorphic Lisp Tales"...

So, I can definitely recommend reading CLCS to anyone interested in expanding their Lisp knowledge and general understanding of programming concepts. And although I can call myself quite well versed in the CL condition system, I was also able to learn several new tricks and enrich my understanding. Actually, that is quite valuable, as you never know when one of its features could become handy to save your programming day. In my own Lisp career, I had several such a-ha moments and continue appreciating them.

This book should also be relevant to those who have a general understanding of Lisp but are compelled to spend their careers programming in inferior languages: you can learn more about one of the foundations of interactive programming and appreciate its value. Perhaps, one day you'll have access to programming environments that focus on this dimension, or you'll be able to add elements of interactivity to your own workflow.

As for those who are not familiar with Lisp, I'd first start with the classic Practical Common Lisp.

So, thanks to Michał for another great addition to my virtual Lisp books collection. The spice must flow, as they say...


Planet Lisp | 23-Nov-2020 13:41

Michał Herda: Damn Fast Priority Queue: a speed-oriented priority queue implementation

I think I have accidentally outperformed all of the Quicklisp priority queue implementations. Enter Damn Fast Priority Queue.

A detailed description and benchmarks are available in the GitHub repository. It seems that my implementation is consistently an order of magnitude faster than most of the other priority heaps (with Pileup being the runner-up, only about 3-4x slower than DFPQ).
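
A quick usage sketch (the operation names here are assumed from the repository README rather than verified against the exported API):

;; DFPQ is a min-heap: DEQUEUE returns the element with the numerically
;; lowest priority first.
(let ((q (damn-fast-priority-queue:make-queue)))
  (damn-fast-priority-queue:enqueue q :b 3)
  (damn-fast-priority-queue:enqueue q :a 1)
  (damn-fast-priority-queue:dequeue q))
;; => :A, T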


Planet Lisp | 16-Nov-2020 23:18

Michał Herda: Cafe Latte - a condition system in Java

I've more or less finished Cafe Latte - an implementation of Common Lisp dynamic variables, control flow operators, and condition system in plain Java.

It started out as a proof that a condition system can be implemented even on top of a language that has only automatic memory management and a primitive unwinding operator (throw), but does not have dynamic variables or non-local returns by default.

It should be possible to use it, or parts of it, in other projects, and its source code should be readable enough to understand the underlying mechanics of each Lisp control flow operator.
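
For reference, this is the kind of Common Lisp control flow Cafe Latte reproduces: a handler runs while the signaling stack frame is still live, then transfers control through a restart. (A standard example, not taken from the Cafe Latte sources.)

(restart-case
    (handler-bind ((error (lambda (condition)
                            (declare (ignore condition))
                            ;; Transfer control non-locally to the restart:
                            (invoke-restart 'use-value 42))))
      (error "Something failed"))
  (use-value (v) v))
;; => 42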


Planet Lisp | 15-Nov-2020 23:26

Alexander Artemenko: staple

Today we'll look at @Shinmera's documentation system called Staple.

As always, I've created the skeleton project for you.

But I must warn you. This skeleton is almost unusable, because of a few problems.

The first problem is that I wasn't able to make markdown work for the docstrings. When I do the setup the way the Staple documentation suggests, it does not see all the packages of my system.

The second problem is that for some reason it works badly with my simple example system: it shows multiple identical links to the Example system on the index page.

Probably both problems are related to the ASDF system type; I'm using package-inferred systems.

Also, Staple's error messages are a complete mess. It is not possible to figure out the origin of a warning. The output looks like this:

Scan was called 76 times.
Warning: could not find hyperspec map file. Adjust the path at the top of clhs-lookup.lisp to get links to the HyperSpec.
Scan was called 384 times.
Scan was called 116 times.
Scan was called 287 times.
Scan was called 94 times.
Scan was called 2005 times.
Scan was called 162 times.
Scan was called 162 times.
Scan was called 186 times.
Scan was called 397 times.
Scan was called 221 times.
Scan was called 502 times.
Scan was called 134 times.
Scan was called 130 times.
Scan was called 395 times.
Scan was called 1453 times.
WARN: Error during code markup: Invalid function name: STR:DOWNCASE
WARN: Error during code markup: Symbol name must not end with a package marker (the : character).
Scan was called 2782 times.
WARN: Error during code markup: Unquote not inside backquote.
Scan was called 95 times.
Scan was called 2019 times.
Scan was called 4518 times.
Scan was called 2670 times.
Scan was called 2209 times.
WARN: Error during code markup: A token consisting solely of multiple dots is illegal.
WARN: Error during code markup: Invalid function name: DOCPARSER:PARSE
WARN: Error during code markup: The character < is not a valid sub-character for the # dispatch macro.
WARN: Error during code markup: The character < is not a valid sub-character for the # dispatch macro.
WARN: Error during code markup: Symbol name must not end with a package marker (the : character).
WARN: Error during code markup: Invalid function name: 4
WARN: Error during code markup: Invalid function name: STAPLE-SERVER:START
WARN: Error during code markup: Invalid function name: STAPLE-SERVER:STOP
WARN: Error during code markup: Invalid function name: LQUERY-TEST:RUN
WARN: Error during code markup: Invalid function name: 1
WARN: Error during code markup: Symbol name without any escapes must not consist solely of package markers (: characters).
WARN: Error during code markup: The value NIL is not of type CHARACTER
WARN: Error during code markup: The value NIL is not of type CHARACTER
WARN: Error during code markup: While reading backquote, expected an object when input ended.
WARN: Error during code markup: While reading backquote, expected an object when input ended.
WARN: Error during code markup: The value NIL is not of type CHARACTER
WARN: Error during code markup: Symbol name without any escapes must not consist solely of package markers (: characters).
WARN: Error during code markup: If a symbol token contains two package markers, they must be adjacent as in package::symbol.
WARN: Error during code markup: The value NIL is not of type CHARACTER
WARN: Error during code markup: While reading symbol, expected the character | when input ended.

Another hard thing is customization. It seems Staple was built for maximum flexibility, and sometimes it is not obvious how to do such a simple thing as adding yet another markdown page to the output.

Now to the good news :)

As I said, Staple is very flexible. It uses CLOS and generic functions almost everywhere, and you can extend it in a number of ways. For example, you can add support for your own markup language.

There is also a unique feature: Staple automatically links every function's documentation to its source on GitHub:

Documentation for @Shinmera's libraries looks really good. Take a look at Radiance, for example:

https://shirakumo.github.io/radiance/

Probably Staple just needs more polishing to become a great documentation system for Common Lisp libraries?

For now Coo is my favourite. But in the next week or two, we'll look at a few other Common Lisp documentation systems.

It also seems that documentation building is quite unstable and sometimes fails to build some pages when running on CI :(


Planet Lisp | 09-Nov-2020 23:10

Nicolas Hafner: Closing in on Production - November Kandria Update


October somehow flew by really quickly for me. It's already November, and we're nearing the end of the year, too. Just thinking about that is making me reminiscent, but I'll have to hold off on doing my yearly wrap-up for another two months! Who knows, a lot more can still happen in that time. Last month marked another release for Kandria, and this month marked the start of Kandria being an actual team effort!

I'm really glad that it's no longer just me working on things. Fred already introduced himself in the last monthly, and by now he has already started work and delivered some really great stuff:

As a result, the game already feels a lot more fun to play. The step up from the combat animations I had made early in the year is huge!

We're still not done with it though, there's a few more moves missing, and a lot more left to adjust and fine-tune of course. We'll also have to get started on some real enemy designs soon and implement those to have some interesting encounters to test things with.

I can now also finally announce the third team member, Tim White, who'll be working on characters, story, and dialogue for the game:

Hey there! I'm Tim, a games writer from the UK. I've been in the industry for ten years now (where did the time go?!), and have been lucky enough to work at Jagex on Transformers Universe, and most recently with Brightrock Games on War for the Overworld and an unannounced game.

Kandria jumped out the screen at me straight away, with its detailed world and story, custom-made dev tools, and strong creative and artistic direction. I also have a real soft spot for post-apocalyptic worlds, and the ethics surrounding artificial life. Applying was a no brainer, and I can't wait to start!

You can find Tim on Twitter at @TimAlanWhite, or on the official Kandria Discord.

Both Tim and Fred will be giving quick updates on what's happening in the weekly newsletter from now on. The newsletter has now also been moved away from Mailchimp to my own mailing list service called Courier. I'm glad to finally have made the switch, freeing me from Mailchimp's slow and clunky interface!

On the engine side, I reworked the lighting and background systems to allow changing the lighting and parallax background to fit the current environment. As part of this I also changed the shadow casting to work properly so that it no longer contains the weird corner case glitches it used to.

I also had to make some fixes to the animation system to make it more capable and to make it less of a hassle to use when animations are changed or added. Previously the tooling there would easily mess up your data.

Then, in order to prepare for Tim, I reworked the quest system to be much easier to manage and control, and added a couple of additional features that should be very useful to control branching. To test it I made some quick draft animations for Fi and jotted her down in the test level.

She'll now comment on things you can find throughout the level.

I also wrote a bunch of documentation to help Tim and Fred get set up and running with the game, introduced some very useful tooling like hot-reloading to make it faster to iterate on animations and textures, and improved the editor, especially for the in-game animation properties.

With all of this now in, we are very, very close to ending post-production. There's a few not-so-small things that I still need to do, like an animation system for the UI that I started working on yesterday, and one very nasty bug that popped up on Windows systems with surround sound configured. Still, with all of this in mind, I think we're well on track for the vertical slice release in March.

I hope there'll be a 0.0.4 demo release by the end of this month, which will be the last public demo until the vertical slice 0.1.0 demo. After that- I don't know yet how things will go. A lot about the game is going to become much clearer in the coming months as we decide on stuff like the core plot and work out the first area of the game for the vertical slice.

Aside from putting out whatever fires Fred and Tim stumble across this month, I'll be focusing on two things: first, fix surround sound on Windows. This is important to me as having the game crash and burn because of something so... tangential, is really terrible. Second, implement a UI animation system. The UI toolkit I'm using, Alloy, does not currently have a way to animate things. This is fine for tools and other UI like that, but in games you really want to spruce things up by tweening and animating to make your UI more interesting to look at. That's the last major addition to Alloy that's needed to have everything we need.

If time permits, I'll also work on some more platforming challenge levels to give the 0.0.4 demo some more content.

Anyway, I'm really happy to have a team together now, and I'm very excited to see how quickly things develop from here! To be fair, I'm also quite a bit worried what with being, I suppose, my own boss now, and the responsibilities that brings. I suppose time will tell whether I can figure out a good schedule and manage things well. For now I'm cautiously optimistic.

Alright, back to thinking about the animation system now, and see you next month, or next week if you're on the mailing list!


Planet Lisp | 07-Nov-2020 15:01

Alexander Artemenko: sphinxcontrib-cldomain

This is an add-on to the Sphinx documentation system which allows using information about Common Lisp packages in the documentation.

Initially, Sphinx was created for Python's documentation, and now it is widely used not only for Python libraries but also for many other languages.

Sphinx uses the reStructuredText markup language, which is extensible. You can write your own extensions in Python to introduce new building blocks, called "roles".

sphinxcontrib-cldomain consists of two parts. The first part is a Python extension to Sphinx which adds the ability to render documentation for CL functions, methods and classes. The second is a command-line docstring extractor, written in CL.

Initially, cldomain was created by Russell Sim, but at some point I forked the repository to port it to newer Sphinx, Python 3, and Roswell.

The coolest feature of cldomain is its ability to mix handwritten documentation with docstrings. The second coolest is cross-referencing: you can link between different docstrings and chapters of the documentation.

Today I will not show you any code snippets. Instead, I've created an example repository with a simple Common Lisp system and documentation:

https://cl-doc-systems.github.io/sphinxcontrib-cldomain/

This example includes a GitHub workflow to update the documentation on every push to the main branch, and it can be used as a skeleton for your own libraries.

The main thing I dislike in Sphinx and cldomain is the Python :( Other cons are the complexity of the markup and toolchain setup.

In the next few posts, I'll review a few other documentation tools for Common Lisp and try to figure out if they can replace Sphinx for me.

I think we as a CL community must concentrate our efforts on improving the documentation level of our software, and choosing the best setup, one which can be recommended to everybody, is the key.


Planet Lisp | 01-Nov-2020 01:43

ABCL Dev: ABCL 1.8.0

Under the gathering storms of Fall 2020, we are pleased to release ABCL 1.8.0 as the Ninth major revision of the implementation.

This Ninth Edition of the implementation now supports building and running on the recently released openjdk15 platform. This release is intended as the last major release to support the openjdk6, openjdk7, and openjdk8 platforms, for with abcl-2.0.0 we intend to move the minimum platform to openjdk11 or better, in order to efficiently implement atomic memory compare-and-swap operations.

With this release, the implementation of the EXT:JAR-PATHNAME and EXT:URL-PATHNAME subtypes of CL:PATHNAME has been overhauled to the point that arbitrary references to ZIP archives within archives now work for read-only stream operations (CL:PROBE-FILE, CL:TRUENAME, CL:OPEN, CL:LOAD, CL:FILE-WRITE-DATE, CL:DIRECTORY, and CL:MERGE-PATHNAMES). Previous versions of the implementation relied on the ability of java.net.URL to open streams of an archive within an archive, behavior that was silently dropped after Java 5 and consequently hasn't worked on common platforms supported by the Bear in a long time. The overhaul restores the feasibility of accessing fasls from within jar files. Interested parties may examine the ASDF-JAR contrib for a recipe for packaging and accessing such artifacts. Please consult the "Beyond ANSI: Pathnames" Section 4.2 of the User Manual for further details on how namestrings and components of PATHNAME objects have been revised.

A more comprehensive list of CHANGES is available with the source.


Planet Lisp | 30-Oct-2020 12:34

Alexander Artemenko: cl-pdf

This is the library for PDF generation and parsing.

Today I'm too lazy to provide step-by-step examples, but I have a real task to do with this library.

Some time ago I read an article about productivity which recommended printing a "life calendar". This calendar should remind you: "Life is limited and time's price is high."

The calendar is a grid where every box is one week of your life. The article suggested buying a poster with the calendar, but I don't want to wait for a parcel with the poster! I want to print it now!

And here is where cl-pdf comes on the scene!

I wrote this simple function to generate a poster in A1 format:

(defun render (&optional (filename "life.pdf"))
  (flet ((to-ppt (size-in-mm)
           (/ size-in-mm 1/72 25.4)))
    (let* ((width (to-ppt 594)) ;; This is A1 page size in mm
           (height (to-ppt 841))
           (margin-top (to-ppt 70))
           (margin-bottom (to-ppt 30))
           (span (to-ppt 2))
           (year-weeks 52)
           (years 90)
           (box-size (/ (- (- height (+ margin-top margin-bottom))
                           (* span (1- years)))
                        years))
           (boxes-width (+ (* box-size year-weeks)
                           (* span (1- year-weeks))))
           (boxes-height (+ (* box-size years)
                            (* span (1- years))))
           ;; horizontal margin depends on box size,
           ;; because we need to center them
           (margin-h (/ (- width boxes-width) 2))
           (box-radius (/ box-size 3))
           (helvetica (pdf:get-font "Helvetica")))
      (pdf:with-document ()
        (pdf:with-page (:bounds (rutils:vec 0 0 width height))
          ;; For debug
          ;; (pdf:rectangle margin-h margin-bottom
          ;;                boxes-width
          ;;                boxes-height
          ;;                :radius box-radius)
          (loop for year from 0 below years
                do (loop for week from 0 below year-weeks
                         for x = (+ margin-h (* week (+ box-size span)))
                         for y = (+ margin-bottom (* year (+ box-size span)))
                         do (pdf:rectangle x y box-size box-size
                                           :radius box-radius)))
          ;; The title
          (pdf:draw-centered-text (/ width 2)
                                  (+ margin-bottom
                                     boxes-height
                                     ;; space between text and boxes in mm
                                     (to-ppt 15))
                                  "LIFE CALENDAR"
                                  helvetica
                                  ;; font-size in mm
                                  (to-ppt 30))
          ;; Labels for weeks
          (let ((font-size
                  ;; We want labels to be slightly smaller than boxes
                  (* box-size 2/3)))
            (pdf:draw-right-text (+ margin-h (/ box-size 4))
                                 (+ margin-bottom
                                    boxes-height
                                    ;; space between text and boxes in mm
                                    (to-ppt 10))
                                 "Weeks of the year"
                                 helvetica
                                 font-size)
            (loop for week below year-weeks
                  do (pdf:draw-centered-text (+ margin-h
                                                (/ box-size 2)
                                                (* week (+ box-size span)))
                                             (+ margin-bottom
                                                boxes-height
                                                ;; space between text and boxes in mm
                                                (to-ppt 3))
                                             (rutils:fmt "~A" (1+ week))
                                             helvetica
                                             font-size))
            ;; Labels for years
            (pdf:with-saved-state
              (pdf:translate (- margin-h (to-ppt 10))
                             (- (+ margin-bottom boxes-height)
                                (/ box-size 4)))
              (pdf:rotate 90)
              (pdf:draw-left-text 0 0
                                  "Years of your life"
                                  helvetica
                                  font-size))
            (loop for year below years
                  do (pdf:draw-left-text (- margin-h
                                            ;; space between text and boxes in mm
                                            (to-ppt 3))
                                         (+ margin-bottom
                                            (/ box-size 4)
                                            (* year (+ box-size span)))
                                         (rutils:fmt "~A" (- years year))
                                         helvetica
                                         font-size))
            ;; The Question.
            (pdf:draw-left-text (- width margin-h)
                                (- margin-bottom (to-ppt 10))
                                "Is this the End?"
                                helvetica
                                (* font-size 2))
            (pdf:close-and-stroke)))
        (pdf:write-document filename)))))

Here is how the result looks:

The PDF can be downloaded here.

This program demonstrates a few features of cl-pdf:

  • ability to set page size;
  • text drawing and rotation;
  • font manipulation.

There are a lot more features, but they aren't documented; there are only several examples :(

GitHub shows 4 forks with some patches. Some of them have been turned into pull requests, but the maintainer has been inactive on GitHub since 2019 :(


Planet Lisp | 28-Oct-2020 21:49

Alexander Artemenko: cl-async-await

This library implements the async/await abstraction to make it easier to write parallel programs.

Now we'll turn "dexador" HTTP library calls into async calls and see if we can parallelize 50 requests to a site which responds in 5 seconds.

To create a function which can return a delayed result, a "promise", we have to use cl-async-await:defun-async:

POFTHEDAY> (cl-async-await:defun-async http-get (url &rest args)
             (apply #'dexador:get url args))

Now let's call this function. When called it returns a "promise" object not the real response from the site:

POFTHEDAY> (http-get "https://httpbin.org/delay/5") #

Now we can retrieve the real result, using cl-async-await:await function:

POFTHEDAY> (cl-async-await:await *) "{ \"args\": {}, \"data\": \"\", \"files\": {}, \"form\": {}, \"headers\": { \"Accept\": \"*/*\", \"Content-Length\": \"0\", \"Host\": \"httpbin.org\", \"User-Agent\": \"Dexador/0.9.14 (SBCL 2.0.8); Darwin; 19.5.0\", \"X-Amzn-Trace-Id\": \"Root=1-5f9732d6-148ee9a305fab66c26a2dbfd\" }, \"origin\": \"188.170.77.131\", \"url\": \"https://httpbin.org/delay/5\" } " 200 (8 bits, #xC8, #o310, #b11001000) # # #>

If we look at the promise object again, we'll see it has a state now:

POFTHEDAY> ** # https://httpbin.org/delay/5 #>) >

Ok, it is time to see if we can retrieve results from this site in parallel. To make it easier to measure speed, I'll wrap all the code into a separate function.

The function returns the total number of bytes in all 50 responses:

POFTHEDAY> (defun do-the-test ()
             (let ((promises
                     (loop repeat 50
                           collect (http-get "https://httpbin.org/delay/5"
                                             :use-connection-pool nil
                                             :keep-alive nil))))
               ;; Now we have to fetch results from our promises.
               (loop for promise in promises
                     for response = (cl-async-await:await promise)
                     summing (length response))))

POFTHEDAY> (time (do-the-test))
Evaluation took:
  6.509 seconds of real time
  2.496912 seconds of total run time (1.672766 user, 0.824146 system)
  38.36% CPU
  14,372,854,434 processor cycles
  1,519,664 bytes consed

18300

As you can see, the function returns in 6.5 seconds instead of 250 seconds! This means cl-async-await works!

The only problem I found was this concurrency issue:

https://github.com/j3pic/cl-async-await/issues/3

But probably it is only related to Dexador.


Planet Lisp | 26-Oct-2020 21:48

Alexander Artemenko: parseq

With this library, you can write parsers to process strings, lists and binary data!

Let's take a look at one of the examples. It is a parser for dates in the format of RFC 5322, which is used in email messages:

Thu, 13 Jul 2017 13:28:03 +0200

The parser consists of rules, combined in different ways. We'll go through the parser's parts one by one.

This simple rule matches one space character:

POFTHEDAY> (parseq:defrule FWS () #\space)

;; It matches if the string contains one space
POFTHEDAY> (parseq:parseq 'FWS " ")
#\ 
T

;; But not on a string of many spaces:
POFTHEDAY> (parseq:parseq 'FWS "  ")
NIL
NIL

;; And of course not on some other string
POFTHEDAY> (parseq:parseq 'FWS "foo")
NIL
NIL

The next rules we need are those parsing hours, minutes and seconds. These parts have two digits, and we'll use the rep expression to specify how many digits the rule matches:

POFTHEDAY> (parseq:defrule hour () (rep 2 digit))

POFTHEDAY> (parseq:parseq 'hour "15")
(#\1 #\5)
T

See, this rule returns the digits as a list! To make it useful, we need an integer instead. Parseq rules support different kinds of transformations. They are optional and can be specified like this:

;; This transformation will return the result as a string instead of a list:
POFTHEDAY> (parseq:defrule hour () (rep 2 digit)
             (:string))

POFTHEDAY> (parseq:parseq 'hour "15")
"15"
T

;; Now we'll add a transformation from string to integer:
POFTHEDAY> (parseq:defrule hour () (rep 2 digit)
             (:string)
             (:function #'parse-integer))

POFTHEDAY> (parseq:parseq 'hour "15")
15 (4 bits, #xF, #o17, #b1111)
T

We'll define the minute and second rules the same way.

The next rule matches the abbreviated day of the week. It combines other rules or terminals using the or expression:

POFTHEDAY> (parseq:defrule day-of-week ()
             (or "Mon" "Tue" "Wed" "Thu" "Fri" "Sat" "Sun"))

POFTHEDAY> (parseq:parseq 'day-of-week "Friday")
NIL
NIL

POFTHEDAY> (parseq:parseq 'day-of-week "Fri")
"Fri"
T

;; The same way we define a rule for the month abbreviation
POFTHEDAY> (parseq:defrule month ()
             (or "Jan" "Feb" "Mar" "Apr" "May" "Jun"
                 "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"))

A slightly more complex rule is used for matching the timezone. The timezone is a string of 4 digits prefixed by a plus or minus sign. We'll combine this knowledge using the or/and expressions, and will use the :string option to get the result as a single string:

POFTHEDAY> (parseq:defrule zone ()
             (and (or "+" "-")
                  (rep 4 digit))
             (:string))

POFTHEDAY> (parseq:parseq 'zone "0300")
NIL
NIL

POFTHEDAY> (parseq:parseq 'zone "+0300")
"+0300"
T

POFTHEDAY> (parseq:parseq 'zone "-0300")
"-0300"
T

Now let's return to parsing the time of day. According to the RFC, the seconds part is optional. Parseq has the ? expression to match optional rules.

Here is how a rule matching the time of day looks:

POFTHEDAY> (parseq:defrule time-of-day ()
             (and hour ":" minute
                  (? (and ":" second))))

POFTHEDAY> (parseq:parseq 'time-of-day "10:31:05")
(10 ":" 31 (":" 5))
T

To make the rule return only the numbers, we have to use the :choose transform. Choose extracts items from the results by index. You can specify an index as an integer, or as a list if you need to extract a value from a nested list:

POFTHEDAY> (parseq:defrule time-of-day ()
             (and hour ":" minute
                  (? (and ":" second)))
             (:choose 0 2 '(3 1)))

POFTHEDAY> (parseq:parseq 'time-of-day "10:31:05")
(10 31 5)

;; Seconds are optional because of the ? expression:
POFTHEDAY> (parseq:parseq 'time-of-day "10:31")
(10 31 NIL)
T

;; This (:choose 0 2 '(3 1)) is equivalent to:
POFTHEDAY> (let ((r '(10 ":" 31 (":" 5))))
             (list (elt r 0)
                   (elt r 2)
                   (elt (elt r 3) 1)))
(10 31 5)

Another interesting transformation is :flatten. It is used to "streamline" results that have a nested structure, and it is used in this rule, which matches both the time of day and the timezone:

;; Without flatten we'll get nested lists:
POFTHEDAY> (parseq:defrule time ()
             (and time-of-day FWS zone)
             (:choose 0 2))

POFTHEDAY> (parseq:parseq 'time "10:31 +0300")
((10 31 NIL) "+0300")

POFTHEDAY> (parseq:defrule time ()
             (and time-of-day FWS zone)
             (:choose 0 2)
             (:flatten))

;; Pay attention, :flatten removes NILs:
POFTHEDAY> (parseq:parseq 'time "10:31 +0300")
(10 31 "+0300")
T

Now, knowing how rules are combined and how data is transformed, you will be able to read the rest of the rules yourself:

POFTHEDAY> (parseq:defrule day ()
             (and (? FWS)
                  (rep (1 2) digit)
                  FWS)
             (:choose 1)
             (:string)
             (:function #'parse-integer))

POFTHEDAY> (parseq:defrule year ()
             (and FWS (rep 4 digit) FWS)
             (:choose 1)
             (:string)
             (:function #'parse-integer))

POFTHEDAY> (parseq:defrule date ()
             (and day month year))

POFTHEDAY> (parseq:defrule date-time ()
             (and (? (and day-of-week ","))
                  date
                  time)
             (:choose '(0 0) 1 2)
             (:flatten))

Another cool Parseq feature is the ability to trace parser execution. Now I'll turn on this debug mode and parse a string:

POFTHEDAY> (parseq:trace-rule 'date-time :recursive t)

POFTHEDAY> (parseq:parseq 'date-time "Thu, 13 Jul 2017 13:28:03 +0200")
1: DATE-TIME 0?
2: DAY-OF-WEEK 0?
2: DAY-OF-WEEK 0-3 -> "Thu"
2: DATE 4?
3: DAY 4?
4: FWS 4?
4: FWS 4-5 -> #\ 
4: FWS 7?
4: FWS 7-8 -> #\ 
3: DAY 4-8 -> 13
3: MONTH 8?
3: MONTH 8-11 -> "Jul"
3: YEAR 11?
4: FWS 11?
4: FWS 11-12 -> #\ 
4: FWS 16?
4: FWS 16-17 -> #\ 
3: YEAR 11-17 -> 2017
2: DATE 4-17 -> (13 "Jul" 2017)
2: TIME 17?
3: TIME-OF-DAY 17?
4: HOUR 17?
4: HOUR 17-19 -> 13
4: MINUTE 20?
4: MINUTE 20-22 -> 28
4: SECOND 23?
4: SECOND 23-25 -> 3
3: TIME-OF-DAY 17-25 -> (13 28 3)
3: FWS 25?
3: FWS 25-26 -> #\ 
3: ZONE 26?
3: ZONE 26-31 -> "+0200"
2: TIME 17-31 -> (13 28 3 "+0200")
1: DATE-TIME 0-31 -> ("Thu" 13 "Jul" 2017 13 28 3 "+0200")
("Thu" 13 "Jul" 2017 13 28 3 "+0200")
T

We can improve this parser by using the :function transformation to return a local-time:timestamp. First, let's redefine the rule matching the month so that it returns the month number:

POFTHEDAY> (parseq:defrule january () "Jan" (:constant 1))
POFTHEDAY> (parseq:defrule february () "Feb" (:constant 2))
POFTHEDAY> (parseq:defrule march () "Mar" (:constant 3))
POFTHEDAY> (parseq:defrule april () "Apr" (:constant 4))
POFTHEDAY> (parseq:defrule may () "May" (:constant 5))
POFTHEDAY> (parseq:defrule june () "Jun" (:constant 6))
POFTHEDAY> (parseq:defrule july () "Jul" (:constant 7))
POFTHEDAY> (parseq:defrule august () "Aug" (:constant 8))
POFTHEDAY> (parseq:defrule september () "Sep" (:constant 9))
POFTHEDAY> (parseq:defrule october () "Oct" (:constant 10))
POFTHEDAY> (parseq:defrule november () "Nov" (:constant 11))
POFTHEDAY> (parseq:defrule december () "Dec" (:constant 12))

POFTHEDAY> (parseq:defrule month ()
             (or january february march april
                 may june july august
                 september october november december))

POFTHEDAY> (parseq:parseq 'month "Sep")
9 (4 bits, #x9, #o11, #b1001)
T

Next, we need to reimplement the rule matching the timezone, to make it return a local-time:timezone.

We'll be using an advanced technique of binding variables to pass a value from one rule to another, because I want to store the timezone as a string and also parse its hour and minute parts simultaneously.

To accomplish this task, we have to divide our timezone-matching rule in two. The first rule will match the timezone as a string of a sign and four digits. Then it will save the result into an external variable and exit with a nil result, to give the second rule a chance to execute:

POFTHEDAY> (parseq:defrule zone-as-str ()
             (and (or #\+ #\-)
                  (rep 4 digit))
             (:string)
             (:external zone-as-str)
             ;; Save the value into a variable:
             (:lambda (z)
               (setf zone-as-str z))
             ;; and just exit:
             (:test (z)
               (declare (ignore z))
               nil))

Now we'll redefine our zone rule to call zone-as-str first and then parse the same text again, this time as hours and minutes. As the final step, it creates a local-time:timezone object:

POFTHEDAY> (parseq:defrule zone ()
             (or zone-as-str
                 (and (or #\+ #\-) hour minute))
             (:let zone-as-str)
             (:lambda (sign hour minute)
               (local-time::%make-simple-timezone
                zone-as-str
                zone-as-str
                ;; This is an offset in seconds:
                (+ (* (ecase sign
                        (#\+ 1)
                        (#\- -1))
                      hour
                      3600)
                   (* minute 60)))))

;; Here is the execution trace:
POFTHEDAY> (parseq:parseq 'zone "+0300")
1: ZONE 0?
2: ZONE-AS-STR 0?
2: ZONE-AS-STR -|
2: HOUR 1?
2: HOUR 1-3 -> 3
2: MINUTE 3?
2: MINUTE 3-5 -> 0
1: ZONE 0-5 -> #
#
T

Now we need to redefine the original date-time rule to create a local-time:timestamp as the result:

POFTHEDAY> (parseq:parseq 'date-time "Thu, 13 Jul 2017 13:28:03 +0200")
("Thu" 13 7 2017 13 28 3 #)
T

POFTHEDAY> (parseq:defrule date-time ()
             (and (? (and day-of-week ","))
                  date
                  time)
             (:choose '(1 2)  ; year
                      '(1 1)  ; month
                      '(1 0)  ; day
                      '(2 0)  ; hour
                      '(2 1)  ; minute
                      '(2 2)  ; second
                      '(2 3)) ; timezone
             (:lambda (year month day hour minute second timezone)
               (local-time:encode-timestamp
                0             ; nanoseconds
                (or second 0) ; secs are optional
                minute
                hour
                day
                month
                year
                :timezone (or timezone
                              local-time:*default-timezone*))))

POFTHEDAY> (parseq:parseq 'date-time "Thu, 13 Jul 2017 13:28:03 +0200")
@2017-07-13T14:28:03.000000+03:00
T

I've got a different value for the time because local-time prints the timestamp in my timezone, which is UTC+3.

A cool feature of Parseq is its ability to work with any kind of data, including binary data. This way it can be used to parse binary formats.

As an example of parsing binary data, Parseq includes these parser rules for working with the PNG image format:

https://github.com/mrossini-ethz/parseq/blob/master/examples/png.lisp

There are other interesting features. Please, read the docs to learn more.

If you are aware of other parsing libraries which are worth writing about, let me know in the comments.


Planet Lisp | 23-Oct-2020 22:47

Michał Herda: The Common Lisp Condition System is out now

After just a bit more than six months, my first programming book is out and generally available. I hope that it works well for everyone who wants to explore the condition system: how it differs from standard exception-throwing systems in other programming languages, how to implement it, and how to leverage it in real-world scenarios.

Links:

  • Apress - for buying and general information
  • Amazon - for buying and general information
  • GitHub - includes the full source code from the book and the online-only Appendix E ("Discussing the Common Lisp Condition System")

Planet Lisp | 22-Oct-2020 18:21

Alexander Artemenko: pzmq

ZeroMQ is a networking library. It is not a message broker and it will not run tasks for you. Instead, it provides simple primitives for different network patterns.

With ZeroMQ you can easily implement these patterns: Request-Response, Pub-Sub, Push-Pull.

I found 3 CL systems implementing bindings to ZeroMQ:

I know, the names of the repositories, CL systems, and packages are all different. That is HELL :(

There are also at least two different versions of the zmq system:

  • The first one is referred to by https://www.cliki.net/cl-zmq and included in Quicklisp. But examples from the ZeroMQ Guide do not work with this zmq because the msg-data-as-is function is absent.
  • The second one is https://github.com/tsbattman/cl-zmq, and it seems to be the version used in the ZeroMQ Guide. But it is not in Quicklisp (yet or anymore).

Anyway, both of them are stale and haven't received updates in 7-8 years. They are using the old 3.2 version of ZeroMQ. Today we'll talk about pzmq.

PZMQ has some activity in the repository and uses ZeroMQ 4. It does not have docs, but it does have some examples ported from the ZeroMQ Guide.

I slightly modified the example code to make the output more readable when the client and the server are running from one REPL.

This snippet shows the server's code. It listens on port 5555 and blocks until a message is received, then responds and waits for another message:

POFTHEDAY> (defun hwserver (&optional (listen-address "tcp://*:5555"))
             (pzmq:with-context nil ; use *default-context*
               (pzmq:with-socket responder :rep
                 (pzmq:bind responder listen-address)
                 (loop
                   (write-line "SERVER: Waiting for a request... ")
                   (format t "SERVER: Received ~A~%"
                           (pzmq:recv-string responder))
                   (sleep 1)
                   (pzmq:send responder "World")))))

The client does the opposite: it sends some data and waits for the response. Depending on the pattern you use, you have to set the socket types accordingly. For the server we used :rep (reply), and for the client we are using :req (request).

POFTHEDAY> (defun hwclient (&optional (server-address "tcp://localhost:5555"))
             (pzmq:with-context (ctx :max-sockets 10)
               (pzmq:with-socket (requester ctx) (:req :affinity 3 :linger 100)
                 ;; linger is important in case of (keyboard) interrupt;
                 ;; see http://api.zeromq.org/3-3:zmq-ctx-destroy
                 (write-line "CLIENT: Connecting to hello world server...")
                 (pzmq:connect requester server-address)
                 (dotimes (i 3)
                   (format t "CLIENT: Sending Hello ~d...~%" i)
                   (pzmq:send requester "Hello")
                   (write-string "CLIENT: Receiving... ")
                   (write-line (pzmq:recv-string requester))))))

Here is what we'll see when running the server in the background and starting the client in the REPL:

POFTHEDAY> (defparameter *server-thread* (bt:make-thread #'hwserver))
SERVER: Waiting for a request... 

POFTHEDAY> (hwclient)
CLIENT: Connecting to hello world server...
CLIENT: Sending Hello 0...
CLIENT: Receiving... Hello
SERVER: Waiting for a request... 
World
CLIENT: Sending Hello 1...
CLIENT: Receiving... Hello
SERVER: Waiting for a request... 
World
CLIENT: Sending Hello 2...
CLIENT: Receiving... Hello
SERVER: Waiting for a request... 
World
NIL

What is next?

Read about the Pub-Sub and Push-Pull patterns in the ZeroMQ Guide and try to port them to pzmq; a possible starting point for Pub-Sub is sketched below.
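
Here is an untested sketch of mine for the Pub-Sub pattern (the function names are made up, and I'm assuming the :subscribe socket option can be passed in with-socket's option list the same way :affinity and :linger were above):

POFTHEDAY> (defun news-server (&optional (address "tcp://*:5556"))
             (pzmq:with-context nil ; use *default-context*
               (pzmq:with-socket publisher :pub
                 (pzmq:bind publisher address)
                 ;; Publish one message per second; subscribers
                 ;; filter messages by their prefix.
                 (loop for i from 0
                       do (pzmq:send publisher
                                     (format nil "news: item ~D" i))
                          (sleep 1)))))

POFTHEDAY> (defun news-client (&optional (address "tcp://localhost:5556"))
             (pzmq:with-context (ctx)
               ;; :sub sockets receive nothing until they subscribe
               ;; to a prefix (here "news:").
               (pzmq:with-socket (subscriber ctx) (:sub :subscribe "news:")
                 (pzmq:connect subscriber address)
                 (dotimes (i 3)
                   (format t "CLIENT: ~A~%"
                           (pzmq:recv-string subscriber))))))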

Also, it would be cool to port all the Common Lisp examples from the unsupported library to pzmq and to send a pull request.

By the way, there is at least one cool project which already uses pzmq to connect parts written in Common Lisp and Python: the recently reviewed common-lisp-jupyter library.

To conclude, this library definitely should be tried if you are going to implement a distributed application! Especially if it has to interoperate with parts written in languages other than Common Lisp.


Planet Lisp | 19-Oct-2020 20:47

Alexander Artemenko: quickfork

This is an interesting system which provides information about other systems' sources. It is also able to show the commands necessary to clone libraries into the local-projects dir.

This system is not in Quicklisp yet, but it can be installed from Ultralisp or by cloning it into some directory like ~/quicklisp/local-projects/.

Also, to make it work, you have to clone the quicklisp-projects repository somewhere. This repository contains metadata about all the projects in Quicklisp:

POFTHEDAY> (uiop:run-program
            "git clone https://github.com/quicklisp/quicklisp-projects /tmp/projects")

POFTHEDAY> (setf quickfork::*projects-directory*
                 "/tmp/projects/projects")

An interesting thing happens right after you load the quickfork system. It installs a few hooks into Quicklisp and ASDF and begins tracking which systems get installed during ql:quickload:

POFTHEDAY> (ql:quickload :dexador)
To load "dexador":
  Load 14 ASDF systems:
    alexandria asdf babel bordeaux-threads cffi cffi-grovel
    cl-ppcre cl-utilities flexi-streams local-time
    split-sequence trivial-features trivial-gray-streams uiop
  Install 17 Quicklisp releases:
    chipz chunga cl+ssl cl-base64 cl-cookie cl-reexport dexador
    fast-http fast-io proc-parse quri smart-buffer static-vectors
    trivial-garbage trivial-mimes usocket xsubseq
; Fetching #
; 83.84KB
...
; Loading "dexador"
...
[package cl+ssl]..................................
[package dexador].
Systems compiled by QL:
("proc-parse" #P"/Users/art/poftheday/.qlot/dists/quicklisp/software/proc-parse-20190813-git/")
("xsubseq" #P"/Users/art/poftheday/.qlot/dists/quicklisp/software/xsubseq-20170830-git/")
...
("dexador" #P"/Users/art/poftheday/.qlot/dists/quicklisp/software/dexador-20200427-git/")
Systems loaded by QL:
("proc-parse" #P"/Users/art/poftheday/.qlot/dists/quicklisp/software/proc-parse-20190813-git/")
("xsubseq" #P"/Users/art/poftheday/.qlot/dists/quicklisp/software/xsubseq-20170830-git/")
...
("dexador" #P"/Users/art/poftheday/.qlot/dists/quicklisp/software/dexador-20200427-git/")
Systems installed by QL:
"usocket"
"trivial-mimes"
...
"chipz"
"dexador"
Inspect ql:*compiled-systems*, ql:*loaded-systems*,
and ql:*installed-systems* for more info.
(:DEXADOR)

Also, there is a function quickfork::make-clone-commands which prints the commands that should be executed on the command line to clone a given system and all of its dependencies.

Sadly, quickfork::make-clone-commands fails on dexador with some strange errors. You will need my fix to make it work like this:

CL-USER> (quickfork::make-clone-commands :dexador)
git clone "https://github.com/sharplispers/split-sequence.git"
git clone "https://github.com/sionescu/static-vectors.git"
git clone "https://github.com/sionescu/bordeaux-threads.git"
git clone "https://github.com/fukamachi/dexador.git"
git clone "https://github.com/fukamachi/fast-http.git"
git clone "https://gitlab.common-lisp.net/alexandria/alexandria.git"
git clone "https://github.com/fukamachi/proc-parse.git"
git clone "https://github.com/cl-babel/babel.git"
git clone "https://github.com/trivial-features/trivial-features.git"
git clone "https://github.com/fukamachi/xsubseq.git"
git clone "https://github.com/fukamachi/smart-buffer.git"
git clone "https://github.com/trivial-gray-streams/trivial-gray-streams.git"
git clone "https://github.com/fukamachi/quri.git"
git clone "https://github.com/rpav/fast-io.git"
git clone "https://github.com/fukamachi/cl-cookie.git"
git clone "https://github.com/dlowe-net/local-time.git"
git clone "https://github.com/Shinmera/trivial-mimes.git"
git clone "https://github.com/sharplispers/chipz.git"
git clone "https://github.com/takagi/cl-reexport.git"
git clone "https://github.com/cl-plus-ssl/cl-plus-ssl.git"
git clone "https://github.com/lmj/global-vars.git"
git clone "https://github.com/trivial-garbage/trivial-garbage.git"
Non-git dependencies:
("cl-utilities" :HTTPS "https://common-lisp.net/project/cl-utilities/cl-utilities-latest.tar.gz")
NIL
("flexi-streams" :EDIWARE-HTTP "flexi-streams")
("uiop" :HTTPS "https://common-lisp.net/project/asdf/archives/uiop.tar.gz")
("cffi" :HTTPS "https://common-lisp.net/project/cffi/releases/cffi_latest.tar.gz")
("chunga" :EDIWARE-HTTP "chunga")
("cl-ppcre" :EDIWARE-HTTP "cl-ppcre")
("cl-base64" :KMR-GIT "cl-base64")
("usocket" :HTTPS "https://common-lisp.net/project/usocket/releases/usocket-latest.tar.gz")

Suddenly, I remembered another similar project: ql-checkout.

Probably tomorrow we'll see how it works!


Planet Lisp | 16-Oct-2020 21:32

Quicklisp news: October 2020 Quicklisp dist update now available

New projects:

Updated projects: adopt, agnostic-lizard, algae, april, base-blobs, bdef, beast, binary-io, bobbin, bodge-blobs-support, bodge-chipmunk, bodge-glad, bodge-glfw, bodge-nuklear, bodge-ode, bodge-openal, bodge-sndfile, chancery, chanl, check-bnf, chipmunk-blob, chirp, ci-utils, cl-async-await, cl-base64, cl-buchberger, cl-capstone, cl-cffi-gtk, cl-collider, cl-covid19, cl-csv, cl-digraph, cl-flow, cl-forms, cl-gamepad, cl-grip, cl-ipfs-api2, cl-kaputt, cl-liballegro-nuklear, cl-markless, cl-marshal, cl-messagepack, cl-mixed, cl-muth, cl-naive-store, cl-netpbm, cl-patterns, cl-pcg, cl-portaudio, cl-pslib, cl-readline, cl-rss, cl-sdl2, cl-semver, cl-setlocale, cl-ssh-keys, cl-webkit, class-options, claw, clj, closer-mop, clsql, clsql-local-time, clunit2, com-on, common-lisp-jupyter, croatoan, crypto-shortcuts, dartscltools, data-lens, djula, eclector, femlisp, flexichain, font-discovery, gadgets, gendl, glad-blob, glfw-blob, glkit, golden-utils, gtirb, harmony, hu.dwim.def, hu.dwim.presentation, hu.dwim.quasi-quote, hu.dwim.rdbms, hu.dwim.web-server, hyperluminal-mem, hyperobject, jingoh, kmrcl, lack, literate-lisp, markup, mcclim, messagebox, meta-sexp, mgl-pax, millet, mito-attachment, mmap, mutility, nanovg-blob, nuklear-blob, ode-blob, openal-blob, origin, overlord, paren6, petalisp, pngload, postmodern, puri, py4cl2, reversi, ryeboy, sc-extensions, sel, serapeum, shasht, simple-flow-dispatcher, sly, sndfile-blob, stmx, stumpwm, ten, trivial-do, ucons.

Removed projects: cl-piglow, cl-proj, clutz, roan, scalpl.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!


Planet Lisp | 16-Oct-2020 15:57

Joe Marshall: Apropos of Nothing
Lisp programmers are of the opinion that [] and {} are just () with delusions of grandeur.
Planet Lisp | 16-Oct-2020 00:18

RSS and Atom feeds and forum posts belong to their respective owners.