In the fictional universe of H.P. Lovecraft, a Shoggoth is one of many unfathomable horrors populating the universe. In a December 2022 tweet, Twitter user tetraspace introduced the analogy of viewing a large language model like GPT-3 as a Shoggoth, and the “friendly user interface” provided by applications such as the OpenAI chatbot ChatGPT as a smiley we put on the Shoggoth to make it understandable.
I like the analogy so much I asked my daughter to draw me a richer image that also depicts my dystopian description of the future of “basic programming”: humans busy designing new smileys to put on the eldritch Shoggoth.
Because of temporal value shift, cultural variations, and bit rot, the smiley faces need to be regularly replaced. For instance, they need to relearn which topics are taboo and where, or that we’ve always been at war with Eurasia. Programmers, engineers and digital designers as worker bees in a construction site for flimsy, ephemeral masks for a demon at the Ministry of Truth. How’s that for nihilism!
Image created by Anna Husfeldt, released under CC BY-SA 3.0.
Trigonometric function is pretty dry. (3) SEC, double definition
Denounce operator in vector analysis and put it away. (6) DELATE = DEL + ATE (vb., past tense)
49 litres of sick. (3) ILL = IL (roman numerals for 49) + L
Turing machine’s second network. (4) ALAN = A (second letter in MACHINE) + LAN
Academic degree supports unlimited games for simple minds in the US. (5) AMEBA (US spelling of AMOEBA) = [G]AME[S] + BA
Open access journal rejected final proof in distance estimate. (4) AFAR = AJAR – J + F (last letter of PROOF).
Conceal often sample space. (4) LOFT, sample of “…L OFT…”
Objectively, we are American. (2) US = objective form of “we”
It negates where you drink. (3) BAR, double definition; the bar is used to negate a variable
The letters missing from 4 of the clues, indicated by […] above, are K A R P, a homophone of CARP (“sounds like nitpicking”). The answers to those clues are HARRISON, MEYER, ULLMAN, and FLOYD. The six members of the first STOC programme committee were Harrison, Meyer, Ullman, Floyd, Karp, and Hartmanis. The answer to the entire puzzle is therefore HARTMANIS, additionally indicated as “confused with H(ungarian) MARTIANS” = HMARTIANS*.
Themed cryptic crossword I compiled (over two fun evenings) for the STOC 2022 conference. Lots of short words, to make it super-friendly.
Special instructions. Four of six people that form a group related to our meeting appear in the grid, clued for their namesakes. Alas, each of these clues is missing a letter. This sounds like nit-picking, but the letters form the name of the fifth person. The sixth person of the group, not to be confused with the Hungarian “Martians,” is the puzzle answer.
Across 1. Spoke first at STOC, with handouts. (4) 4. Laugh at Romanian jail without paking for Ford, say. (8) 10. Danes maybe cycling for cake. (7) 11. Squared dodecahedron contains root. (4) 12. With competence, like Niels Henrik? (4) 14. Trouble sounds like beer. (3) 16. Lnsky impales most of suricate with tip of yari. (5) 17. Taster confused desserts. (6) 19. Programmer is a father. (3) 21. Some alternating state. (3) 22. Limaye without an almond, say. (3) 23. Bullet point keeps her in France. (6) 26. Assholes insert Landau symbol into abstracts after second review. (7) 29. Logic from the Horn of Africa. (2) 31. Like, with probability 1. (2) 34. Complex zero first encountered by lady friend. (7) 38. Partly hesitant at ternary bit. (6) 39. Supporting structure turns semi-liquid. (3) 41. Everything Arthur left to end of protocol. (3) 43. Unbounded fan-in polylogarithmic depth circuit model is fake. (3) 44. Maybe Tracey’s luxury car runs out of olynomial time. (6) 45. Pin dress you ordered last Friday initially returned. (5) 46. Relationship uncovered travel document. (2-1) 47. Be certain about last Pfaffian orientation. (4) 48. Detect conference cost size. (3) 49. Matching for horses? (6) 50. Collect information about first woman embracing the state. (8) 51. Central to better AI data storage technology. (4)
Down 1. Safer testable codes? (4, 4) 2. Maybe Arthur without Merlin is even worse … (5) 3. … definition of time than deterministic Arthur–Yaroslav. (3) 4. Cover us about first sign of integrality gap. (6) 5. Publish, but not ultimately sure about polynomial-time recurrence. (7) 6. 49 subsets the setter handles. (6) 7. Food at indiscrete algorithms conference held around Yale’s entrance. (4) 8. Revealed no deep confusion. (6) 9. Not having an arbitrary number of children. (4) 13. In Related Work section, over-inflate scientific contribution of Rabin in data structure. (4) 15. Iceland exists. (2) 18. Cuckoo hashing incurs five seconds. (3) 20. Function of what is owed, I hear. (3) 24. Cut off measure of computer performance, not taking sides. (3) 25. Two sigmas above the mean, or about ten delta. (8) 27. Coattail at breakfast? (3) 28. Twisted polygon in tessellation supporting functors. (7) 30. Tree parent has two left edges. (6) 32. Temporary visitor betrays topless arrangement. (6) 33. Trigonometric function is pretty dry. (3) 35. Denounce operator in vector analysis and put it away. (6) 36. 49 litres of sick. (3) 37. Turing machine’s second network. (4) 40. Academic degree supports unlimited games for simple minds in the US. (5) 41. Open access journal rejected final proof in distance estimate. (4) 42. Conceal often sample space. (4) 44. Objectively, we are American. (2) 47. It negates where you drink. (3)
Summary of my keynote talk on 4 November 2021 at the eponymous Norwegian national conference Nasjonal konferanse: Digitalisering i høyere utdanning 2021, hosted at Bergen University and organised with the Norwegian Directorate for Higher Education and Skills. The summary is somewhat disjointed and includes some of the slides.
1. Digitisation in Higher Education (summary)
The first part of the talk is about the potential use of digital technology to improve learning outcomes in specific contexts. When I gave the talk I showed a bunch of ephemeral examples that I had used myself, or which seemed otherwise cool, instructive, useful, or exciting in late 2021. These examples won’t stand the test of time, so I won’t repeat them here. A well-known, older example is the language learning application Duolingo.
Benefits include immediate feedback, scalability, transparency of assessment, accessibility, spaced repetition, and individual pacing or tracking of material. Common to the contexts where digital technology seems to be largely beneficial are
learning outcomes can be validly tracked and assessed,
it makes sense to design a clear progression or scaffolding through the curriculum,
relevant feedback can be provided at scale and automated.
More problematic are assistive technologies, often using machine learning or other forms of decision-making processes collectively known as artificial intelligence, that ensure surface conformity of written material. These have clear benefits for authoring but also equally clear negative consequences for assessment. Modern word processors allow the production of professional-looking prose and automate surface markers of quality (layout, grammar, style, phrasing, vocabulary), which traditionally were used as proxies for content in assessment of learning outcomes; online natural language processing services like Google Translate invalidate traditional forms of language proficiency assessment based on written material. In 2021, a beta version of the programming environment GitHub Copilot could automatically solve programming exercises from the task description. The same tools that perform plagiarism checks can also plagiarise very well.
So there is stuff that becomes somewhat easier by digitisation (transmission of knowledge), and where we have many ideas for what to do, and stuff that becomes much harder (valid assessment), and where we have very little idea about what to do.
A third theme is stuff that does become easier, but maybe oughtn’t. This is found under the umbrella of educational data mining – predicting student performance, retention, success, satisfaction, achievement, or dropout rate from harvested data such as age, gender, demographics, religion, socio-economic status, parents’ occupation, interaction with the application platform, login frequency, discussion board entries, spelling mistakes, dental records, social media posts, photos, and everything else. The promise of this field—and in particular, machine-learned predictions based on polygenic scores—is enormous, and there is an emerging body of success stories in the educational literature. I strongly dislike this trend, but that’s just, like, my opinion.
This summarises the first part, with the foci
what should we not primarily evaluate along the works/fails axis?
There are of course a ton of other problems, which I expanded on in the full talk:
digitisation favours certain disciplines and traditions of instruction
it systematically undermines the authority of educators
the intellectual property rights issues are ill understood and seem to have very little attention from educational administrators
routine student surveillance
2. What the Hard Part Is
I’m pretty confident about part 1 (digitisation for education), and return to a confident stance in part 3 (education for digitisation). In contrast, part 2 is characterised by epistemic humility. It’s my best current shot at what makes the whole topic hard to think about.
Over the years I have learned a lot about “How to think about digitisation” from the DemTech project at IT University of Copenhagen, which looks at digitisation of democratic processes, in particular election technology. “How to digitise elections” is a priori an easier problem than “How to digitise higher education”, because the former is much more well-defined. Still, many of the same insights and phenomena apply.
These problems include a (largely unhelpful) eagerness of various stakeholders to digitise for the sake of it. An acerbic term for this is “magical IT gravy”:
[…] politicians who still think that you can just drench a problem domain in “magical IT gravy” and then things become perfect.
Poul-Henning Kamp, 2021
Briefly put, if you digitise domain X then you will digitise your description of X. (A fancy word for this is reification.) Since (for various reasons, including lack of understanding, deliberate misrepresentation, social biases, status, and political loyalty) the description of X is seldom very good, digitisation reform typically deforms X. Since digitisation is seen as adaptive within organisations, it bypasses the conversations that such a transformation would normally merit.
Example: Electronic Elections
My ITU colleague Carsten Schürmann showed me this insightful taxonomy:
Let’s use voting as a concrete example to talk us through this. At the bottom is Functionality: voting is to increase a counter by 1 every time somebody casts a vote. This is easy to implement, and easy to digitise; any programmer can write an electronic voting system. Above that is Security: making sure the voting process is free (i.e., secret, private, non-coerced, universally accessible, etc.). This involves much harder computer science, for instance cryptography. You need a pretty solid computer science background to get this right. Above that tier is Validity: how do we ensure that the numbers spat out by the election computer have anything to do with what people actually voted? This is even harder, in particular without allowing coercion by violating secrecy. (Google “vote selling” if you’ve never thought about this.) Finally, after thinking long and hard about all these issues (which by now include social choice theory, logic, several branches of mathematics) you arrive at what elections are actually about: Trust. The overarching goal of voting is to convince the loser that they lost. And this requires that the process is transparent.
Applications at the Functionality stage are easy (if you’re a competent programmer), they are mono-disciplinary, and they happen by themselves because geeks just want to see cool stuff work. The higher up you go in the hierarchy, the more disciplines need to be included, so the requirement for inter-disciplinary work increases. An organisational super-structure (such as a university) is needed to incentivise and coördinate the things at the top.
What is Higher Education About?
To translate the above model from voting to Higher Education, the Functionality part is transmission of knowledge: all the cool and exciting ideas I mentioned in part 1. Security corresponds to preserving the privacy of learners throughout their educational history; this already conflicts with both educational tracking (e.g., for individualised progression) and reporting (e.g., for credentials). The first steps, probably in the wrong direction, can be seen in how universities in Europe react to the European GDPR legislation, which already takes up a significant amount of organisational resources. This layer is correctly seen as annoying, depressing, time-consuming and boring, but still outshines in enthusiasm the next layer: Validity. The very features of information technology that make it incredibly useful for transmission of knowledge (speed, universality, faithful copying, depersonalised expression, accessibility, bandwidth) make it incredibly difficult to perform valid assessments, in particular in high-stakes, large-scale exams of universal skills. This is because digital technology makes it very easy to misrepresent authorship and originality. (Think of playing the violin, drawing, programming, parallel parking, mathematics, translating between languages—which digital artefacts would constitute proof of proficiency?)
Yet this is all still manageable, and some universities or educational systems may even choose to get this right. For instance, I think I know what to do.
But the really Hard Problem of Digitisation is the one at the top of the hierarchy: What is higher education even about?
I don’t know the answer to that.
Here are some possible answers. Maybe they’re all correct:
The narrative I tell myself is very much the first one, the capital-E Enlightenment story of knowledge transmission, critical thinking, and a marketplace of ideas. We can call this the Faculty view, or Researcher’s, or Teacher’s. The Economist’s view in the 2nd row is, thankfully, largely consistent with that; it’s no mystery that the labour market in knowledge economies pays a premium for knowledgeable graduates—the two incentives are somewhat aligned. (It does, however, make a big difference in how important transmission of knowledge and validity of assessment are in relation to each other. Again, “why not both?” and there is no conflict. Critical thinking versus conformity is a much more difficult tension to resolve.)
Still, the question whether Higher Education creates human capital (such as knowledge) or selects for human capital is (and remains) very hard to talk about among educators. Very few teachers, including myself, are comfortable with the idea that education favours the ability to happily and unquestioningly perform meaningless, boring, and difficult tasks exactly like the boss/teacher wanted (but was unable/unwilling to specify honestly or clearly) in a dysfunctional group, and that it selects for these traits by attrition. Mind you, it may very well be true, but let’s not digitise that. (I strongly recommend Bryan Caplan’s excellent, thought-provoking, and disturbing book The Case Against Education to everybody in education.)
The two other perspectives on what Higher Education is about (Experiential and Social, both Student perspectives) are much more difficult to align with the others. Students-as-consumers-of-an-experience (“University is Live Role Playing”) and students-as-social-primates (“University is a Meet Market”) have very different utility functions than both Faculty and Economists.
The point is: if we digitise the phenomenon of Higher Education, we need to understand what it’s about. (Or maybe what we want it to be about.) Currently, Higher Education seems to do a passable job of honouring all four explanations. I guess my position is largely consistent with Chesterton’s Fence:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
G.K. Chesterton, The Thing (1929)
Even if we (as educators and educational managers) were to understand what education is about, and even if we were to agree on good ideas for how to proceed, none of us lives in a world where good decisions are just made. We have to communicate with stakeholders who are themselves beholden to other stakeholders, and need to understand that these individuals live in a complicated social or organisational space where they are evaluated on their individual, institutional, or ideological loyalty. Not on the quality of their decisions.
We must expect most organisations, including most universities, to make decisions about digitisation in higher education that are based on much more important factors (to the stakeholders) than “whether it works”. Given the choice between trusting the advice of a starry-eyed snake-oil salesman (or, in our case, a magical IT gravy salesman) and a reasonable, competent, experienced, responsible, and careful teacher, most responsible organisations will choose the former (!). The feedback loops in education are simply too long to incentivise any other behaviour.
3. Educating for Digitisation
The last part of the talk—about how to prepare higher education for digitisation—is really short. Because (I believe) I know what to do. I consider it a solved problem.
Here is the list of things to do:
Do teach basic programming.
Basic programming is teachable; it is mostly learnable by the higher education target demographic; it is easily diagnosed at scale; it works extremely well with digitisation-as-a-tool; it is extremely useful now (in 2021) and applies to many domains; it will likely be even more useful in future, and in more domains; it is highly compatible with labour market demands; and it is an enabling tool in all academic disciplines.
If you’re on board with this, my best advice for anyone with the ambition to teach basic programming is to adopt the following curriculum:
Point 2 is really important. In particular, my advice is to not add content such as applications, history of computing, ethics, societal aspects, reflection, software design, project management, machine learning, app development, human–computer interaction, etc. Mind you, these topics are extremely interesting, and I love many of them dearly. (And they should all be taught!) But unless you are a much better teacher than me, you will not be able to make the basic programming part work.
That’s it. The full talk had better flow, included some hands-on demonstrations of cool applications, and had more jokes. I am really grateful to the organisers for giving me the chance to think somewhat coherently about these issues, and had a bunch of very fruitful and illuminating conversations with colleagues both before and after the talk. I learned a lot.
Presentation on 1 November 2021 at the annual meeting of Gymnasiernes, Akademiernes og Erhvervsskolernes Biblioteksforening (GAEB), the Danish association of libraries at gymnasiums, academies, and vocational schools, in Vejle, Denmark.
I had the pleasure of giving a series of lectures on algorithms for information search on the internet to the association of Danish librarians. This is a highly educated crowd with a lot of background knowledge, so we could fast-forward through some of the basic stuff.
I am, however, quite happy with the historic overview slide that I created for this occasion:
Here’s an English translation:
What do we expect from the search engine?

What’s searching about?
Which information exists? | What has high quality? | What is relevant for me? | What ought I consume? | Maintain my status, prevent or curate information

Core technology to be explained
Crawlers, keyword search, categories | Reference network topology | Implicit user profiles, nearest neighbour search, cookies | Source of profit

Privacy, filter bubbles | Misinformation, tribalism, bias

Rough overview of the historical development of information search on the internet
Particularly cute is the row about “what is to be explained”. I’ve given talks on “how searching on the internet works” regularly since the 90s, and it’s interesting that the content and focus of what people care about and what people worry about (for various interpretations of “people”) changes so much.
Here is the late 2000s version: How Google works (in Danish). Really nice production values. I still have hair. Google is about network topology, finding stuff, and the quality measure is objectively defined by network topology and the PageRank algorithm.
In 2011, I was instrumental in placing the filter bubble narrative in Danish and Swedish media (Weekendavisen, Svenska Dagbladet). Suddenly it’s about subjective information targeting. I gave a lot of popular science talks about filter bubbles. The algorithmic aspect is mainly about clustering.
Today, most attention is about curation and manipulation, and the algorithmic aspect is different again.
I briefly spoke about digital education (digital dannelse), social media, and disinformation, but it’s complicated. A good part of this was informed by Hugo Mercier’s Not Born Yesterday and Jonathan Rauch’s The Constitution of Knowledge, which (I think) get this right.
The bulk of the presentation was based on somewhat tailored versions of my current Algorithms, explained talk, and an introduction to various notions of group fairness in algorithmic fairness, which can be found elsewhere in slightly different form.
There was a lot of room for audience interaction, which was really satisfying.
At a conference in Norway last week I learned a new word, coined by Tore Burheim, IT Director at Bergen University: overvåkningsomsorg. It’s a variation of Shoshana Zuboff’s brilliant word overvåkningskapitalism, surveillance capitalism.
There is no perfect translation into English: omsorg is care, so the direct translation would be surveillance care, but that doesn’t quite work. Health care is “taking care of health”. Surveillance care would be “taking care of surveillance”. That’s not what is meant. Overvåkningsomsorg is the use of surveillance for ostensibly benevolent reasons; it is surveillance-for-care: a Huxleyan or Benthamian society where the system puts you under surveillance because it cares for you. Where Zuboff’s dystopia is easy to dislike (who likes surveillance? who actually likes capitalism?), overvåkningsomsorg is a delicious oxymoron.
I propose surveillance paternalism as the English translation, even though it’s not perfect. (“Paternalism” is not as warm as “care”. Surveillance benevolence? “The age of benevolent surveillance”?) Danish overvågningsomsorg and Swedish övervakningsomsorg are easy. For German, I propose Überwachungsfürsorge or maybe even Überwachungsbarmherzigkeit or something like Wohlfahrtsüberwachung.
Anyway, great word and great book title. Somebody should write this.
I am delighted to be nominated for the EATCS Council, on which I have served for a full term now.
Looking back at my election statement from 2017, it is dominated by my ambition to increase the quality of ICALP, primarily by giving ICALP a steering committee.
I am happy that I can just write DONE next to this action point, see https://eatcs.org/index.php/committees. The ICALP steering committee turned out to be a complete success. I am extremely proud of what we’ve done with ICALP, including reacting very quickly to the COVID pandemic in 2019–2020. Mostly the success is due to the incredibly competent and pleasant people working inside the committee (ably headed by Anca Muscholl), and the amazing atmosphere that continues to permeate the entire TCS community.
I worry about the publication and review model in the sciences, including in TCS, and have spent a lot of time involved in the advisory board of TheoretiCS, tirelessly headed by Thomas Schwentick. TheoretiCS will be an open access journal of very high quality covering all of TCS; having the journal tightly integrated into the community is a core value, and I serve as the representative of ICALP.
This has been a very long process that I am extremely excited about, and I’m proud of what we’ve done so far.
TCS and the sciences
My 2017 election statement includes a passage about “how the mathematical discipline of theoretical computer science can interact with its surroundings”.
This remains a central question for me, but a much harder one than helping to build steering committees or journals. I had the honour of serving as chair of the Swedish Research Council’s panel for (all of) computer science in 2019, 2020, and 2021, which has solidified my understanding of both CS and the role of TCS inside CS.
The words are not the hardest part, but from years of teaching this material we have learned that words do sometimes invite enormous misunderstandings. You can be led down a garden path by a fancy word that you learned, only to discover that it means something else this week.
Luckily, the terminological issue is relatively easy to remedy by just writing it down and raising awareness. Here are some of the false friends I am aware of. (Do contact me with more examples, in particular ones that actually confused you.)
stack

A stack is a specific abstract data type, see Stack (abstract data type). When we say “stack” in an algorithms course, we almost always mean the data type.
However, the word “stack” also refers to a specific data structure in the runtime system of an active computer programme, see Call stack; this is also called the execution stack, program stack, control stack, run-time stack, or machine stack, and is often shortened to just “the stack”.
When we talk about recursion and its implementation in our course, some people may say “the stack,” rather than “the call stack” or some more unambiguous term. For instance, you may overhear “in your graph exploration programme, you wouldn’t have run out of stack space if you had used a stack.” The first stack is the call stack; the second is the abstract data type.
The (call) stack is a stack (abstract data type), but not vice versa.
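To make the overheard advice concrete, here is a small Python sketch (mine, not from any particular course): the recursive exploration consumes call-stack space, while the iterative version uses a stack in the abstract-data-type sense, realised by a plain Python list.

```python
def explore_recursive(graph, start, seen=None):
    """Graph exploration by recursion: the *call stack* tracks the progress."""
    if seen is None:
        seen = set()
    seen.add(start)
    for neighbour in graph[start]:
        if neighbour not in seen:
            explore_recursive(graph, neighbour, seen)
    return seen


def explore_iterative(graph, start):
    """The same exploration with an explicit stack (abstract data type),
    so there is no deep recursion and no risk of exhausting call-stack space."""
    seen = {start}
    stack = [start]  # a Python list used as a stack: append = push, pop = pop
    while stack:
        v = stack.pop()
        for neighbour in graph[v]:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen
```

Both functions visit the same vertices; only the bookkeeping differs.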
heap

A heap is a specific data structure that implements the abstract data type priority queue, see Heap (data structure). When we say “heap” in an algorithms course, we almost always mean this data structure.
However, the word “heap” is also used to refer to a specific chunk of dynamically allocated memory, see Dynamic memory allocation. We almost never use that term in the present course, but you may come across it in relation to data structures in other material, for instance about allocating linked lists and other dynamic data structures.
The (memory) heap is not a heap data structure.
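As a minimal Python illustration of heap-as-data-structure (nothing to do with the memory heap), the standard library module heapq keeps a list heap-ordered and thereby implements a priority queue:

```python
import heapq

pq = []  # a plain Python list, kept heap-ordered by heapq
for priority, task in [(3, "revise"), (1, "think"), (2, "outline")]:
    heapq.heappush(pq, (priority, task))

# Tasks come out in priority order, smallest priority first.
order = [heapq.heappop(pq)[1] for _ in range(3)]
# order == ["think", "outline", "revise"]
```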
map

In mathematics, a map is a “function,” a something that connects (“maps”) things from one set to another, see Map (mathematics). The Danish word is afbildning.
Some programming languages use the term to refer to what we call “symbol table” in this course: a something that maps “keys” to “values”, see Map (computer science). Other words are associative array or dictionary. An example is Java’s interface java.util.Map, which describes an abstract data type for a symbol table and is famously implemented by data structures such as java.util.HashMap.
However, in other programming languages, map is used for the application of a function to a sequence, see Map (higher-order function). This is true for Python. (Python instead uses “dictionary”, dict, for its symbol tables.)
So in Python, map is a higher-order function; in Java, Map is an interface. These things have nothing to do with each other.
One can try to stick to the term symbol table for the abstract data type.
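A short Python sketch of the two unrelated meanings (my example, with made-up names): map as a higher-order function versus a dict used as a symbol table.

```python
# map as a higher-order function: apply a function across a sequence
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]

# a symbol table (what Java calls a Map): keys mapped to values
symbol_table = {"n": 4, "algorithm": "merge sort"}
symbol_table["running time"] = "O(n log n)"
```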
running time / runtime
Running time is the time to execute an algorithm, synonymous with Time complexity. It is given as a growth rate in terms of a size measure, using some asymptotic notation. We can say “the running time of merge sort is O(n log n)”.
We may also use it to refer to the actual execution time (in seconds) of a concrete piece of code. We can then say “the execution time for ex8.cpp was 2.4 seconds on input 002-huge-43.in”.
In contrast, the runtime system is the environment in which a programme is executed, and runtime is a phase of a programme’s life cycle. We talk about runtime errors if a programme fails during execution, and may want to distinguish between checking for an array index-out-of-bounds error at runtime or at compile-time. Neither of these concepts has anything to do with running time.
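The concepts side by side, in a hypothetical Python snippet (the function is invented for illustration): the docstring states a running time, perf_counter measures an execution time, and the caught exception is a runtime error.

```python
import time

def merge_sort(a):
    """Running time O(n log n) -- a statement about growth rate."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

# Execution time, measured in seconds on one concrete input:
start = time.perf_counter()
merge_sort(list(range(10_000, 0, -1)))
elapsed = time.perf_counter() - start

# A runtime error, by contrast, happens while the programme runs:
try:
    [1, 2, 3][10]
except IndexError:
    pass  # index out of bounds, detected at runtime, not at compile-time
```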
graph

Graph means two things in algorithms and discrete mathematics; they have nothing in common.
Most of the time we mean a combinatorial structure that models a pairwise relation, see Graph (discrete mathematics). In this context, “graph” is a synonym of “network”.
It may also mean graph of a function, which is what you have learned in school. Such a graph is either a set of points or the drawing of these points. This is what we mean when we say “graph of a function,” or “draw the running time as a graph.”
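In Python, say (my sketch), the two meanings look nothing alike:

```python
# graph as a combinatorial structure (a network): adjacency lists
network = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}

# graph of a function: a set of points (x, f(x)), as drawn in school
f = lambda x: x * x
function_graph = [(x, f(x)) for x in range(5)]
# function_graph == [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```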
random

“Random” means two different things: stochastic (as in throwing dice or drawing lots) or unpredictable (as in arbitrary). These are very different, and your instructor will be very cross with you if you should mix them up.
When, for instance, we pick the split element in quicksort randomly, we mean “uniformly at random and independently from previous random choices.” It’s very important that you don’t misread this as “arbitrary”: if the split element in quicksort is picked arbitrarily (say, by your worst enemy), the analysis breaks down and quicksort becomes slowsort.
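For concreteness, a minimal random-pivot quicksort in Python (using a three-way partition to keep the sketch short): random.choice picks the pivot uniformly at random, independently of all earlier choices.

```python
import random

def quicksort(a):
    """Quicksort with a pivot chosen uniformly at random -- the stochastic
    sense of "random", emphatically not "arbitrary"."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)  # uniform, independent of previous choices
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

If an adversary picked the pivot instead, the expected O(n log n) analysis would no longer apply.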
Of course, when we talk about the very important random access machine (RAM) model of computation, the word “random” means arbitrary/unconstrained/unpredictable. Here, it is important that accessing memory location M[i] takes constant time, no matter how i was chosen. Even your worst enemy wouldn’t be able to make the RAM spend more than constant time. So: the word random does not mean that the constant-time bound holds only for random-in-the-stochastic-sense access, or (God forbid) that the result of M[i] is random or even arbitrary.
I believe that “random” only means “decidedly nonrandom” when we’re talking about the random access machine, so you’re probably safe in interpreting it as “taken from a probability distribution, probably uniformly and independently unless something else is said” in all other contexts.
node

A node is a localised swelling in a network; it is a synonym of “knot,” both from the Latin nodus. Think of a fisherman’s net.
In an algorithms course, node means “basic unit of a data structure”, see Node (computer science). It often appears explicitly in an implementation, typically as an object of class Node, and sometimes only implicitly as a useful abstraction.
We will also use node for the fundamental unit of a graph, see Vertex (graph theory). We will try to stick to “vertex” for such nodes, but the terms are synonyms and you will meet node-as-vertex in many places. Another good word for node-as-vertex is point.
Perhaps confusingly, when you construct the data structure to solve a basic graph problem, typically you don’t use a node (data structure) for a node (combinatorial object).
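A Python sketch of that last point (class and variable names invented): the linked list uses an explicit Node class, while the graph’s nodes are just labels in an adjacency structure.

```python
class Node:
    """Basic unit of a data structure: here, a singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = Node(1, Node(2, Node(3)))  # explicit Node objects, chained together

values = []
cur = head
while cur is not None:  # walk the list node by node
    values.append(cur.value)
    cur = cur.next

# The nodes (vertices) of a graph, by contrast, need no Node class:
graph = {1: [2, 3], 2: [1], 3: [1]}  # vertices are just dictionary keys
```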
linear

Linear means at least two completely unrelated things. In “linear programming”, the word is used as in calculus (linear function), to highlight the fact that the variable has degree 1 (rather than appearing squared).
In “linear probing,” a collision strategy employed by hash tables, it just means “one-after-the-other”.
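The one-after-the-other sense, sketched in Python (my illustration; the table is just a fixed-size list with None for empty slots):

```python
def insert(table, key):
    """Insert key into an open-addressing hash table using linear
    probing: on a collision, try the next slot, one after the
    other, wrapping around at the end of the table."""
    m = len(table)
    i = hash(key) % m
    while table[i] is not None:   # slot occupied: probe linearly
        if table[i] == key:       # key already present, nothing to do
            return
        i = (i + 1) % m
    table[i] = key
```

Nothing here is “random” or “linear” in the calculus sense; the probe sequence i, i+1, i+2, … is what earns the name.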
In “introduction to programming,” or “computer programming” or “Java programming,” the word programming means “writing instructions”, typically using a programming language. In “linear programming,” “quadratic programming,” or “dynamic programming,” the word programming just means planning or scheduling, as in “TV programming” or “theatre programming,” and could be usefully replaced with scheduling or optimisation almost everywhere it occurs. As a particular confusion, constructing a linear program (or “programme” if you’re fancy) means setting up a bunch of constraints (these constraints are linear functions), but does not involve writing the computer program to solve those constraints. You can write a Java program to solve a linear program.
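To make the distinction concrete, here is a toy linear program of my own: maximise x + y subject to x + 2y ≤ 4, 3x + y ≤ 6, x ≥ 0, y ≥ 0. The linear program is the constraints and objective, not the code; the Python below is merely one (naive) computer program that solves it, exploiting the fact that an LP optimum is attained at a vertex of the feasible region, whose coordinates I have listed by hand for this tiny instance. A real solver would use simplex or an interior-point method.

```python
# Hand-enumerated intersection points of the constraint lines:
candidates = [(0, 0), (2, 0), (0, 2), (1.6, 1.2)]

# Keep only the feasible ones (small tolerance for float arithmetic):
feasible = [(x, y) for (x, y) in candidates
            if x + 2 * y <= 4 + 1e-9 and 3 * x + y <= 6 + 1e-9
            and x >= 0 and y >= 0]

# The best feasible vertex is the optimum:
best = max(feasible, key=lambda p: p[0] + p[1])
```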
Dynamic means “the entries or instance can change” or “the entries are never recomputed”. In the former sense, it is used in dynamic array (an array whose length can change – all arrays are mutable, so in that sense they’re all dynamic) or dynamic graph algorithm (an algorithm for a graph that undergoes changes.) In the latter sense it is used in dynamic programming, a specific algorithmic optimisation strategy, where the whole point is that the results of functions without side effects never change, so we might as well store them in a static table and never recompute them.
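The “store once, never recompute” sense in six lines of Python (my example, using the standard memoisation decorator from the standard library):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci by dynamic programming: fib has no side effects,
    so each value is computed once, cached, and never recomputed."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

The cache behind `fib` only ever grows with values that are fixed forever; nothing about it is dynamic in the dynamic-array sense.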
The history of the term “dynamic programming” is documented by Bellman and quite entertaining.
Asymptotic means “not falling together”, and the term draws attention to the fact that one function is the asymptote of another by virtue of the two never touching. This is typically a topic of high school maths, see asymptote. In the analysis of algorithms, “asymptotic” almost always just means “for large values of the input size” and has nothing to do with the two functions not touching or even approaching each other. In particular, a function is asymptotically itself, and 2 is 1 in asymptotic notation; see Big O notation. There is an intermediate concept of asymptotic analysis which somewhat explains how the two terms are related.
For immediate release. To celebrate the publication of Cynical Theories in the middle of the exciting cultural and academic climate of 2020, Helen Pluckrose, James Lindsay, Herbert Marcuse, and Max Horkheimer are now playable characters in Secret Sartre, the social deduction game of academic intrigue and ideological infighting.
“The intellectual corruption of sense-making institutions, in particular academia, that we have been witnessing in the past few decades is increasingly credited to the post-Marxist ideas of Critical Theory in the Frankfurt school, rather than only the traditions of French postmodernism. Players of Secret Sartre have expressed their desire to acknowledge this perceptual shift – to allow different voices to be heard in the game, to open the shared experience to a different narrative – and Max Horkheimer has long been a favourite request among both devoted and casual players,” says game designer Thore Husfeldt.
“To complete the Critical Theory faction, we’ve added Herbert Marcuse, father of the New Left, and another central Frankfurt school thinker. Through his advisor Heidegger, he also connects Secret Sartre to that other large totalitarian tradition of the 20th century, Fascism. This also allows us to acknowledge our ludic, ideological, and intellectual debt to the original game.”
To balance the new expansion offering, both Helen Pluckrose and James Lindsay join the quixotic Science faction, championing values such as truth, humanism, evidence, conversation, honesty, and enlightenment. Pluckrose and Lindsay are the authors of Cynical Theories (Pitchstone, 2020) and became famous through the Grievance studies affair.
The current expansion to the original 2015 game follows a previous expansion from 2017, which added Jordan B. Peterson and Bret Weinstein to the forces of the Enlightenment.
The game expansion is available as a free download immediately. To unlock Horkheimer for actual play in Secret Sartre requires purchase of Cynical Theories in hardcover or a sizeable donation to New Discourses.