Wordplay beta is live! We’d love your feedback on everything (language design, APIs, tools, editors), and your contributions, if you’re so inclined.
Fun indeed! I started the tutorial and put myself into "ignorant mode", trying not to anticipate anything. That got me stuck at the point shown in the attached picture. Why is the string shown in red and in double quotes on the stage, if it's blue and in single quotes in the source editor and in the explanatory text?
Also: "They evaluate to 'hello'" isn't very useful as an explanation to me. First of all, "evaluate" is a transitive verb in non-technical English. But even if I accept its CS usage, the sentence seems to claim that all Text evaluates to "hello".
Talking about the tutorial, I missed most of it because I hit 15/15 and thought that was it 🙂 I was thinking it didn't provide much of a tour.
I'm going through the full tour now. Maybe showing x / y of the chapters, or showing the outline pinned on the left would help? I know it could be confusing with multiple x / y numbers.
Going through the tour, I find the bouncing basketball the most interesting example, because of all the Motion and mass and so on: how it's not only implicitly animated, but actually bounces. I would use that example often to show the power underneath the language.
Thanks for the feedback! We’ll think about ways to make the full scope of the tutorial clearer, without adding clutter.
Hey everyone -- long-time lurker, first-time poster. Thrilled to share the open beta launch of MightyMeld, a visual dev tool for React that I've been building with some friends of mine. mightymeld.com
Just posted this blog post telling the story and why I'm excited -- mightymeld.com/blog/open-beta
Would love to hear your thoughts! I'm a former academic who's been driven into industry out of a desire to make the theoretical practical. MightyMeld is a kind of "visual coding" that is not like any "visual coding" I would have imagined, had we not built this thing starting from real production code and asking "how can we make this more visual?".
Hiya, if you're interested in:
Then you clearly need to read my latest fine work:
duncancragg.substack.com/p/from-app-trap-to-freedom-space
TL;DR: we can break free of the "app trap" by simply building an OS without apps!
(Don't forget to subscribe so you won't miss future updates, right into your mailbox...)
I'll kick off the conversation thread.
Let me know what you think, either here or in the article comments.
It's aimed at non-techies, so don't pull me up on minor technical issues! One thing I should say: I've had to use the word "object", because it's what people will understand, and there's no other word in general use for what they are. But they're not OO objects, they're data chunks that are "internally animated", so the inverse of OO objects which are method-wrapped data.
Yes! Smashing up apps and data-silos into concept-based, self-attaching nodes is the way. I like the Excel-like logic basis, allowing introspection and mouldability. I've been thinking along a similar track... could you make it even higher level? Where the data isn't just random binary/half-labeled spreadsheet cells, but actually modeled/given meaning through canonicalized constraints/relations? Possibly opening up not just modular components, but fully dynamic interactions self-assembling from slices of information?
Also, have you seen Urbit? The focus on local-first, shareable, personally owned data, in an introspectable non-blob format, with a separate logic/functionality layer, and yet another separate render/UI layer...
Love the concept of semi-physical rooms! A place for chat, a place to hang, a place to research! All designable by you, the end-user, through components you've found on your travels through the lands! YES!
When you link the presence sensor to the lamp, how does it know? Sane defaults akin to node n noodles connections? What about wanting a dimmer? Delayed turn-off? You make those as connection boxes (node n noodles or sheets) in between? Fun indeed!
Wow, thanks, a torrent of points... working backwards:
Yes, a lamp knows that if it's linked ("wired") to a presence sensor it does one behaviour; if linked to a switch, another; if a dimmer, another. So probably hardcoded behaviours, cos they're kinda obvious. Then user-written rules for the less obvious ones, or custom stuff like the delays.
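Roughly this kind of shape, as a made-up sketch (not actual Object Net code; all the names here are invented):

```typescript
// Made-up sketch, not actual Object Net code: all names here are invented.
// The idea is just that the lamp's default behaviour is keyed on what it's wired to.

type LinkKind = "presence-sensor" | "switch" | "dimmer";

interface Lamp {
  on: boolean;
  brightness: number; // 0..1
}

// Hardcoded, "obvious" defaults per link type.
function defaultBehaviour(lamp: Lamp, linkedTo: LinkKind, value: boolean | number): Lamp {
  switch (linkedTo) {
    case "presence-sensor":
      return { ...lamp, on: Boolean(value) };                  // follow presence
    case "switch":
      return { ...lamp, on: !lamp.on };                        // toggle on each press
    case "dimmer":
      return { ...lamp, on: true, brightness: Number(value) }; // follow the level
    default:
      return lamp;
  }
}

// Less obvious behaviours (like a delayed turn-off) would be user-written rules
// layered on top of these defaults.
```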
You could have nodes-and-wires style, or Unix-pipes style, if you like, but it's not essential. Whatever is most expressive.
Yes, picking up stuff (by grabbing their links) as you travel, and then reusing them back home is certainly what I have in mind!
I knew about Urbit a while back, but like many, I was put off by the politics! However, I've since become non-political, or maybe something more like an anarchist! So yes, I'll go back and see how far they've come over the last four years or so.
Not sure what you mean in the first comment, which is why I'm replying in reverse order!
... modeled/given meaning through canonicalized constraints/relations? Possibly opening up not just modular components, but fully dynamic interactions self-assembling from slices of information
example?
Nice! The concept example: a specific person might be a node in the infinite knowledge graph. Putting it in the context of a list, on a 2D surface of a certain size and viewing distance, may result in contact cards; putting them in a 3D wallet may show them as "physical" business cards; sending one to the full-page printer may print a CV... Problem is... many permutations... hard-coding behaviors isn't scalable. We need a way to dynamically adapt to each situation (the combination of data slices and the context), with seamless ways of guiding it closer to the desired intention. For this, the computer needs to be included in the work, and given the relevant knowledge and agency to handle it.
Many AI solutions still suffer the "better horses" problem: the end-user needs to be empowered with a language to communicate their desires. Even reading the mind might not be enough, as the right language gives you the mental clarity in the first place. AI solutions could still guess well, and gather their own experience... but if we want to include the human... Even without, the AIs themselves need inter-communication and clarity of thought. I'm proposing work towards a system facilitating such. A system where the current grunt-work of software development isn't applicable anymore, because of different abstraction levels and perspective. A system where the freedom and flow of creation and collaboration is found at yet a higher level.
So, firstly, yes, you'd have a slightly different render of a given standard object type depending on context: whether in a list, embedded in another object, or pulled out on its own into the space. However, there's no need to do a different render for 2D and 3D: it'll be the same one. I'm thinking neumorphic in both. (It's using Vulkan for both, underneath.)
You wouldn't get a CV if you printed it, no, you'd have to jump the link from the contact card to the person's CV. If you were viewing the contact in 2D, then expanded the embedded link to the CV (seen as a smaller panel inside the contact), then hit "print", you'd get what you saw, CV included.
So yes, there's both some hard coding of behaviours of objects, and a default render of any standard recognised object type. But you can extend both.
You suggest AI help for programming? And working at a higher conceptual level? I've probably over-summarised your point, so apologies there. So, I'm not working on AI or AI-to-human, etc., conceptual languages, except the programming language is declarative so intended to be humane, unlike imperative ones. It's what I call a "Domain and Target Independent Language" that is meant to be general purpose and conceptually intuitive without being Nat Lang.
Nice! Same here: AI isn't in the core, but it could possibly gain similar power by using the core/lang the same way the human does. Similarly, if the core facilitates human-human collab, it may also facilitate human-AI or AI-AI collab. Side note to show how it has fundamental long-term value in the post-GPT era.
Exactly! If the contact card needs a link to the CV, where is the data!? E.g. the name and profile picture? Are they both linked in each, or duplicated? If linked to the person entity/node, RDF-style or similar, then can the CV not be a virtual construct from available info in the graph, with the possibility to override the heuristics on a case-by-case basis if needed, or even better, update the heuristics to facilitate improved CVs all over?
E.g. Obsidian-style: are the card and the CV separate but linked .md files, with info duplicated; or is each property atomic, and related rather than linked, such that a generic CV view could be shown?
Nice! Same here: AI isn't in the core, but it could possibly gain similar power by using the core/lang the same way the human does. Similarly, if the core facilitates human-human collab, it may also facilitate human-AI or AI-AI collab. Side note to show how it has fundamental long-term value in the post-GPT era.
Yes, any human-cognition-aligned notation for data and rules will also be AI-aligned, one would hope!
Exactly! If the contact card needs a link to the CV, where is the data!? E.g. the name and profile picture? Are they both linked in each, or duplicated? If linked to the person entity/node, RDF-style or similar, then can the CV not be a virtual construct from available info in the graph, with the possibility to override the heuristics on a case-by-case basis if needed, or even better, update the heuristics to facilitate improved CVs all over?
Duplication is frowned upon: it's all links. So yes, you'd build your CV from bits all over the place, a photo from here, a company logo and blurb from the company's collections, etc.
Not sure what you mean by "heuristics", though?
E.g. Obsidian-style: are the card and the CV separate but linked .md files, with info duplicated; or is each property atomic, and related rather than linked, such that a generic CV view could be shown? Auto-adjusting with time?
Card and CV separate and linked, yes, of course. No dupes. Not sure what you mean by the "atomic", "related", "generic", "auto-adjusting" bits though!?!?
In essence, do those links bear explicit meaning, or is it tacit knowledge unavailable to the computer? Having done the CV once, is it generalized for all? It's the difference between compounding growth and fostering an environment where monotonously rebuilding what's already been done stays the status quo.
Each link is a unique string that is essentially opaque or semantics-free from the HX (human experience!) point of view, but can still be used by the P2P layer to tunnel location stuff.
The CV type would be an aggregate of smaller types - it's a history so that's a list of calendar event types, for example. So the OS front end could either rely on the render of those elemental lists and events, or it could recognise the super-type of "CV" to render something more CV-aware.
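Roughly this kind of shape, if it helps (a made-up illustration, not the actual Object Net types; the field names are invented):

```typescript
// Illustrative only: invented shapes, not the real Object Net object model.
// Links are opaque unique strings; the HX never reads meaning out of them.
type Link = string;

interface CalendarEvent { type: "event"; start: string; end: string; title: string; }
interface CV            { type: "cv"; photo: Link; history: Link[]; } // history: links to event objects

// The front end can either fall back on the generic render of the elemental
// parts (just a list of events), or recognise the "cv" super-type and do
// something more CV-aware with exactly the same linked data.
function renderCV(cv: CV, resolve: (link: Link) => CalendarEvent): string {
  return cv.history
    .map(resolve)
    .map(e => `${e.start} to ${e.end}: ${e.title}`)
    .join("\n");
}
```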
I had another look at Urbit, and it's interesting, but the "single function OS" over a persisted event log type of thing isn't really my kinda stuff. It could end up being similar to the Object Net and Onex from an end-user PoV, but I could find nothing on the site about why anyone would use it, what it would look like in the HX, etc. And, as soon as I see the "B" word (b..b...bb..blockchain!) I usually hit the quit.
"[...] over a persisted event log type [...] HX [...] B-word" yes, same! Reference from the aspect of user-owned data and that different "apps"/views can overlappingly utilize data. Projects get one or a few aspects right, but have yet to see someone put it all together! Enjoyed your writeup, checks many boxes! Would love to play around in that world 🙂 Just the customizable "chat-room" is gold.
This perspective on data and how to present and manipulate it should be foundational.
WILL be foundational at some point.
But: note the corpses behind you: OpenDoc, OLE, …
I write about a suitable distributed data and security model for this sort of project at: frest.substack.com
@Guyren Howe "[...] WILL be foundational at some point [...]" ❤ Let's bring it there! And yeah, oh yes. I've been cursed for the last 10yrs. Going full in once more. As long as you keep going you're not dead!
The way to build this now is with suitable services (local or remote) and an app to access them.
The foundational data presentation should be values in namespaces and relations. APIs should be functional and relational.
📝 FREST Substack | Guyren Howe | Substack
The substack for FREST: computing for everyone.
The UI should be somewhat like Access or FileMaker, giving the user the direct ability to manipulate the relational data storage. Users can quickly become quite adept at manipulating relations.
My FREST project is in part about making that distributed. I should be able to “join” multiple, distributed data sources and functions in an ad hoc manner.
TL;DR: we can break free of the "app trap" by simply building an OS without apps!
My main criticism of this claim is the word "simply". My second criticism is "simply building": that's only the first step, after that there is a long battle for adoption, trying to convince people to use an OS that won't let them access their photo collection stored at Google.
But yes, I do want to see this happen!
@Mike Austin Hi I see you put a thumbs down on the post above where I said "But did you subscribe?" Wassup? 😧
Konrad Hinsen Yes, it's conceptually "simple" to have an OS without apps, but "building" will indeed take from now until my last breath... All our data will be out there still, so as long as you have an API to those photos on Google, you'll be up and running! Things like email and the static web, and many chat applications, can be brought in to the Object Network. Anything that allows data to be grabbed (and shoved!) - via a protocol or a filesystem. I could even (reluctantly!) allow for old fashioned apps to be rendered in-world, starting with (and maybe ending with!) a browser.
But I'm currently seeing the project as a "Lab" for Proof-of-Concept work, where I don't need to make any compromises such as that. I recently let go of any hope that anyone would be using it any time soon, to be honest! But things like blockchain stuff and Urbit, even though I'm not a big fan of either, do give me hope that completely new things can gain traction.
Duncan Cragg Oh oh! You are actively full in building this now?! Outline of steps/areas of focus + some timeline available?!
I've been going back and forth between different entry points on the behemoth of FoC... text language, logic engine, data engine, graphics engine, networking component... slowly realizing that working consistently and productively alone is the first one to tackle, so I'm going back to basics, making quick playtoys to get the doing muscle warmed up again; i.e. I'd love to get something like endless paper (an infinite zoom/pan canvas) going (Rust+wgpu) and published, then MMO functionality added, text rendering, DOM-element attachments, Obsidian integration, a Rust+GPU fn node-n-noodle graph playground (just take the Monaco editor as a DOM attachment), and we're off to the races.
I've been building "it" (for some always-changing value of "it") for decades! I'm able to work on it more these days, but timelines would be foolhardy given I've got random things coming up all the time!
I'm currently busy on the 3D HX ("Human Experience", UI, UX), porting a 2D UI from a smartwatch version of the OS I made recently. This allows you to create, edit and link objects. That's fixing the "Balkanisation" "app trap" - so you can mash up lots of little objects of diverse types, any way you like.
Then next I'll continue on the P2P protocol I started, focusing on cryptography stuff and proxy, cache and router functions. That's fixing the "Big Tech" "app trap", of course!
Finally, I'll get back to porting my programming language from Java to C! So fixing the "Blob" "app trap" with EUP.
So yeah, far too much for just me between now and my final gasp! I'll stay alive as long as I can.
actually I think I've found you on GitHub, so I'll potter around in there for a bit...
Do you have a link to anything you were talking about above? The stuff on GitHub seems... quiet...
This reads quite similar to what Alex Obenauer is doing with WonderOS - alexanderobenauer.com
In my current work, I’m exploring new and renewed ideas for how personal computing can better serve people’s lives — expanding opportunity, agency, curiosity, and creativity.
To me both seem quite similar to the original Smalltalk approach - there is no data, just objects that represent computational processes
My work has aspects of Smalltalk about it. You could see it as a distributed Smalltalk, although there are some differences (I have a novel distributed security/composition model, for example).
Yes, the "de-Balkanisation" part of the Object Net is very similar to the "itemised" idea - I even thought of calling stuff "items" instead of "objects", but that would actually be just as confusing to techies who know about all this, and a bit less clear to non-techies. My "objects" don't have anything to do with the "objects" of OO - they're very much about first class visible data! And I don't believe @Alexander Obenauer’s "items" are anything like "objects" either, actually.
He's involved in Tana, whose site also describes it as an "Everything OS", perhaps hinting at de-Balkanisation - allowing "items" of diverse types to be mashed up however you like, rather than being imprisoned by a dominant app for each type.
I borrowed the idea of a "Lab" from him as it's a nice framing, that frees up the mind!
Not sure if he has anything around decentralisation or declarative programming languages, but I don't remember anything like that; it seems to be mostly focused on the UX of managing local items. I'll go and have a look to remind myself - hold on ...
Hey, Ivan Reese - isn't @Alexander Obenauer your work colleague at Ink & Switch now? Why isn't he on this site these days? We could use his input in this thread to prevent my misrepresenting his ideas!
Right, AO does have some stuff on networking as pub-sub and event-action "automations":
alexanderobenauer.com/labnotes/021
alexanderobenauer.com/labnotes/025
alexanderobenauer.com/labnotes/026
alexanderobenauer.com/labnotes/027
They seem to sprout organically from the itemised (local-first) ideas, rather than being core to the project - taking up a small percentage of the Lab Notes output.
So far, we've considered an OS of the future that has these core pieces: Items, Views, Services, and Actions. With these pieces in place, user-defined automation becomes straightforward and immensely powerful: we can remix and reuse...
So far, we've explored the idea of an itemized OS a good bit in these lab notes. But a huge part of personal computing today happens beyond your local personal computing domain. Let's start moving towards the internet: What might the internet look like when you introduce items?
In the last lab note, we explored publishing items. But in that exploration, these items were mostly static: people could see an item's current state, and subscribe to any changes made in the future. Today, let's explore publishing items with behavior.
📝 LN 027: Personal Computing Network & Devices
What if we could make software modules for our own personal computing network? And what if we could add various hardware devices to our personal computing network to gain additional functionality?
I think there is an aspect of both your work and AO's that is similar - a reclaiming of a person's computational media/output in a way that can be flexibly molded to the goal/task at hand. I feel the underlying technical aspects are more of a distraction from these ideas.
I recommend this demo of a reconstituted 45 yr old Smalltalk-78 system, which seems to match both your goals - youtu.be/AnrlSqtpOkw?si=zLPeunoJj_4V0ose
Smalltalk-78 + modern networking + color support sounds a lot like what you both describe
Ivan Reese maybe I should be more explicit - as his work buddy, could you ask him to drop by here sometime? 😄
What's striking is:
Here's the paper describing the process of revival: esug.org/data/ESUG2014/IWST/Papers/iwst2014_Reviving%20Smalltalk-78.pdf
What's most shocking is that they took a photo of the tape dumping ground with the tape visible on which that Smalltalk was found! Wow. Reminds me of the gut-wrenching thought of the BBC re-using the only tapes that original episodes of Doctor Who were recorded on.
Consider a REST API. You can do an OPTIONS request to any URL, and get back the URLs of the types of the thing.
At the URLs of the types can be found the URLs of the endpoints that can take a value of this type (or a URL of the value, depending) as one or more named arguments.
Also at the type can be found the addresses for Mustache templates to display the value in a variety of standard ways, including:
The value can be requested from the URL in JSON, and then fed into any of the Mustache templates.
The place where this value is displayed can also offer the operations on the value.
This is the simple foundation for a component web.
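A rough sketch of how a client might walk that, in TypeScript (the field names `types`, `operations` and `templates` are placeholders I'm making up for whatever vocabulary the real scheme would define; it assumes a Mustache renderer such as mustache.js):

```typescript
import Mustache from "mustache"; // any Mustache renderer would do

// Rough sketch only: the field names (types, operations, templates) are invented
// placeholders, not a defined standard.
async function renderValue(valueUrl: string): Promise<string> {
  // 1. OPTIONS on any URL returns the URLs of the thing's types.
  const opts = await fetch(valueUrl, { method: "OPTIONS" });
  const { types } = await opts.json(); // e.g. ["https://example.org/types/contact"]

  // 2. At a type URL: the endpoints that can take a value of this type as named
  //    arguments, plus the Mustache templates for the standard display modes.
  const type = await (await fetch(types[0])).json();

  // 3. Fetch the value itself as JSON and feed it into one of the templates.
  const value = await (await fetch(valueUrl, { headers: { Accept: "application/json" } })).json();
  const template = await (await fetch(type.templates.card)).text();

  // 4. Wherever this render is shown, type.operations can be offered alongside it.
  return Mustache.render(template, value);
}
```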
So yes, the Smalltalk environment avoids separation of OS and apps - everything is mashable. But... it unfortunately went the Imperative way, not the Declarative way. I remember back in the 80s being really excited when I discovered Smalltalk (I think it may have been via Byte magazine - the one with the hot air balloon) - but then being absolutely dismayed to discover that message dispatch passes the flow of control into the target object! There was a thread that followed the message into the target. It seemed like such a betrayal of a potentially brilliant idea.
Found the Byte magazine: archive.org/details/byte-magazine-1981-08/page/n79/mode/2up and I'm having an emotional reminisce... 🥹
Features: p.14 Introducing the Smalltalk-80 System [Adele Goldberg] - a reader's guide to the Smalltalk articles in this issue; p.36 The Smalltalk-80...
I learned at least as much from Byte magazine as from my Computer Science major.
Everyone on FoC should be involved in this thread, to get an understanding of how we are where we are, and how we could have had so much more.
You can understand why Ted Nelson and Alan Kay are so grumpy about things
The thing I’m grumpiest about is losing the enormous potential of the relational model to the tragically awful SQL.
That's a really good "another example" of the same thing happening, yes
@Guyren Howe I don't agree about the "component web" model you suggest. But that's another thread probably...
After poking around in Urbit for a while, utterly baffled as to why anyone would give it even a cursory evaluation, let alone it having such an apparently vibrant community - is it a Ponzi scheme? what on earth do people do with it? where does the money come from for those fancy meetings? - here's an article that also has... questions:
📝 Urbit :: the good, the bad, and the insane
In this post I’m gonna be making all kinds of fun of Urbit. And all that after spending just a few hours poking around it.
All I can think is that it's a nutcase project given a huge lift-off when they decided to use Ethereum - it simply seems to be worse than vapourware: software that exists but may as well be vapour.
Duncan Cragg you should read Alan Kay's Early History of Smalltalk published in the early 90s - worrydream.com/EarlyHistoryOfSmalltalk. I get the sense that he was dismayed by the version of Smalltalk that came out of Xerox (I believe he had gone to Atari by then), and even more dismayed that nobody used it for what he thought it was really good for (to build the next version of a computing system).
Also it helps to remember that Smalltalk in the early 70s was built on today's equivalent of a $150k machine (I've seen figures that the original cost of the Alto was about $20k in 1972).
Duncan Cragg Being imperative is an easily criticized feature of Smalltalk. But at the UI level, that's what users expect. Tasks like "add a new entry to my agenda" are imperative. And since I'd expect to be able to script everything I do by hand, I also expect to have an imperative scripting language.
What I see as the mistake in the Smalltalk approach (and many others) is the idea of a single universal computational paradigm that applies from integer addition and string manipulation to user-level interactions.
Konrad Hinsen Why do you see that as a mistake? It's using the strongest possible model for everything.
That's assuming that there is a single "strongest" (or "best" in some other sense) model for everything. Which I doubt.
Konrad Hinsen you'll not be surprised to hear that I disagree! And I do think that declarative is the "model for everything" that works from low-level data transformations up to user interactions. I think Nat Lang's imperative constructs actually cause confusion here. When I say "mow the lawn!" in an imperative holler, what I'm actually doing is expressing an intention for a goal state. And I won't be satisfied until I see that state to my expectation. I have a corny phrase from back in my REST days: "intention puts the system in tension". I know, a bit cringy. But while the state isn't manifest, that tension remains, and can be communicated further: "haven't you done that bloody lawn yet??"
Declarative has been described as "say what not how" - and this is about "what" state you want, not a prescription of "how" it should be achieved
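To make that concrete with a tiny, entirely hypothetical sketch (not my language, just TypeScript-flavoured illustration): imperative is a sequence of commands you issue; declarative is an intended state the system keeps working towards.

```typescript
// Entirely hypothetical illustration, not Duncan's language.

// Imperative: a prescription of *how* - a sequence of steps you issue and forget.
function mowTheLawnImperatively(mower: { start(): void; mow(): void; stop(): void }) {
  mower.start();
  mower.mow();
  mower.stop();
}

// Declarative: a statement of *what* - an intended state that stays "in tension"
// until the observed state matches it.
type LawnState = { grassHeight: "long" | "short" };

const intention: LawnState = { grassHeight: "short" };

function satisfied(observed: LawnState): boolean {
  // "haven't you done that bloody lawn yet??" - keep nagging until this is true
  return observed.grassHeight === intention.grassHeight;
}
```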
I believe an HX (human experience, UX) built around goals and intended states is, well, better. Worst of all is where you get the two mixed up: is that button telling me the current state, or the state I'll get when I hit it?
Hello everyone I've been making a new tool called arroost. It's for making scrappy music. Please do make something in it and send it to me.
I will post some examples of its use in the replies
I'm also enjoying watching people mess around with it. Here @Elliot recreates a noisy office
Please let me know what you think! And if you make any scrappy things, please share them with me :)
I'd love to see whatever you make! It would help my development process!
I love this, it's a very interesting way to represent a sequencer, and you can create patterns that are as complex as the kinds you'd do with a DAW, but they don't suffer from the awkward "grid-ness" of traditional sequencers.
Nothing happened when I tried it: I got pluses and circles and wires between them, but I don't know what I have to do... couldn't get any sounds.
Duncan Cragg It started becoming a bit clearer for me when I realised that if you make one of the circles go red, it's recording.
before I try arroost, here are my first impressions of Olive Amphibian.mp4:
FYI - what I think I learned about songwriting: any artform - songs, movies, books, etc. - needs to create tension and then (maybe) resolve the tension. Like playing a Dsus4 chord followed by a D chord.
You can create tension on many levels, e.g. melody, chords, lyrics, phrasing, song structure (AABB vs. ABBA vs. ...), line lengths, numbers of bars, rhyming, etc., etc.
A “great song” (a “great work of art”) creates tension and release on many levels. You can keep coming back to it and hear new things every time. Not just in repeated listens, but on a time-scale of decades. Simplicity is “lack of nuance”, hence, complexity “contains nuance”. Great works of art appear to be simple, but, have subtle nuance that can be gleaned by repeated study - i.e. layered. Average art is either simple or nuanced, but, not both. Flat, not layered.
From a lyric perspective, perfect rhymes imply perfect balance, "closure" and "happiness". Imperfect rhymes imply imbalance (and sadness, confusion, longing, etc.). For perfect rhymes, I use rhymezone.com; for imperfect rhymes I use b-rhymes.com. My go-to is b-rhymes.com. It gives me 100 possibilities, and causes my mind to wander ("brainstorming"). The music that you've created with arroost is "unstable", and, IMO, perfect rhymes don't fit in with that theme.
[FYI - example. I always loved McCartney’s song “Yesterday”. I tried for decades, but, could never play it and sing it. Then someone (Pat Pattison) pointed out to me that the verse has only 7 bars, not 8. My Engineering mind tried to force singing the song as if it had 8 bars. 7 bars makes the song sound “sadder” and prevents Engineers, like me, from cloning it.]
@Alex McLean That's AMAZING! Thanks for playing! Could I ask what browser and OS you're using? I notice some visual bugs that I'd like to fix (although I'm also enjoying them here).
@Xavier Lambein Thank you! I'm very pleased that's what you notice. I'm very inspired by tldraw's sloppiness freeing people to draw, and sandspiel's sloppiness freeing people to paint. I wanted to try to extend that here
I literally have no idea how this works. I've tried everything. I tried an incognito window and turning off three plugins
Duncan Cragg do you have a microphone attached to (or part of) your computer?
Aha! I think that's broken actually, so .. it's recording sounds I make? I can try a headset, if I can find one..
Right, headset in, wait for the red circle, made a cough noise, and it seems to replay it sometimes, but... I still don't get it. Is there some cultural reference you all have that I'm missing here?
which is a form of disability in the tech world - you're not being discriminatory or exclusionary are you???? 😄
I know white male cis educated wealthy is the lowest of the low, but I should still be allowed into the Funky Soundz Club
is there some cultural reference
Not that I'm aware of. It's just a fun toy for playing with sound and thinking about time. If you give it a bit of patience and make a few attempts, you'll arrive at something that ~at least~ passes the John Cage bar of "music", if not something that my 70-yo parents or 4-yo daughter might recognize as music.
I'd be happy with the kind of soundz my 61-year-old brother makes, tbh
Alex and I were able to get fancier sounds out of this because we're both practiced computer musicians, so we have some extra tools at our disposal. But the sort of thing Elliot made is closer to (what I see as) the spirit of the thing — playing with whatever sound you have handy in a silly new way.
Paul Tarvydas here's a nice post about how to give humane, constructive feedback club.tidalcycles.org/t/some-thoughts-on-sharing-and-constructive-feedback/499
📝 Some thoughts on sharing and constructive feedback
I wanted to come up with some guidelines to support us all to give each other constructive feedback in a positive way. I hope this will encourage people to share their practice exercises and what they are working on, even if they think their work is not perfect or finished. One really great aspect of IRL workshops is peer sharing - people chatting with their neighbour, solving problems together, and building each other’s confidence. This is a tricky thing to get right online but let’s try! (Plus...
Duncan Cragg Yes, you are welcome with open arms to the funky sounds club :) The barrier to entry is very low, you just need to make something and share it
I really want to, more than I've wanted to do anything inspired by this forum. I want to show my brother that I can make funky soundz just like him.
I actually think he'd really like it, but I need to have something to show
Christ, you've popped far too many neurons in my soggy brain trying to follow that. HELPPP! Right... it's nearly 1am here, I'm going to go and sleep on it and try again tomorrow.
But do go and listen to chintzbaby.com - I should promote my bro's work cos it's actually quite good to my ears.
Heh, the truth is I couldn't get my microphone running, so I pipewired in the audio from the Funky Drummer sample from a YouTube tab. You can't really go wrong with wonky breakbeat slicing. I also had fun wiring arroost's audio back into itself for a feedback party.
👆 prime example of packing 5 alien cultural refs into just two lines on the page... I'm off to bed. 🤣
It's 1am here too, I can explain in the morning. Anyway, it is a lot of fun; it feels like a strange puzzle with a freaky sequencer as a nice reward for working it out. Although I still don't fully understand it..
I'm sure there are some bugs with the microphone, but if that works, you should hopefully have something to play with :)
The puzzle side of it is a huge part of the design. It's intentionally difficult! And it's intended to be provocative, to elicit positive (and negative) reactions. Thank you all for participating so far, this is way better than I imagined :)
Lu Wilson my microphone problem is a hardware or OS issue; some days it just doesn't work
Duncan Cragg PipeWire is an excellent Linux feature which transformed its previously messy audio stack into something fantastic... it lets you easily plug one app into another. Third-party equivalents on Mac are BlackHole and Soundflower; on Windows I think there is something called virtual cables.
Funky Drummer is a heavily sampled track by James Brown whosampled.com/James-Brown/Funky-Drummer
You can listen to a famous breakbeat from it full of James Brown grunts here youtube.com/watch?v=GACNpJfzyjs
A 'break beat' is basically a sample taken from a drum solo in a classic track like this. Many musical genres and dance music subcultures have been built on them.
Breakbeat slicing is where you cut up a breakbeat and reassemble it in a different order. Done a lot in e.g. drum n bass. Here's a code example breaking one into eight pieces and playing them in a different order, with pattern transformations:
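Something along these lines (a rough reconstruction in Strudel-style JavaScript, not the exact snippet; it assumes a breakbeat sample loaded under the name "break"):

```javascript
// Rough sketch only: cut a breakbeat into eight pieces and play them back in a
// different order, with a couple of pattern transformations layered on top.
// "break" is a made-up sample name standing in for whatever breakbeat is loaded.
s("break")
  .slice(8, "0 7 2 <3 5> 4 1 6")  // reorder the eight slices; <3 5> alternates each cycle
  .sometimes(p => p.rev())        // occasionally play the pattern reversed
```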
By feedback I meant that instead of using external audio input, I recorded audio from the audio that arroost was already playing... This got noisy quickly. Whole musical genres are based on this technique too...
🎥 10 Hours of the Funky Drummer breakbeat (James Brown)
Strudel is a music live coding environment for the browser, porting the TidalCycles pattern language to JavaScript.
(the Strudel example uses part of the Amen break that Ivan Reese mentioned earlier, which is probably the most sampled breakbeat ever)
@Alex McLean thanks for the breakdown! Every time I see Strudel I love it. Need to explore it more. Hope to meet you at a London live coding thing one day soon!
Lu Wilson that'd be great! I saw London meetups are starting up again; I'm up in Sheffield but will have to make it down soon.
I can now read your post, @Alex McLean and understand it like a true pro! Thanks so much for de-obfuscating!
I've been listening to lots of Amen stuff, also. Like it
Paul Tarvydas I worked out which video you were referring to. I think you can get more out of it by listening for polymeter. There's a ten-beat loop against a seven-beat loop, which generates really interesting shifting rhythms if you listen for it.
For anyone who's interested, I outlined one of the main philosophies behind arroost in my weekly update last week: todepond.com/pondcast/demo
Feel free to listen (or read the transcript) without paying for this one!