What is the music on the UK Depop TV advert December 2023?

Demonic by Gameific

https://open.spotify.com/track/5AmGb3rUTZyqTD1VS6NGnE

Surprisingly, this was not on the Depop YouTube channel, so thanks to Shazam for finding it.

The music has a somewhat sinister but catchy Electronic Dance Music feel, with ethnic instrument sounds and lyrics in a non-English language (peppered with English words like ‘supersonic’ and ‘peace out’) that I presume is Hindi or a related language, given the artist is from Mumbai.

The throat clearing near the start gives a live, less highly produced, fresh feel, and the musical scale conveys an edginess suited to the brand, which operates in the ‘circular’ recycled fashion sector and invites a more diverse audience than some of its competitors.

How the floating “Slickback” levitating trick works step by step

This tutorial is now available as a thirty-second short video with frame-by-frame analysis of how the left and right feet move to achieve the levitation effect.

The video shows how attention is drawn to the leading front foot, which appears to hover in the air, while the trailing back foot is the one that in fact does the work to maintain the highly effective illusion of walking on air.

Since October 2023, TikTok users have been captivated by the dance video posted by Korean student Lee Hyo-cheol, where he seemingly glides magically off the ground.

The so-called Slickback dance move, named after the song “A Pimp Named Slickback” that it is set to, now has several videos devoted to it, racking up over 2 billion views.

Lee moves in a way that makes it seem like he is able to step so hard on the air that he can get purchase on it and float above the ground.

Lee modified a dance move created by YouTuber and TikTokker Jubi2fye in early 2022. The “Jubi Slide”, as it is known, looks somewhat like a sideways moonwalk.

The new move focuses more on the striking out of the heels than the sliding, “moonwalk” movement and traverses some distance on the ground rather than being confined to a single spot.

Links to Lee’s original TikTok @wm87.4 account video and a “tutorial” by @gkocoach are available at:

https://cnalifestyle.channelnewsasia.com/trending/slickback-jubi-slide-tiktok-dance-korean-student-floating-feet-376886

What is qkv in LLM transformers? What does it do? How does it work?

There are several great explanations of how the encoder-decoder transformer described in the paper Attention Is All You Need works, e.g. The Transformer Model.

A fundamental concept is attention, i.e. the adding of context and meaning to individual words by considering each word against each of the other words surrounding it.

For example, if a sentence contains the word ‘bank’ then the presence of ‘money’ in the same sentence suggests it means a financial institution rather than a river bank.

Attention is implemented in code using query (q), key (k) and value (v) vectors, and there are some analogies, like this one, which regards the key/value/query concept as similar to retrieval systems. For example, “when you search for videos on YouTube, the search engine will map your query (text in the search bar) against a set of keys (video title, description, etc.) associated with candidate videos in their database, then present you the best matched videos (values).”
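
As a rough sketch of that analogy in code (the video titles, the `search` function and the word-overlap scoring below are all invented purely for illustration – neither YouTube nor a transformer actually works this way):

```python
# Keys (video titles) mapped to values (the videos themselves)
videos = {
    "relaxing piano music": "video_A",
    "cat compilation 2023": "video_B",
    "cat plays piano": "video_C",
}

def search(query: str) -> str:
    """Return the value whose key best matches the query (crude word overlap)."""
    q_words = set(query.lower().split())
    best_key = max(videos, key=lambda title: len(q_words & set(title.split())))
    return videos[best_key]

print(search("funny cat piano"))  # -> "video_C"
```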

This analogy raises as many questions as it answers, however:

  • What is the query? – the whole of the text we provide to the AI interface (ChatGPT, Bard, etc.) or just a single word?
  • How is the query stored in a matrix?
  • What kind of key is the query matched against?
  • How does the key relate to the value?
  • Is the value something we get as an output from the process or something the model already possesses from previous training?
  • What is the point of the value we get from this whole process? How does it relate to generating a response to the text we enter into things like ChatGPT or Bard?

To answer these questions, I found this video, which provides a great practical explanation of the model and is demonstrably correct, since it is implemented as working code with a small-scale test case that can be trained and run in minutes on the free Google Colab platform.

Karpathy (1:04:15, 1:08:00) describes the query (q) as “what am I looking for” (“I” being a single token from the input sentence) and the key (k) as “what do I contain”, so the dot product q.k (where k is the set of keys of all the tokens in the input sentence) becomes the affinity between the tokens of the input. Where a token’s query vector aligns with the key vector of another token, the token ‘learns’ more about it (aggregates its feature information into its position).

The value (v) is the “thing that gets aggregated for the purpose of the particular head of attention” that the q, k and v matrices form. Ultimately, the purpose of value is to appropriately weight the token affinities (q.k) so that the product q.k.v is able to sufficiently distinguish token sequences and hence allow the most appropriate next word to be predicted (by the very last ‘softmax’ component of the decoder).
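
To make this concrete, here is a minimal single-head self-attention sketch in PyTorch, in the spirit of Karpathy’s Colab demo (the batch, token, embedding and head sizes are arbitrary toy values chosen purely for illustration):

```python
import torch

torch.manual_seed(1337)
B, T, C, head_size = 1, 4, 8, 16   # batch, tokens, embedding size, head size (toy values)
x = torch.randn(B, T, C)           # stand-in token embeddings for a 4-token sentence

# Linear projections produce q ("what am I looking for"),
# k ("what do I contain") and v (what gets aggregated by this head)
query = torch.nn.Linear(C, head_size, bias=False)
key = torch.nn.Linear(C, head_size, bias=False)
value = torch.nn.Linear(C, head_size, bias=False)
q, k, v = query(x), key(x), value(x)              # each (B, T, head_size)

# Affinities: every token's query dotted with every token's key
wei = q @ k.transpose(-2, -1) * head_size**-0.5   # (B, T, T)

# Decoder-style causal mask: a token may only attend to itself and earlier tokens
tril = torch.tril(torch.ones(T, T))
wei = wei.masked_fill(tril == 0, float('-inf'))
wei = torch.softmax(wei, dim=-1)                  # each row sums to 1

# Aggregate: each token gathers the values of the tokens it attends to
out = wei @ v                                     # (B, T, head_size)
print(wei[0])                                     # the token-to-token affinity matrix
```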

To put all this into more pithy and understandable terms:

  • There is a query, a key and a value matrix for each ‘head’ of attention, i.e. a way of characterising the relationship between words (e.g. relations between tokens, relations between pairs of tokens, relations between groups of 4 tokens, etc.)
  • Q contains the ‘word mix’ (more accurately, token mix) from our input text, (a) at a particular word position in the text and (b) constrained to a fixed number of sequential words (e.g. 4, as “hard-coded” for our particular LLM implementation): e.g. the 4 words from our input text “I like learning about artificial intelligence” at position 1 would be “I like learning about”.
  • K contains the features that this same set of words has – one feature might be e.g. “is a doing word”
  • Q.K gives us a representation of the meaning of the input word mix by aggregating the features each input word has (K) against the features each input word is looking for (Q). So “I” might look for the “doing word” feature and “like” and “learning” would offer that feature. In the matrix dot product for the “doing” feature, “I”, “like” and “learning” would shine. This product is also called the compatibility matrix since it captures how compatible each word is with every other word and hence the extent to which the features of the compatible words should be baked into each query word.
  • We need a consistent way of storing meaning for the computer since two different human language sentences or ‘word mixes’ could yield the same deep meaning (e.g. “a likes b” and “b is liked by a”) and vice versa i.e. two identical sentences could give a different deep meaning depending on what words came before them. Q.K gives us that.
  • V contains the values or weights for each word’s features e.g. we can imagine features like
    • is an action
    • is a thing
  • When V is then multiplied by Q.K we get a numeric matrix that we can use to represent the meaning of the word mix. Subsequent steps in the model can then predict (from the model’s historic training data of English sentences / knowledge) which word likely comes next after encountering this particular meaning – see the sketch after this list.
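
As a toy illustration of that final prediction step (the sizes and the `lm_head` name here are hypothetical, and a real decoder stacks many attention heads, layers and feed-forward networks before this point), the head output can be projected to vocabulary scores and turned into next-word probabilities by the final softmax:

```python
import torch

torch.manual_seed(0)
B, T, head_size, vocab_size = 1, 4, 16, 50    # toy sizes
out = torch.randn(B, T, head_size)            # stand-in for the wei @ v head output above

# A linear "language model head" maps each position to a score per vocabulary word
lm_head = torch.nn.Linear(head_size, vocab_size)
logits = lm_head(out)                         # (B, T, vocab_size)
probs = torch.softmax(logits, dim=-1)         # the decoder's final softmax
print(probs[0, -1].argmax())                  # id of the most likely next token
```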

Hope this helps – if you find a better explanation or ‘intuition’ of qkv please do leave a comment!

Wonka (2023) film review – AI Kermode take one!

If you’ve listened to the Mark Kermode and Simon Mayo podcast or watched their YouTube channel, you’ll already know which of their opinions is the best barometer of your own.

I enjoy watching their discussions about movies, but sometimes wish I could jump straight to their main conclusions about a film, to see whether it might be for me and hence whether their full review is worth watching, without ploughing through the whole episode.

I used Google Bard’s recent YouTube video summarisation skill to try to get exactly this, and you can see its results in this ultra-short 54-second video.

If it piqued your interest, check out the full Kermode and Mayo video – and see how good a job Bard did.

Or check out their podcasts (requires a subscription):

https://open.spotify.com/show/13ZnvaTeGK9WTQy19T8Ep3

Which Oral-B cleans best: Pro 3 3000 v 3500 v iO? What does Whitening mode do? CrossAction v Precision v FlossAction?

This blog is now available as a 59-second short video.

If your old toothbrush has seen better days but you’re confused about which Oral-B model and brushing mode offers the best tooth cleaning and value for money, you’re not alone.

My old Braun Oral-B TriZone electric toothbrush – type 3756 shown on base
Still working after 10 years but battery only lasts 2-3 days

There are 3 things to consider – model, mode and brush heads.

The Oral-B Pro 3 and iO models are popular choices, but what’s the difference, and does the more expensive iO clean better?

Well, the Pro 3 3500 is the same as the 3000 but includes a travel case and only comes in pink and black. The iO range has a magnetic motor, which is quieter, but its heads are over twice the price, it is not proven to clean better than the Pro 3, and it is not compatible with the heads of earlier Oral-B models.

The Pro 3 and iO both have the “3D Action” of oscillation (moving back and forth), rotation and pulsation (vibrating against the tooth), and also a “Whitening” mode which varies the oscillation speed up and down for deeper cleaning. There are further modes; however, all Pro 3 and iO models also have “Daily Clean” and “Sensitive” modes, and these 3 modes are sufficient for most people.

Finally, onto heads. CrossAction heads have angled bristles for better cleaning, though they may be less comfortable than the basic Precision heads, and the FlossAction head has micro-pulse bristles for better interdental cleaning, though it doesn’t replace flossing. Some prefer the DeepSweep or TriZone head, which is shaped more like a manual toothbrush.

Old Oral-B Pro 2 2000 (TriZone head) v new Pro 3 3000 (CrossAction head)

Diagrams and details are available at electricteeth.com, animated-teeth.com and oralb.com:

https://www.electricteeth.com/uk/oral-b-cleaning-modes-explained/

Oral B models compared (electricteeth.com)

https://www.animated-teeth.com/electric_toothbrushes/oral-b-electric-toothbrush-models.htm

https://www.oralb.co.uk/en-gb/oral-health/why-oral-b/electric-toothbrushes/best-toothbrush-head-for-you