What it means to be an iguana: the Jaccard index

In the end, what does it really mean to be an iguana, and how could you tell?

The big thing in language these days is distance-based representations of semantics.  The idea is that the meaning of a word can be discussed in terms of its closeness to, or distance from, other words.

How the hell would you measure that?  Current approaches to distance-based semantics are based on something called the distributional hypothesis–the idea that a word’s meaning is, in essence, the set of words that it occurs with.  (With which it occurs, if you prefer.)  When you have sets, you can calculate the distance between (or closeness between–it doesn’t matter what you call it) those sets.  I’ll give you an example of this in which we’ll use a distance metric (metric, in this case, means a number that measures something) called the Jaccard index.  It’s based on counting the number of things that two sets have in common and then adjusting it with respect to the total number of things in the sets.

Let’s walk through the intuitions behind the Jaccard index.  The first intuition: the more things that you share with another set, the more similar to that set you are.  Let’s think about two sets of words:

Set 1 fur eat pet play ball
Set 2 fur eat pet sleep mouse

What do those two sets share?

  1. fur
  2. eat
  3. pet

That’s three things.  Now let’s look at Set 1 again, versus a third set:

Set 1 fur eat pet play ball
Set 3 scales eat sun sleep climb

How many things do they share?

  1. eat

Based just on the counts of things that these three sets have in common, you might say that Set 1 and Set 2 are the most similar to each other, since they have the most things in common.

Now, it’s a bit more complicated than this.  Think about these two pairs of sets, and tell me which you think is closer: Set 3/Set 4, or Set 1/Set 2?  Here’s Set 1/Set 2 again:

Set 1 fur eat pet play ball
Set 2 fur eat pet sleep mouse

…and here’s Set 3/Set 4:

Set 3 scales eat sun sleep climb
Set 4 scales eat sun sleep climb strike hiss bird molt brumate

To brumate: similar to hibernating, but the state of dormancy is not as deep.

Set 1/Set 2 share 3 things.  Set 3/Set 4 share even more–5 things.  But, how much more similar does that make them?  I’m going to suggest that it’s not as much as you might think.  The reason is that how much Set 3 and Set 4 share has to be weighed against the fact that Set 4 has more things in it than any of the other sets have.

Picture source: https://goo.gl/zjuxaC

How can we take this difference in the set sizes into account?  We’ll do something called “normalizing” the count of the things that they share: we’ll make it relative to the sizes of the sets that we’re comparing.  How we’ll calculate the sizes of the sets: we’ll count up the total number of words that you would get if you added both sets of words together, and only counted each unique word one time.  We’ll go back to Sets 1 and 2:

Set 1 fur eat pet play ball
Set 2 fur eat pet sleep mouse

What are the unique words in the combination of both sets?

  1. fur
  2. eat
  3. pet
  4. play
  5. sleep 
  6. ball
  7. mouse

There are 10 total words in the two sets, but if you only count each word once–each unique word, that is to say–you have 7.  Now let’s look at Sets 3 and 4, this time counting the unique words that are found in the combination of the two sets:

Set 3 scales eat sun sleep climb
Set 4 scales eat sun sleep climb strike hiss bird molt brumate
  1. scales
  2. eat
  3. sun
  4. sleep
  5. climb
  6. strike
  7. hiss
  8. bird
  9. molt
  10. brumate

To normalize the number of things that two sets of things have in common by the total number of types of things in the set, we divide the number of things that they have in common by the total number of things.  So, for Set 1 and Set 2:

3 things in common / 7 types of things = 0.43

For Set 3 and Set 4:

5 things in common / 10 types of things = 0.50

…and those are the Jaccard indexes for Set 1 and Set 2, and for Set 3 and Set 4.

Let me give you one more pair: Set 1 and Set 1.  If you calculate the similarity between a set and itself, you get a value of 1.0.  What you should take from that is that the range of values for the Jaccard index is from 0.0 to 1.0.  Knowing that, you have a point of reference: if the Jaccard index is close to 1.0, then the two things are very similar (because identical things give you a Jaccard index of 1.0).  On the other hand, if two things are very different, then you’ll have a Jaccard index that’s close to zero.  This might seem obvious, but imagine if there were no upper limit on how big the Jaccard index could get.  What would 20 mean?  What would 4,808 mean?  Who the hell knows?  Metrics in the range of 0.0 to 1.0 are the ginchiest.
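All of that arithmetic fits in a few lines of code.  Here is a minimal Python sketch, using the toy sets from the post; the function name `jaccard` is mine, not from any standard library:

```python
# A minimal sketch of the Jaccard index: shared things divided by
# the total number of unique things in the two sets combined.
def jaccard(a, b):
    """Size of the intersection divided by the size of the union."""
    return len(a & b) / len(a | b)

set1 = {"fur", "eat", "pet", "play", "ball"}
set2 = {"fur", "eat", "pet", "sleep", "mouse"}
set3 = {"scales", "eat", "sun", "sleep", "climb"}
set4 = {"scales", "eat", "sun", "sleep", "climb",
        "strike", "hiss", "bird", "molt", "brumate"}

print(round(jaccard(set1, set2), 2))  # 3 shared / 7 unique = 0.43
print(round(jaccard(set3, set4), 2))  # 5 shared / 10 unique = 0.5
print(jaccard(set1, set1))            # identical sets give 1.0
```

Note that comparing a set with itself gives exactly 1.0, which is the upper end of the 0.0-to-1.0 range discussed above.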

So, now that we have a way to quantify the similarity (or difference) between two sets based on the quantity of things that they share, normalized by the total quantity of things: suppose that those things are the other words that some word occurs with.  If you replace the names of the sets like this:

  1. Set 1: dog
  2. Set 2: cat
  3. Set 3: iguana
  4. Set 4: snake
We’ve looked at one measure of similarity/difference–the Jaccard index–but, there are others. Here’s a small sample. Picture source: http://images.slideplayer.com/24/6982424/slides/slide_54.jpg

…then you could imagine the words in those sets being the words that dog, cat, iguana, and snake occur with.  When we calculate our numbers, we end up with dog being more like cat than it is like iguana or snake.  In contrast, our numbers are consistent with the idea that iguanas are more like snakes than they are like dogs or cats…and that’s one way that you can think about quantifying the similarities between the meanings of words.
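To see the whole ranking at once, here is a sketch that computes every pairwise Jaccard index, reusing the post’s toy sets as stand-ins for real co-occurrence data (in practice these sets would come from a corpus, not from my imagination):

```python
# Pairwise Jaccard indexes for the four animal words, treating each
# word's set as the words it occurs with (toy data from the post).
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

contexts = {
    "dog":    {"fur", "eat", "pet", "play", "ball"},
    "cat":    {"fur", "eat", "pet", "sleep", "mouse"},
    "iguana": {"scales", "eat", "sun", "sleep", "climb"},
    "snake":  {"scales", "eat", "sun", "sleep", "climb",
               "strike", "hiss", "bird", "molt", "brumate"},
}

for w1, w2 in combinations(contexts, 2):
    print(f"{w1}/{w2}: {jaccard(contexts[w1], contexts[w2]):.2f}")
```

Running this, dog scores highest with cat, and iguana scores highest with snake, which is exactly the ranking described above.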

John Rupert Firth. Picture source: Public domain, photographer unknown.

These particular kinds of distributional representations of word meanings go back to the 1950s and the work of John Firth, who famously (OK: famously among linguists) said that “you shall know a word by the company it keeps.”  Distributional representations suddenly became popular in the language processing world (surprising, to some extent, because the language processing world is populated much more by computer scientists than by linguists) a few years back, for two reasons:

  1. Thanks to the Internet, we now have access to quantities of textual data that are big enough to be able to calculate reliable quantities–you need a lot of data to actually make this kind of approach work.
  2. People have recently had some success with figuring out ways to do these calculations efficiently enough to handle those enormous quantities of data without bringing every supercomputer in the world to its knees.  If you tried to do something like this naively, you would be calculating the similarity between every word and every other word; no one actually knows how many words there are in (to take one example) English, but you’re probably talking about a table with 10,000,000,000 cells in it.  A few years ago people came up with a couple of ways of reducing that number drastically, and that makes it practical to do the calculations and to store their results.  (If you could do one calculation a second, 10,000,000,000 of them works out to over 300 years.)  Now my laptop can crunch the numbers for a few million words or so worth of text overnight.
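For a sense of scale, the table-size arithmetic can be checked in a couple of lines; the 100,000-word vocabulary here is just an illustrative assumption, since, as noted above, nobody knows the real number:

```python
# Back-of-the-envelope arithmetic for the naive pairwise approach.
# vocab_size is an assumption for illustration, not a real count.
vocab_size = 100_000
cells = vocab_size ** 2              # one similarity score per pair of words
print(f"{cells:,}")                  # 10,000,000,000

seconds_per_year = 60 * 60 * 24 * 365
years = cells / seconds_per_year     # at one calculation per second
print(round(years))                  # roughly three centuries of computing
```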

We’ve talked about calculating the Jaccard index today (shared things divided by total things), and calculating it on the basis of words.  That’s a very straightforward way of doing this–the Jaccard index is the simplest distance metric that I know of, and words are the easiest things to count.  However: words are actually much more difficult to count in real life–or even to define–than they seem to be in the examples that we looked at, and there are lots of other things that one could count that might work out better.  There are also different ways to define what counts as “occurring together.”  To give you some examples of the kinds of questions that you need to think about in doing this kind of thing:

  1. Words: What is a unique word?  Do you want to count Dog and dog as the same word, even though one starts with an upper-case letter, and the other starts with a lower-case letter?  Do you want to count reproduisisse, reproduisît, reproduisissions, reproduisissiez, and reproduisissent (the forms of the imparfait du subjonctif of the French verb reproduire, “to reproduce”) as the same word?  How about pet peeve and bête noire–do those count as one word, or two?  Do you want to count bete noire as the same word as bete noir, bête noire, and bête noir?  (More generally: do you count an incorrectly spelled word as the same word as its correctly-spelled equivalent?  If so: how the hell do you spell-correct the Internet?)
  2. Things to count: Do you want to count $1, $2.25, 50%, and 75% as 4 different things?  Maybe you want to consider them all as numbers, in which case there is just 1 “thing?”  Maybe you want to count $1 and $2.25 as prices, and 50% and 75% as percentages, in which case there are 2 things?
  3. What “occurring together” means: Is it occurring in the same sentence?  The same newspaper article?  The same book?  Maybe it means occurring within two words to the right or within two words to the left–i.e., occurring within the four surrounding words?
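As one illustration of how these decisions interact, here is a sketch of the “within two words to the right or left” definition of occurring together; the lowercasing step is exactly the kind of choice that question 1 is about, and both choices here are mine, not anyone’s standard:

```python
# One possible operationalization of "occurring together": for each
# word, collect the words within `window` positions of it.
from collections import defaultdict

def context_sets(tokens, window=2):
    """Map each (lowercased) word to the set of words occurring near it."""
    contexts = defaultdict(set)
    for i, word in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                contexts[word.lower()].add(tokens[j].lower())
    return dict(contexts)

tokens = "the dog chased the cat and the cat chased the mouse".split()
print(sorted(context_sets(tokens)["dog"]))  # ['chased', 'the']
```

Change the window size, the lowercasing, or the tokenization, and you get different sets, and therefore different Jaccard indexes; that is precisely why these questions matter.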

…and that’s the kind of thing that will keep graduate students busy for the next 5 years or so, unless something else becomes au courant in the meantime (au courant discussed below in the French and English notes), in which case all of the grad students who were betting their careers on the latest cool thing will be spending some time engaging in some serious nombrilisme and then either starting all over again or quitting grad school and going into building better search engines for Twitter or something.  Welcome to my world.

For more information on distance-based semantics and its alternatives, see Elisabetta Jezek’s The lexicon: an introduction.

I got into this 2400-word little essay in the course of trying to come up with a way to respond to a series of comments on my last post in which we got into a discussion of whether or not the English word bete noire means the same as the English word pet peeve (see how I snuck an assumption in there about how many “words” are in bete noire and pet peeve?)  Obviously I went down a bit of a rabbit-hole here.  More on the bete noire/pet peeve thing some other time, if Trump doesn’t nuke some country because the president said something mean about him (remember how he was saying that Hillary wasn’t “tough enough?”) and bring the world as we know it to an end, along with all of the electricity.  A quick discussion of some relevant French and English words follows.


French and English notes

au courant: This expression exists in both English and French, but with different uses in the two languages.  In French, it means something like “up to date,” and is used to describe people.  In English, it can be used in the same way, but is also (and I think more commonly, although I don’t have the data to demonstrate this, one way or the other) used to describe things, in which case it means something like “in fashion.”  Additionally, in English, this is a very high-register word–you wouldn’t use it with just anybody.  Here are some French examples from the frTenTen12 corpus, a collection of 9.9 billion words of French scraped off of the Internet that I searched via the Sketch Engine web site:

  • Nous tiendrons nos lecteurs au courant de cette tentative…
  • …que Dieu t’entende pas petite Marie Bon courage et tiens nous au courant
  • Vous êtes au courant de ces dangers, vous devez donc protéger votre PC contre toutes intrusions.
  • …mea culpa, je n’étais pas au courant
  • …ni les Etats-Unis ni l’URSS n’ont été au courant de cet événement…
  • Peut-être que le jeune mutant était au courant , aussi elle décida de l’attendre devant la porte.

I like the second-to-last one, because it describes two countries there, rather than the two people that you would expect.

To find examples of au courant in English, I went to the enTenTen13 corpus, a collection of 19.7 billion words of English-language text, which, again, I accessed through Sketch Engine.  Here is some of what I found:

  • …a library of au courant phraseology and jabber…
  • Where once the adage “Things go better with bacon” was au courant, “Things go better with cheese” is timeless.
  • That isn’t to say that paisley prints are reserved solely for custom-fitted, au courant French fashion houses; just the opposite.
  • Pappardelle is the au courant cut of pasta right now…
  • It’s all very au courant , yet it’s not at all.
  • Being au courant can be its own sort of stultifying endgame.

Comparing the experience of putting these two lists together, I can tell you that I had to hunt to find examples of au courant in French where it wasn’t modifying a human, and I had to hunt to find examples of au courant in English where it was modifying a human (my last example here is the closest that I came).  Here’s how it was used in the post: That’s the kind of thing that will keep graduate students busy for the next 5 years or so, unless something else becomes au courant in the meantime, in which case all of the grad students who were betting their careers on the latest cool thing will be spending some time engaging in some serious nombrilisme and then either starting all over again or quitting grad school and going into building better search engines for Twitter or something.  

Suggested readings for Week 5

If you’re a regular reader of this blog, you probably have some interest in language–language in the abstract, or le langage in French, as opposed to (or in addition to) any particular language, or la langue in French. I’m reblogging here the reading list for a week of a course on language processing that I’m teaching at the moment. The theme of the week is data in language processing: what you (might) mean when you talk about “data” with respect to language; what kinds of data there are; where that data comes from; and how to make some data if you can’t find the kind of data that you need.

I’m posting this particular reading list because I often suspect that many people who know that I’m a linguist imagine that I spend my days sitting around discussing how funny irregular verbs are, or how cool it is that French has three verbs that mean “go back,” or whatever. What you’ll find on this list has very little to do with coolness or lack thereof, and a lot to do with data formats, data set sizes, statistics, and a bit on ethics. Personally, I find this stuff fascinating–but, it’s often worth getting a glimpse at what we call in my field “the sausage-making process.” Enjoy! (Or go watch the latest episode of “The Walking Dead”–it’s pretty good.)

Natural Language Processing

Here are some suggested readings for Week 5.  Remember that I do not distribute my lecture notes.  Note also that you are responsible for all of the material on which I lecture.  These readings are not required, but they are intended to cover everything that I talk about in our lectures (modulo the caution in the preceding sentence).  All of them are available for free on line except for the books (although the Good and Hardin book is available for free, as well).  All of them should be available in an academic library.  Feel free to contact me if you have trouble finding a copy of either.


The zombie apocalypse and education in the computational sciences

How to respect both logical positivism and the zombie apocalypse while educating computer scientists.

zombilingo.org, a web site that supports research on what linguists call the “heads” of groupes nominaux (“noun phrases,” in English).

In my professional life, one of my pet peeves is scientific discussions that involve the verb to believe.  For example:

  • …we believe that [joint circumscription] will be important in some AI applications.  (John McCarthy, Circumscription–A form of non-monotonic reasoning, publication date unclear) 
  • We believe ontologies are key requirements for building context-aware systems… (H. Chen, T. Finin, and A. Joshi, An ontology for context-aware pervasive computing environments, 2003)
  • We believe enzyme-loaded erythrocytes may have therapeutic possibilities for several diseases.  (Ihler et al. 1973, Enzyme loading of erythrocytes, which I should note has been cited over 300 times nonetheless)

I have actually been–on multiple occasions–cautioned against using formulations like Je pense que… (“I think that…”) in some professional situations in France, as it’s considered a sign of having a position that you’re not actually confident that you can defend.  (Native speakers, can you comment on this?)

I’m not shy about bringing up my problems with the verb to believe in any discussion in which I find myself that claims to be scientific, be those lab meetings or reviews of papers/grants/whatever.  I would not label myself as a logical positivist, but I try to always keep in mind the potential logical positivist position–it’s not a bad foundation for a philosophy of science.  (See, I didn’t say I think that it’s not a bad foundation for a philosophy of science–I flat-out asserted it.  In academic writing, I would follow that assertion with a few credible citations.)

Follow these links for more information on the zombie apocalypse and…

In light of that tendency of mine towards the empirical and the epistemological, students are often surprised to learn of my concerns regarding the upcoming zombie apocalypse.  Clearly, zombies are something about which I have no empirical data, and one would have to classify the upcoming zombie apocalypse as something about which I have beliefs, but not knowledge, and therefore outside of the realm of something that I would talk about in my professional life.  So, yes: students are surprised when I bring it up.  (As far as I can tell, my French colleagues just think I’m crazy, or chalk it up to some quirk of the Anglo-Saxon psyche, or something.  I actually have no clue what my American colleagues think.)

Here’s the thing: the zombie apocalypse is an engaging point of entry into the problem of making robust systems.  In the context of computer programming, you could think of “robustness” as the ability of a program to deal with the unexpected–making speech recognition systems that will work in a crowded restaurant (impossible 20 years ago, not unusual today), or building sentence analyzers that won’t reformat your hard drive if someone passes them a sentence in Uzbek. In particular, the upcoming zombie apocalypse is an engaging entry point to the problem of how to think about making robust systems.  The issue is that a major contributor to robustness is planning for unanticipated inputs (I had English in mind when designing my sentence analyzer, and then someone gave it a sentence in Uzbek) or operating conditions (I never thought about someone trying to use my speech recognition system with a lot of noise in the background).  Seulement voilà–the thing is–it’s the nature of unanticipatedness that we have trouble coming up with the unanticipated.  Even more fundamental a problem: we often have trouble getting into the mindset of taking seriously the very idea that unanticipated inputs or operating conditions are even plausible.  In fact, they are; but how do you get students to think about something that is, a priori, difficult to conceptualize?  Posing the question as how will your approach work when the zombie apocalypse comes? typically leads to a laugh–and seems to give one a way to think seriously about what kinds of things might happen that you haven’t actually thought about yet.  To think seriously about things that it’s difficult to think about by means of thinking non-seriously about things that don’t exist, you might say.  You might say that–if you haven’t really thought about the upcoming zombie apocalypse.
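To make the robustness point concrete, here is a toy sketch of the defensive mindset; the analyzer and its ASCII-only heuristic are hypothetical stand-ins, not anyone’s real system:

```python
# A toy sketch of planning for unanticipated inputs: assume input you
# never planned for WILL arrive, and fail safely instead of misbehaving.
def analyze_sentence(sentence):
    """Split an English sentence into tokens, refusing input we never planned for."""
    if not isinstance(sentence, str) or not sentence.strip():
        raise ValueError("expected a non-empty string")
    # Crude guard: this toy analyzer only claims to handle ASCII English.
    if not sentence.isascii():
        raise ValueError("input does not look like the English I planned for")
    return sentence.split()

print(analyze_sentence("The dog chased the cat."))
try:
    analyze_sentence("Ўзбекча матн")  # Cyrillic input the analyzer never anticipated
except ValueError as err:
    print("rejected:", err)
```

The guard is deliberately crude; the point is that the unexpected path raises a clear error rather than quietly doing something wrong.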


English notes

pet peeve: something that annoys a specific person a lot.  To call something a pet peeve, it should be rather specific to that person, especially with respect to how much it bothers them or how sensitive they are to it.  For example, traffic jams wouldn’t really be anybody’s pet peeve–everybody is annoyed by traffic jams.  However, traffic jams caused by trash trucks doing their collections during rush hour could be someone’s pet peeve–say, if they notice them more than most people would, in a situation where most people don’t particularly care whether or not a traffic jam was caused by a trash truck; most people are equally annoyed by all traffic jams.  How it was used in the post: In my professional life, one of my pet peeves is scientific discussions that involve the verb “to believe.”

French notes

la robustesse: robustness.  You can use this in a lot more ways in French than in English.  For example:

  • Hardiness would probably be the English-language equivalent here, where we’re talking about plants and their illnesses: Différentes maladies peuvent entraîner un flétrissement des tubercules qui se traduit, à son tour, par une perte de robustesse des plants.  (Source: Sketch Engine web site)
  • Toughness would probably be the equivalent here, where what’s being discussed is fabric: Ce tissu se distingue par sa robustesse, sa longévité et son confort.  (Source: click here)

Да, да: How to irritate a linguist, Part 3

There are soooo many ways to annoy a linguist–here’s a good one.

Favorite thing that I ever heard anyone say in Bulgarian: “Ani says that our lipstick should be as red as the bottom of the flag.”  Picture source: http://www.worldstatesmen.org/Bulgaria.html

My airport shuttle pulled into a truckstop in Bulgaria, someplace between Sofia and Hissarya, so that we could all get out and stretch our legs.  I paid the nice little old bathroom attendant my 50 stotinki and walked to the stalls, feeling proud of myself because I knew how to recognize the men’s room and the ladies’ room in Bulgarian Cyrillic.  My hand froze as I reached for the door when I heard the nice little old lady scream другата, другата!!!–I spoke just enough Bulgarian to know that that means the other one, the other one!!!  You know what they say: pride comes before a fall.


For your amusement, here is an article with many correct observations, and nothing but incorrect conclusions.  The article is about the English language.  The incorrect conclusions start with the first sentence, where one is stated quite clearly, and then they continue through the body of the piece, becoming increasingly more academic/less clear.  (En clair: “in plain language, in plain English; not scrambled (e.g. a TV show).”)  For my amusement, here are some comments while I wait for the coffee to finish.  Quotes are in italics.

First sentence: English is a difficult language.  What would it even mean for a language to be “difficult?”  That children can’t learn to speak it natively?  No such language exists.  That you had trouble with it in high school?  That’s not exactly a convincing form of evidence.  That you think that people with differently-colored skin/from different parts of the country/from different social classes than you don’t speak it correctly?  Fuck you.

Second sentence: It’s irregular (teachers taught, preachers praught?), single words take on multiple meanings (‘set’ has 464 definitions in the Oxford English Dictionary) and its pronunciation is fiendish (cough, though, bough, through, tough, plough).  I love the “teachers taught, preachers praught” thing–super cute–but have no idea why one would think that irregularity was any more difficult than anything else.  From a hearer’s perspective, the discriminative power of irregulars is much higher.  To think about it from a computational perspective: irregulars don’t require any processing–you just consult the equivalent of a mental “dictionary.”  In contrast, regulars require that you chop things up (e.g. if it were teached, you’d have to separate teach and -ed) and then figure out what they were (the -ed of kissed is not pronounced the same as the -ed of hugged), and then look up what they meant.  You can argue that the claim that I’m making here about increased discriminative power for the hearer means increased memory load for the speaker, and you’d be right–which is why linguists don’t bother talking about this kind of thing very much.  You quickly come back to the question that I raised about the first sentence: what would it even mean to be a “difficult language?”
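The processing asymmetry is easy to sketch; the tiny lookup table and suffix rule below are toy stand-ins for the “mental dictionary” and the chopping-up step, not a real morphological analyzer:

```python
# Irregulars: one dictionary lookup, no analysis.  Regulars: segment
# the suffix off and then figure out what the remainder is.
IRREGULAR_PAST = {"taught": "teach", "went": "go", "was": "be"}

def find_stem(word):
    if word in IRREGULAR_PAST:     # irregular: straight lookup
        return IRREGULAR_PAST[word]
    if word.endswith("ed"):        # regular: chop off the suffix...
        return word[:-2]           # ...and hope the remainder is a stem
    return word

print(find_stem("taught"))  # teach
print(find_stem("kissed"))  # kiss
print(find_stem("hugged"))  # hugg  <- and segmentation isn't even this easy
```

The last line is the point: even the “simple” regular route immediately runs into problems (doubled consonants, spelling changes) that the irregular lookup never has.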

The author continues:

This isn’t only about our famous English sarcasm. It’s a sign that the words themselves are not as reflective of our thought as, perhaps, they should be. When I say, ‘I beg your pardon’, I can mean ‘I apologise’, ‘I didn’t hear you’, or most probably ‘I’m absolutely fuming at what you said’. No wonder English is famed for being such a tricky language to learn, if what we’re trying to get across is based really on our tone of voice more than anything else.

Does the author really believe that this is something that is unique to English?  As far as I can tell: yes, he does.  But: it isn’t.  Here’s an example from Bulgarian:

Да, да.

What those words are: yes, yes.  What that means: No.  You can tell the difference from the intonation.  Is that kind of thing interesting?  Absolutely.  Is it in any way unique to English?  No.

…single words take on multiple meanings…  From my perspective, that is a confusion about what a “word” is.  If a word is a relationship between a sound and a meaning, then the thing to say here would be that there are multiple words in the language, some of which sound the same as other words.  If that’s the case: big deal.  Do you know of any languages where that’s not the case?  I don’t.

…and that, my friends, is yet another way to irritate a linguist: say dumb shit about how special Language X is because you don’t know anything about any other language, and therefore don’t actually have anything to compare it to.  (The author does mention Mandarin, but what he says about it is too uninformed for me to respond to before I’ve had another cup of coffee.)  Back to reviewing grant proposals–computational linguistics isn’t all beer and pétanque…

How defending Trump is like defending domestic abuse

From a linguistic perspective, defenses of Trump have a lot in common with domestic abuse. Here’s how that works.

Source: https://goo.gl/kQeAju

Data point: back in the United States, our new President has been gleefully violating our Constitution, or at least trying to, to the very best of his ability.  The heart of American political philosophy is expressed in the First Amendment to the Constitution:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

What our new President’s been up to:

  • Banning people from entering the United States based on their religion.  Quote from December 7th, 2015: Donald J. Trump is calling for a total and complete shutdown of Muslims entering the United States.  (Yes, he is enough of an asshole to refer to himself in the third person.)
  • Attacking the press.  Tweet from February 17th, 1:48 PM: The FAKE NEWS media (failing …) is not my enemy, it is the enemy of the American People!
Source: https://goo.gl/YswwKk

It interests me that many of his supporters defend his un-American actions based on the argument that he’s “just” doing what he said he would do.  There seems to be some implicit claim that if you say that you’re going to do it, then it’s OK to do it.  Some examples:

  • “He was simply doing what he said he was going to do in the campaign,” Paul Hess told the Times.  (source)
  • President Trump is, after all, just doing what he said he would do. And, in a representative democracy, that’s something to be respected.  John E. Stafford, letter to the New York Times
  • Amid all the wailing and gnashing of teeth, President Trump reminded us that he was just doing what he said he would do.  “We have really done a great job. We’re actually taking people that are criminals, very, very hardened criminals in some cases, with a tremendous track record of abuse and problems, and we’re getting them out,” Trump said. “And that’s what I said I would do. I’m just doing what I said I would do.”  Sean Hannity (source) (includes video of Trump saying exactly that)

As far as I know, this is not the case–either from a legal perspective, or from an ethical perspective.  If I say to you If you drink my Coke again, I will punch you in the face, I’m going to be arrested if I do, in fact, punch you in the face–having said in advance that I was going to do it does not make it legal.  It does not make it ethical, either.

“I warned her I would kill her if she went with other boys,” he added. He said that Sunday afternoon she went to a show with another boy and that “she broke her promise at other times.” “I kept my promises and she broke hers. I loved her very much,” he added.  –Source: https://goo.gl/X05M2x

The whole phenomenon reminds me of the stereotype of domestic abusers: This is your fault–I told you I would hit you if you talked to him again.  I told you I would whip you if you didn’t come straight from school.  I told you I would kill the kids if you tried to leave me. Do domestic abusers actually do that kind of thing?  Read the quotes.

Now, there’s an interesting little linguistic thing going on in the quotes from the Trump defenders.  Let’s look at the quotes again:

  • “He was simply doing what he said he was going to do in the campaign,” Paul Hess told the Times.  (source)
  • President Trump is, after all, just doing what he said he would do. And, in a representative democracy, that’s something to be respected.  John E. Stafford, letter to the New York Times
  • Amid all the wailing and gnashing of teeth, President Trump reminded us that he was just doing what he said he would do.  “We have really done a great job. We’re actually taking people that are criminals, very, very hardened criminals in some cases, with a tremendous track record of abuse and problems, and we’re getting them out,” Trump said. “And that’s what I said I would do. I’m just doing what I said I would do.”  Sean Hannity (source) (includes video of Trump saying exactly that)

Note the words simply and just.  They have a very specific function here.  You can think of it as justification through minimization: their goal is to communicate the idea that what follows is not a bad thing, specifically by minimizing it relative to things that would admittedly be bad.  It’s quite complex, because it starts with a concession–with an implicit agreement that if you had done what the other person states that you did, then it would have been bad.  But, that concession is then followed with an argument that what you did was not, in fact, that, and since it wasn’t that, then you are meant to accept that it was not bad. 

The English words just and simply have a lot of meanings.  This particular meaning gets used in a couple very particular ways.  I’ll give you the more complicated one first, because it’s actually easier to see how they function in the more complicated case:

  • Yes, I did sorta take some of your sandwich, but I really just tasted it.
  • Mom, I didn’t hit him–I just touched him hard.
  • Don’t get mad at what I said–it was just a joke.
  • I’m not being mean–I’m simply stating a fact.  You are fat, old, and bald.

The structure of all of them works something like this: there’s this thing that you think that I did, and if I had done it, sure, maybe that would have been bad.  But, I didn’t–I did some lesser thing, and since it’s not the bad thing that you mentioned, it’s not bad.  So:

  • Yes, I did sorta take some of your sandwich, but I really just tasted it.  The structure: if I had really taken some of your sandwich, then that would have been bad–but, I didn’t.  I did a lesser thing: I tasted it.  Since that’s not the bad thing, then it’s not bad.
  • Mom, I didn’t hit him–I just touched him hard.  The structure: if I had hit him, then sure–that would have been bad.  But, I didn’t hit him–I did something less than that, and since it’s not that bad thing, then it’s not bad.

This is a fallacious argument.  Suppose that there is some bad thing–let’s call it X.  The fact that something is not X does not mean that it is not bad.  The fact that something is “less” than X does not mean that it’s not bad, either.  But, that’s exactly the implication behind the whole “he’s just doing what he said he would do” attempt at a justification.  In fact, “just” is being used here without the concession–it’s pure minimization.  It’s adding to the assertion he’s doing what he said he would do an adverb that is meant to convey that the doing is something less than something else–specifically, less than bad.  

At some point, the current insanity is going to end–America always rights herself, eventually.  How?  Who knows?  Maybe Trump will throw one of his little hissy fits and resign.  Maybe he’ll nuke somebody, and somebody will nuke him, and the world as we know it will end.  These days, it’s tough to be surprised.  One thing that I am, however, sure of: history is not going to look kindly on this period, and it’s not going to look kindly on the people who supported Trump.  Are they all going to go to jail?  Of course not.  Are their grandchildren going to be ashamed of them?  Probably.  You have a choice to make here: collaborate, talk back, or just keep your head low.  There’s only one of those that you won’t be ashamed to tell your grandkids about.