#63 Marginal Gains - Part 3 (2024)

  • Dangerous visions: How the quest for utopia could lead to catastrophe | Salon.com

    o Visions of utopia are ubiquitous throughout Western history. They've inspired great works of art and literature, motivated countless believers to obey God's commandments and driven some of the bloodiest conflicts in the collective biography of our species.

    o Utopian visions are also a central feature of the hype around artificial general intelligence, or AGI. In an article titled "Why AI Will Save the World," the tech billionaire Marc Andreessen writes that advanced AI systems will enable us to "take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel." The CEO of OpenAI, Sam Altman, similarly declares that with AGI "we can colonize space. We can get fusion to work and solar [energy] to mass scale. We can cure all diseases." Utopianism is everywhere in Silicon Valley.

    o The problem is that utopia has a menacing underbelly.

    o First, its pursuit can cause profound harms to those who happen to be standing in the way. This is why utopian fantasies have fueled some of the worst atrocities in history: If the means are justified by the ends, and the ends are quite literally a utopian world of infinite or astronomical amounts of value, then what exactly is off the table when it comes to realizing those ends? Companies like OpenAI have engaged in massive intellectual property theft, resulting in a slew of lawsuits, and systems like ChatGPT are built on the brutal exploitation of people in the Global South, some of whom were paid $1.32 per hour to sift through some of the most horrendous material on the web. These harms are surely worth the benefits, given that, in Altman's words, "we are only a few breakthroughs away from abundance at a scale that is difficult to imagine."

    o Second, the realization of utopia could also have catastrophic consequences, as most utopian visions are inherently exclusionary. There is always someone who is purposely left out in any imagined utopia — some undesirable group whose presence in paradise would disqualify it from counting as such. If the Christian heaven were to include atheists, for instance, it wouldn't be heaven. Hence, one should always ask who a particular utopian vision is for. Everyone, or just a select few? If the latter, which people are allowed in, and which are banished to perdition, if not sentenced to be annihilated?

    o Pretty much the only place where marginalized peoples exist in sci-fi and futurist visions has been in dystopias (and their presence is often perceived as a signifier of dystopia), because there's literally no place made for them in utopia, given the eugenic and exclusionary nature of utopianism.

    o So why do we endlessly rehash these exhausted narratives and visions of the doomed future instead of using our time, energy and talent to envision what an actual liberation for oppressed peoples and a regenerative, life-centric society could look like? This is what the real danger of both utopian and dystopian visions is: They can have a toxic effect upon our imaginations by distracting us from both present-day oppression and liberatory future possibilities.

    o The "Eremocene," or "Age of Loneliness," which describes a time when we have extinguished so many other species and become increasingly isolated as a human species on this planet — a kind of existential isolation and loneliness that results from being separated from the biosphere through this violent genocide of species and the extinction of their sensory worlds.

    o The easiest way to measure the genocidal capacity of any given utopia is to look at how it treats marginalized peoples, especially those at the intersection of indigeneity, queerness and disability.

    o When Spike Lee released a commercial about how crypto is the new money, the campaign utilized a lot of really talented, prominent Black, brown and queer creatives to promote a vision that is fundamentally about extracting from their very communities. So even though some of the people involved may have benefited from those ads, their communities were ultimately harmed by the crypto push. That's one of a million examples of predatory inclusion.

    o "The way I see it, techno-utopian visions of a colonized cosmos and transcended Earth are about finding ways to justify human and biosphere genocide happening today — in light of those grand visions, extinction of species is ultimately 'not that important.'"

    o The talk of humanity becoming "multiplanetary" is just a way to put a sci-fi smokescreen up to the media and general public—capitalism always needs a new frontier, so space colonialism is this kind of deus ex machina to distract us from the reality that there is no "infinite growth" on a finite planet, and that we need fundamental restructuring of our societies and economies based on principles of equity and justice.

    o "Many of the richest and most influential men in tech never really grew out of that teenage phase of being fanboys of particular sci-fi authors, movies or series. They cling to these sci-fi fantasies of eternal lives in the cosmic matrix, even though the bleeding edge of scientific research suggests that minds cannot just be reduced to a digital program, because our consciousness is embodied and interconnected with an ecosystem that it's codependent with.

    o Similarly, with AI, the more you talk about these visions of artificial general intelligence, the easier it is to divert attention away from the real issues of how these very fallible yet increasingly dangerous AI tools are being designed, used and abused: What bias gets embedded within them, whose data gets expropriated for them, who gets access, and what type of behavior and manipulation does this allow, and to whom?

    It's not that they don't know how to read dystopian narratives critically, or that they fully buy into technology being the magical panacea for problems that are fundamentally social, cultural and political. It's that they actually see how dystopias (sometimes disguised as utopias) can be used as product roadmaps, not just because there's money to be made while the world burns, but because there's money to be made by setting the world on fire. Dystopia is not a bug, it's a feature. It will take all of us to resist it, and to fight for the kind of future that is actually livable. We must do all we can to resist these lures of eschatological tech theologies and accelerationist fantasies, because they are designed to benefit the few, while harming, if not outright extinguishing, the rest of us.

  • As hospitals use AI chatbots and algorithms, doctors and nurses say they can’t be replaced - The Washington Post

    AI Summary: Mount Sinai Hospital is using AI software and education to shape the future of medicine, but some healthcare workers are worried about the technology replacing humans and making wrong diagnoses. AI algorithms are used to spot abnormalities in X-rays, CT scans, and MRI images, as well as to predict whether a specific patient will suffer from an ailment. There is concern that the technology is being overhyped and used without regulation, and that it could be used as an excuse to cut staff. Some medical professionals are also worried about bias in the technology. Despite these concerns, AI is being used to identify patients at risk of issues such as sepsis or falling, spot breast cancer, and flag those who are likely to be malnourished. Ultimately, the goal is not to replace health workers, but to get the right doctor to the right patient at the right time.

    o AI readings of mammograms detected 20 percent more cases of breast cancer than radiologists did — results that reinforced the conviction that AI is the future of medicine.

    o Researchers are also working to translate generative AI, which backs tools that can create words, sounds and text, into a hospital setting. Mount Sinai has deployed a group of AI specialists to develop medical tools in-house, which doctors and nurses are testing in clinical care. Transcription software completes billing paperwork; chatbots help craft patient summaries.

    o The advances are triggering tension among front-line workers, many of whom fear the technology comes at a steep cost to humans. They worry about the technology making wrong diagnoses, revealing sensitive patient data and becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.

    o In March, the University of Kansas health system started using medical chatbots to automate clinical notes and medical conversations. The Mayo Clinic in Minnesota is using a Google chatbot trained on medical licensing exam questions, called Med-PaLM 2, to generate responses to health care questions, summarize clinical documents and organize data, according to a July report in the Wall Street Journal.

    o “You cannot transplant people,” Fuchs said. “But you can transplant knowledge and experience to some degree with these models that then can help physicians in the community.”

    o Critical care physicians are piloting predictive software to identify patients who are at risk of issues such as sepsis or falling.

    The ultimate goal is not to replace health workers, but something simpler: getting the right doctor to the right patient at the right time. But some medical professionals aren’t as comfortable with the new technology.

    o Though AI can analyze troves of data and predict how sick a patient might be, Mahon has often found that these algorithms can get it wrong. Nurses see beyond a patient’s vital signs, she says. They see how a patient looks, smell unnatural odors from their body and can use these biological data points as predictors that something might be wrong.

  • The Next Next Job, a framework for making big career decisions at andrewchen

    AI Summary: Andrew Chen presents the "Next Next Job" framework for making big career decisions. This framework involves asking the question, "What do you want to be your next next job? And why can't you get it right now?" to evaluate potential job opportunities. He suggests that people look at the big picture to make sure they are choosing the right job for their long-term career goals. He also advises to look for a "superpower" that might be necessary for the job, as well as any gaps that need to be filled. He shares a personal story of how he used the framework to make a big decision, and encourages others to take their time when making a career-defining decision.

    o Here’s my favorite question to ask:

    “What do you want to be your next next job? And why can’t you get it right now?”

    And then, of course, you work backward from that. This is the “Next Next Job” framework for thinking about career moves, particularly in the highly chaotic situations we find ourselves in today, where there are many, many opportunities across different industries and company stages.

    o On the other hand, often the next next job isn’t attainable and it’s for good reasons. Maybe you’ve only worked at a series of failed startups, and you need a “shiny” role or two that helps add some credentialing. Or perhaps you’re in marketing and interested in becoming a PM but aren’t yet close enough to the engineers and the technical details. Perhaps you’ve never managed anyone, and want a role to demonstrate strong managerial ability before jumping into a team lead role. Identifying these gaps can help form the basis for evaluating potential job opportunities — which ones help fill them better and faster.

    o For someone interested in investing as their next next job who has held substantial internal-facing roles at successful startups, I often find the list looks something like this:

    · The next next job: Become a professional investor

    · Gap: Need to develop a personal brand for other external-facing networks

    · Gap: Haven’t done any angel investing

    · Gap: Need to develop opinions on cutting-edge spaces

    · Potential superpower: Get in the dealflow of recent spinouts/alumni of my previous companies

  • Ralph Waldo Emerson: On Love, Beauty and the Purpose of Life – Excellence Reporter

    AI Summary: Ralph Waldo Emerson was a 19th century American essayist, lecturer, philosopher, and poet who led the transcendentalist movement. He wrote about life, love, beauty, and purpose. He believed that one should live life for themselves and not be a slave to the past. He encouraged people to take risks and be brave to pursue their dreams. He also believed in the power of nature, and encouraged others to find beauty in it. He wanted people to be courageous, enthusiastic, kind, and to leave the world a better place. Emerson also said that it is easy to live for others, but it is difficult to live for oneself, and that is the greatest accomplishment.

    o “The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well. It is not the length of life, but the depth that matters.

    o We are always getting ready to live, but never living. What lies behind you and what lies in front of you, pales in comparison to what lies inside of you. Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true.

    o It is easy to live for others, everybody does. I call on you to live for yourself. To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment. Don’t waste yourself in rejection, nor bark against the bad, but chant the beauty of the good. Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood. Be yourself; no base imitator of another, but your best self. There is something which you can do better than another. Listen to the inward voice and bravely obey that. Do the things at which you are great, not what you were never made for.

    o Be not the slave of your own past. Plunge into the sublime seas, dive deep and swim far, so you shall come back with self-respect, with new power, with an advanced experience that shall explain and overlook the old. Happiness is a perfume you cannot pour on others without getting some on yourself. Make your own Bible. Select and collect all the words and sentences that in all your readings have been to you like the blast of a trumpet. Whatever you do, you need courage. Whatever course you decide upon, there is always someone to tell you that you are wrong. There are always difficulties arising that tempt you to believe your critics are right. To map out a course of action and follow it to an end requires some of the same courage that a soldier needs. Peace has its victories, but it takes brave men and women to win them.

    o To laugh often and much; to win the respect of intelligent people and the affection of children; to earn the appreciation of honest critics and to endure the betrayal of false friends. To appreciate beauty; to find the best in others; to leave the world a bit better whether by a healthy child, a garden patch, or a redeemed social condition; to know that even one life has breathed easier because you have lived. This is to have succeeded! Without ambition one starts nothing. Without work one finishes nothing. The prize will not be sent to you. You have to win it. Shallow men believe in luck or in circumstance. Strong men believe in cause and effect. Write it on your heart that every day is the best day in the year. He is rich who owns the day, and no one owns the day who allows it to be invaded with fret and anxiety. Nothing great was ever achieved without enthusiasm. A hero is no braver than an ordinary man, but he is brave five minutes longer.

    o What I must do, is all that concerns me, not what the people think. This rule, equally arduous in actual and in intellectual life, may serve for the whole distinction between greatness and meanness. It is the harder, because you will always find those who think they know what is your duty better than you know it. It is easy in the world to live after the world’s opinion; it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.

    o He who is in love is wise and is becoming wiser, sees newly every time he looks at the object beloved, drawing from it with his eyes and his mind those virtues which it possesses.

    o Though we travel the world over to find the beautiful, we must carry it with us or we find it not. Never lose an opportunity of seeing anything beautiful, for beauty is God’s handwriting.

    This is my wish for you: Comfort on difficult days, smiles when sadness intrudes, rainbows to follow the clouds, laughter to kiss your lips, sunsets to warm your heart, hugs when spirits sag, beauty for your eyes to see, friendships to brighten your being, faith so that you can believe, confidence for when you doubt, courage to know yourself, patience to accept the truth, Love to complete your life.

  • Automating creativity - by Ethan Mollick - One Useful Thing

    AI Summary: AI is now able to beat humans on creativity tests, and three recent experimental papers have demonstrated that AI can generate creative ideas in real-world situations. The papers found that AI-generated ideas were more feasible and impactful than human-generated ideas, and AI-assisted humans created stories that were judged as significantly more novel and interesting. Humans still have a large role to play in innovation, but should include AI in the process due to its creative ability. The AI works best when given fewer, more specific prompts and constraints, and people should experiment to find what works best. AI acts as a powerful creative engine, and its use now gives many people access to good ideas that used to only be available to a few.

    • AI can generate creative ideas in real-life, practical situations. It can also help people generate better ideas.

    • The ideas AI generates are better than what most people can come up with, but very creative people will beat the AI (at least for now), and may benefit less from using AI to generate ideas

    • There is more underlying similarity in the ideas that the current generation of AIs produce than among ideas generated by a large number of humans

    • We still don’t know how original AIs actually can be, and I often see people argue that LLMs cannot generate any new ideas. To me, it is increasingly clear that this is not true, at least in a practical, rather than philosophical, sense. In the real world, most new ideas do not come from the ether; they are based on combinations of existing concepts, which is why innovation scholars have long pointed to the importance of recombination in generating ideas. And LLMs are very good at this, acting as connection machines between unexpected concepts. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper connections. Add in the randomness that comes with AI output, and the result is, effectively, a powerful creative ability.

      Where previously, there were only a few people who had the ability to come up with good ideas, now there are many. This is an astonishing change in the landscape of human creativity, and one that likely makes execution, not raw creativity, a more distinguishing factor for future innovations.

  • A.I. and climate change: Why A.I. tools like ChatGPT use so much energy. (slate.com)

    o There’s a big reason why every company hoping to deal in some way with artificial intelligence is either spending or raising billions upon billions of dollars right now, and it’s not just investor hype. These stacks of cash are necessary for meeting the costs of building, training, and maintaining resource-intensive (and resource-lacking) power-sucking content generators like ChatGPT, as well as the resource-intensive, power-sucking data sets, neural networks, and large language models, or LLMs, they’re trained on—such as OpenAI’s GPT-4, whose API was recently made public to paying customers with coding expertise.

    o The collective demand for GPUs has escalated such that Nvidia will be sold out of treasured units like its H100 through the rest of the year. In the meantime, some cryptocurrency enthusiasts are repurposing their power-draining mining machines for the A.I. training cause, and Google is banking on its TPUs (tensor processing units, which were invented by Google specifically to handle compute requirements for machine learning tech).

    o “We believe the largest training runs today employ hardware that cost in the single digit millions of dollars to purchase,” including GPUs and TPUs. And, as stands to reason, advanced A.I. models not only used hundreds of such units, but they also employed higher-performing versions of these units.

    o The stuff that allows ChatGPT to pen inadmissible legal briefs and error-laden blog posts in mere seconds uses a lot of hardware that transmits a lot of electricity. And if these tools are any good at the moment, it’s because the data sets on which they’re trained are only getting bigger and bigger—and the physical infrastructure that they run on has to bulk and scale up as well.

    o This is key to understanding why the A.I. sector looks the way it does: It’s primarily controlled by Big Tech corporations that own varied and ample resources, dependent on large and consistent cash influxes, hopeful about long-evasive moonshots in fields like quantum computing and nuclear fusion, dismissive of smaller competitors who can’t hope to catch up to the bigger firms’ staggering advancements, and secretive about the technical factors behind their energy inputs.

    o Neural networks have been around for a while, but what makes the Transformer unique is that, per Google, when it comes to detecting language patterns and contexts, it “requires less computation to train” than previous types of neural networks. You could feed a Transformer a lot more information than prior neural models by feeding in units of data known as “tokens”—which the network can process, understand, and memorize in an economical manner—while using much less energy, time, and money than what less-slick neural networks may require. This is why current A.I. models have better predictive and generative capabilities: Many are now trained on hundreds of billions of these tokens, which thus establishes billions of “parameters,” aka the “synapses” of neural networks (more on that later).
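    The token-and-parameter relationship described above can be illustrated with a toy example. Real systems use subword tokenizers such as BPE; whitespace splitting here is only a stand-in for the idea that text becomes discrete units, and that training cost scales with how many of those units the model consumes:

```python
# Toy stand-in for a tokenizer. Real LLMs use subword schemes (e.g. BPE),
# but the principle is the same: text is converted to a sequence of
# discrete tokens, and compute cost grows with the token count.
text = "Neural networks have been around for a while"
tokens = text.split()
print(len(tokens), tokens[:3])  # 8 ['Neural', 'networks', 'have']
```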

    o The “Generative Pre-trained” innovation is what OpenAI had added to Google’s invention by 2018. “Pre-trained” refers to how the OpenAI Transformer has been fed a particular data set—in GPT models’ instance, troves of text scraped from books and webpages—that the system processes in order to establish itself as “learned” in various language patterns and contexts, denoted in parameters. “Generative” refers to these models’ capability to, naturally, generate text that’s (often) legible and (sometimes) sensible, based on what they’ve been pre-trained on through the Transformer.

    o Every part of this process requires ample energy. Some academics, when discussing the carbon footprint of A.I., focus on all parts of developing the tech—everything from sourcing the materials required to shipping them through supply chains to the flights that individual A.I. researchers take to collaborate with one another or gather at conferences. But, for our purposes, let’s keep it simple and focus on the process that occurs from text system training to final output, as tested and deployed in a laboratory that has the pieces assembled and ready.

    o First, the data. In A.I., much text data is scraped online from various websites (like Slate) in a bulk-collection method that often spikes the number of requests sent to a given site and can overwhelm its servers—in effect, outsourcing the energy usage to the millions of sites being scraped. The scraped data must be stored somewhere; Microsoft and other companies delving further into A.I. are constructing bigger “hyperscale” data-center campuses, often in big cities or in European regions with colder weather, the latter providing the advantage of naturally moderating the operational temperatures of these data centers.

    o The tech analysis firm Tirias Research estimates that global data-center power consumption could increase by 21,200 percent in five years, running up operational costs in excess of $76 billion (in today’s dollars). To meet this skyrocketing energy demand in a sustainable manner, we’re gonna need a lot more renewable energy.
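    As a back-of-envelope check on that projection, a 21,200 percent increase over five years means consumption ends up at 213 times its starting level, which works out to consumption roughly tripling every year:

```python
# Tirias Research projection quoted above: +21,200% over five years.
increase_pct = 21_200
years = 5
total_factor = 1 + increase_pct / 100          # 213x overall
annual_factor = total_factor ** (1 / years)    # constant yearly growth compounding to 213x
print(round(annual_factor, 2))  # 2.92 -> consumption nearly triples each year
```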

    o As researchers from Meta and from academia noted in a paper from May, “large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences.” In other words: There’s the first step of shoveling in mounds of data that the model grows and learns from, and then there’s the question of further tweaking the model after the first “pre-training” is complete.

    o This includes refining and expanding the model after the fact, through processes like fine-tuning and reinforcement learning from human feedback, or RLHF. The former refers to the technical practice of adding more real-world example data for the LLM’s benefit, so that it establishes a wider breadth of knowledge without starting training all over again; RLHF is the means by which a human content trainer assists training, whether by grading certain bits of output or feeding refined data that will (hopefully) help to produce a desired result.

    o Fine-tuning takes place at the research and development end, but RLHF has more reach: It’s the masses of underpaid workers labeling bits of data to make it easier for the computer to learn factual things, and us humans telling ChatGPT why its summary of energy history was wrong, wrong, wrong. In fact, much of the reason ChatGPT existed in the first place was so that OpenAI could hasten the improvement of the model it was working on—in the chatbot’s case, GPT-3—and take it to the next level.
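    The human-grading step described above can be sketched in a few lines. This is a hypothetical illustration of the kind of data an RLHF pipeline collects, not OpenAI's actual code; the function and field names are invented:

```python
def record_preference(prompt: str, output_a: str, output_b: str, human_pick: str) -> dict:
    """Store a human comparison of two model outputs as a (chosen, rejected)
    pair -- the raw material a reward model is later trained on."""
    if human_pick not in ("a", "b"):
        raise ValueError("human_pick must be 'a' or 'b'")
    chosen, rejected = (output_a, output_b) if human_pick == "a" else (output_b, output_a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# A labeler prefers the accurate summary over the wrong one.
pair = record_preference(
    "Summarize the history of energy",
    "A careful, accurate summary.",
    "A summary that gets the facts wrong.",
    human_pick="a",
)
print(pair["chosen"])  # A careful, accurate summary.
```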

    o But when it comes to making ChatGPT more competent, drawing on willing volunteer trainers isn’t an automatic cost-cutter. Unlike fine-tuning, which directly futzes with the mechanics of a neural network, 100 million users doing RLHF means that the model is also being simultaneously deployed for use—it’s being applied to the real world, through an action known as “inference.”

    o Per SemiAnalysis’ own calculations, “ChatGPT costs $694,444 per day to operate in compute hardware costs,” equating to about 0.36 cents per query.

    o All of that is on top of the cost it took merely to prepare ChatGPT as you know it. According to A.I. analyst Elliot Turner, the compute cost for the initial training run alone probably summed up to $12 million—200 times the cost of training GPT-2, which only had 1.5 billion parameters. In early 2021, researchers from Google and the University of California–Berkeley estimated that merely training GPT-3 consumed up to 1,287 megawatt-hours of electricity, enough to power about 360 homes for a year—all before you get into the inference. And this is all for text output, mind you; the energy and emissions tolls go way up when you get into image and video generation.
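    A quick sanity check on the scale of the operating figure quoted earlier: SemiAnalysis's $694,444-per-day estimate annualizes to roughly a quarter-billion dollars in compute hardware costs alone:

```python
daily_cost = 694_444              # USD/day, SemiAnalysis estimate quoted above
yearly_cost = daily_cost * 365
print(f"${yearly_cost:,}")        # $253,472,060
```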

    o There’s a big reason why every company hoping to deal in some way with artificial intelligence is either spending or raising billions upon billions of dollars right now, and it’s not justinvestor hype. These stacks of cash arenecessaryfor meeting the costs of building, training, and maintaining resource-intensive (and resource-lacking) power-sucking content generators like ChatGPT, as well as the resource-intensive power-sucking data sets, neural networks, and large language models, or LLMs, they’re trained on—such as OpenAI’s GPT-4, whose API wasrecently made public to paying customerswith coding expertise.

    o The collectivedemand for GPUshas escalated such that Nvidia will besold out of treasured units like its H100through the rest of the year. In the meantime, some cryptocurrency enthusiasts arerepurposing their power-draining mining machinesfor the A.I. training cause, and Google is banking on its TPUs (tensor processing units, which were invented by Google specifically to handle compute requirements for machine learning tech).

    o “We believe the largest training runs today employ hardware that cost in the single digit millions of dollars to purchase,” including GPUs and TPUs. And, as stands to reason, advanced A.I. models not only used hundreds of such units, but they also employedhigher-performingversions of these models.

    o The stuff that allows ChatGPT to pen inadmissible legal briefs and error-laden blog posts in mere seconds uses alotof hardware that transmits alotof electricity. And if these tools are any good at the moment, it’s because the data sets on which they’re trained areonly getting bigger and bigger—and the physical infrastructure that they run on has to bulk and scale up as well.

    o This is key to understanding why the A.I. sector looks the way it does: It’s primarilycontrolled by Big Tech corporationsthat own varied and ample resources, dependent onlarge and consistent cash influxes, hopeful about long-evasive moonshots in fields likequantum computingandnuclear fusion, dismissive ofsmaller competitorswho can’t hope to catch up to the bigger firms’ staggering advancements, secretive about the technical factors behind its energy inputs.

    o Neural networks have been around for a while, but what makes the Transformer unique is that, per Google, when it comes to detecting language patterns and contexts, it “requires less computation to train” than previous types of neural networks. You could feed a Transformer alotmore information than prior neural models by feeding in units of data known as “tokens”—which the network can process, understand, and memorize in an economical manner—while using much less energy, time, and money than what less-slick neural networks may require. This is why current A.I. models have better predictive and generative capabilities: Many are now trained onhundreds of billions of these tokens, which thus establishes billions of “parameters,” aka the “synapses” of neural networks (more on that later).

    o The “Generative Pre-trained” innovation is what OpenAI had added to Google’s invention by 2018. “Pre-trained” refers to how the OpenAI Transformer has been fed a particular data set—in GPT models’ instance, troves oftext scraped from books and webpages—that the system processes in order to establish itself as “learned” in various language patterns and contexts, denoted in parameters. “Generative” refers to these models’ capability to, naturally, generate text that’s (often) legible and (sometimes) sensible, based on what they’ve been pre-trained on through the Transformer.

    o Every part of this process requires ample energy. Some academics, when discussing the carbon footprint of A.I., focus onallparts of developing the tech—everything from sourcing the materials required to shipping them through supply chains to the flights that individual A.I. researchers take to collaborate with one another or gather at conferences. But, for our purposes, let’s keep it simple and focus on the process that occurs from text system training to final output, as tested and deployed in a laboratory that has the pieces assembled and ready.

    o First, the data. In A.I., much text data is scraped online from various websites (like Slate) in abulk-collection methodthat often spikes the number of requests sent to a given site and can overwhelm its servers—in effect, outsourcing the energy usage to themillions of sites being scraped. The scraped data must be stored somewhere; Microsoft and other companies delving further into A.I. are constructing bigger “hyperscale” data-center campuses, often inbig citiesor in European regions withcolder weather, the latter providing the advantage of naturally moderating the operational temperatures of these data centers.

    o The tech analysis firm Tirias Research estimates that global data-center power consumption could increase by 21,200 percent in five years, running up operational costs in excess of $76 billion (in today’s dollars). To meet this skyrocketing energy demand in a sustainable manner, we’re gonna need a lot more renewable energy.
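For a sense of scale, here is a quick back-of-envelope on what a 21,200 percent increase over five years implies as an annual growth rate (only the headline figure comes from Tirias Research; the compounding math is ours):

```python
# Back-of-envelope: annual growth rate implied by a 21,200% increase
# over five years (the Tirias Research headline figure).
total_increase_pct = 21_200
growth_multiple = 1 + total_increase_pct / 100   # 213x overall
cagr = growth_multiple ** (1 / 5) - 1            # compound annual growth rate
print(f"{growth_multiple:.0f}x over 5 years ≈ {cagr:.0%} per year")
```

In other words, the estimate implies consumption nearly tripling every single year.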

    o As researchers from Meta and from academia noted in a paper from May, “large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences.” In other words: There’s the first step of shoveling in mounds of data that the model grows and learns from, and then there’s the question of further tweaking the model after the first “pre-training” is complete.

    o This includes refining and expanding the model after the fact, through processes like fine-tuning and reinforcement learning from human feedback, or RLHF. The former refers to the technical practice of adding more real-world-example data for the LLM’s benefit, so that it establishes a wider breadth of knowledge without starting training all over again; RLHF is the means by which a human content trainer assists training, whether by grading certain bits of output or feeding refined data that will (hopefully) help to produce a desired result.
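The core data structure behind RLHF is simple enough to sketch. This is a toy illustration of the idea only, not OpenAI's actual pipeline: a human ranks two candidate outputs, and the resulting preference pair becomes training signal for a reward model.

```python
# Toy sketch of the RLHF idea (not any lab's real pipeline): a human
# labeler compares two model outputs and records which one is better.
candidates = [
    "The sun is a star.",          # output A
    "The sun orbits the Earth.",   # output B
]

# The labeler prefers A over B; store the judgment as a preference pair.
preference = {"preferred": candidates[0], "rejected": candidates[1]}

# A reward model is then trained so that:
#     reward(preferred) > reward(rejected)
# and the LLM is tuned to produce outputs that maximize that learned reward.
print(preference["preferred"])  # The sun is a star.
```

Collected at scale, millions of such pairs are what "grading certain bits of output" amounts to in practice.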

    o Fine-tuning takes place at the research and development end, but RLHF has more reach: It’s the masses of underpaid workers labeling bits of data to make it easier for the computer to learn factual things, and us humans telling ChatGPT why its summary of energy history was wrong, wrong, wrong. In fact, much of the reason ChatGPT existed in the first place was so that OpenAI could hasten the improvement of the model it was working on—in the chatbot’s case, GPT-3—and take it to the next level.

    o But when it comes to making ChatGPT more competent, drawing on willing volunteer trainers isn’t an automatic cost-cutter. Unlike fine-tuning, which directly futzes with the mechanics of a neural network, 100 million users doing RLHF means that the model is also being simultaneously deployed for use—it’s being applied to the real world, through an action known as “inference.”

    o Per SemiAnalysis’ own calculations, “ChatGPT costs $694,444 per day to operate in compute hardware costs,” equating to about 36 cents per interaction.
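Those two SemiAnalysis figures imply a daily interaction volume, which is easy to check:

```python
# Sanity check on SemiAnalysis' figures: daily compute cost divided by
# cost per interaction gives the implied number of daily interactions.
daily_cost = 694_444          # dollars per day in compute hardware costs
cost_per_interaction = 0.36   # dollars per interaction
interactions_per_day = daily_cost / cost_per_interaction
print(f"~{interactions_per_day:,.0f} interactions per day")  # ~1,929,011
```

That works out to roughly 1.9 million interactions a day, every one of them burning inference compute.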

    o All of that is on top of the cost it took merely to prepare ChatGPT as you know it. According to A.I. analyst Elliot Turner, the compute cost for the initial training run alone probably summed up to $12 million—200 times the cost of training GPT-2, which only had 1.5 billion parameters. In early 2021, researchers from Google and the University of California–Berkeley estimated that merely training GPT-3 consumed up to 1,287 megawatt-hours of electricity, enough to power about 360 homes for a year—all before you get into the inference. And this is all for text output, mind you; the energy and emissions tolls go way up when you get into image and video generation.
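Two of those figures can be unpacked with simple arithmetic (the inputs are from the source; the division is ours):

```python
# Quick checks on the training figures cited above.

# 1) If GPT-3's run cost ~$12M and that's 200x GPT-2's, GPT-2's cost was:
gpt3_training_cost = 12_000_000              # dollars, per Elliot Turner
gpt2_training_cost = gpt3_training_cost / 200
print(f"Implied GPT-2 training cost: ${gpt2_training_cost:,.0f}")  # $60,000

# 2) 1,287 MWh powering ~360 homes for a year implies, per home:
training_energy_mwh = 1_287                  # GPT-3 training, 2021 estimate
homes_powered = 360
per_home_mwh = training_energy_mwh / homes_powered
print(f"Implied annual usage per home: {per_home_mwh:.2f} MWh")    # ~3.58 MWh
```

The 200x jump in training cost between consecutive model generations is the pattern worth noticing here.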

  • Cash and bonds are different investments (monevator.com)

    o Bonds are not the same as cash. Confusing the two is a bit like mixing up a bicycle with a unicycle. Yes, both have wheels. But one will give you a much smoother ride than the other.

    o “I wouldn’t necessarily say cash either, cash has not generated such good returns as fixed interest over the very long term, so you’re better off probably suffering a liquidity risk with fixed interest investments, rather than cash.”

    o Cash has several key attributes:

    o Cash is the least risky asset class. Cash is king!

    o Cash doesn’t fluctuate in value (except versus other currencies).

    o Cash pays a varying rate of income that shifts with market interest rates, competition between banks, and so on.

    o Cash is extremely liquid. You can typically transfer it from one person to another instantly, without any trading costs. And you can withdraw it from a current account on demand.

    o There are special protections for cash savings accounts for consumers. See the Financial Services Compensation Scheme.

    o Very long-term returns from cash are poor – in the UK only about 1% ahead of inflation.
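A quick worked example of what "about 1% ahead of inflation" compounds to (the 1% figure is from the article; the horizons are illustrative):

```python
# What a ~1% real (after-inflation) return on cash compounds to over time.
def real_growth(real_return, years):
    """Multiple of purchasing power after compounding at a given real return."""
    return (1 + real_return) ** years

for years in (10, 30, 50):
    print(f"{years} years at 1% real: x{real_growth(0.01, years):.2f}")
```

Even over 50 years, cash at that rate only multiplies purchasing power by around 1.6x, which is why it is described as a poor long-term return.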

    o Turning to bonds:

    o Bonds fluctuate in value – the price of a bond goes up and down between its issue and its eventual redemption. This makes bonds riskier. (See my old piece on what causes bond prices to vary).

    o Bonds can default, which also makes them riskier than cash. Highly rated UK government bonds are assumed to be risk-free (because the government can always print more money) but they are still riskier than cash, which has no default risk. Corporate bonds are much riskier than cash.

    o Bonds are less liquid than cash. You’ll need to buy and sell your bonds via a broker, who will charge a fee.

    o Bonds pay a fixed interest rate (usually).

    o Bonds repay their par value on redemption (unless they default, and without getting into the complications of linkers).

    o With government bonds your protection comes down to the ability of the issuing government to meet its obligations. (And separately, any investor protections that apply to the platform you’re holding the bonds on.)

    o Very long-term returns (50 to 100 years, say) from bonds are better than cash, but timing plays a part over the short to medium term.

    o Similarly, if you want to mix up the non-equity part of your portfolio, then diversifying beyond bonds into cash (or vice versa) is a logical first step.

    o But similar is not the same as identical. And as soon as you add any meaningful duration to the bonds in question, the differences become pretty clear.
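A minimal sketch of that duration effect, pricing a hypothetical 10-year fixed-coupon bond at two different yields (the numbers are illustrative, not from the article):

```python
# Minimal sketch: price a plain fixed-coupon bond as the present value
# of its annual coupons plus its par redemption. Illustrative only.
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of annual coupons plus redemption at par."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A 10-year 3% bond priced at par, then repriced after yields rise to 5%:
p_before = bond_price(100, 0.03, 0.03, 10)
p_after = bond_price(100, 0.03, 0.05, 10)
print(f"{p_before:.2f} -> {p_after:.2f}")  # roughly 100.00 -> 84.56
```

A roughly 15% capital loss from a two-point yield rise is exactly the kind of fluctuation cash never experiences, which is the whole point of the comparison.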

    o Both cash and bonds are valuable assets precisely because they can play different roles in your portfolio. (Yes, even after the bond rout of the past 18 months.)

    o Cash and bonds are not the same.
