Anonymous ID: a7636b Feb. 11, 2026, 4:20 p.m. No.24247192   🗄️.is 🔗kun   >>7367 >>7660 >>7891 >>7933

BRIDGE OVER TROUBLED WATER: PM Carney Reacts After Trump Vows To Block the Opening of New Bridge Between Canada and the US

 

A bridge that doesn’t unite, but separates the two countries.

 

The troubled bilateral relations between the US and Canada have a new point of contention, as Donald J. Trump has threatened to block the opening of a controversial bridge connecting the two countries.

 

Yesterday (Feb. 10), Canadian Prime Minister Mark Carney said he ‘looks forward to the opening’ of the multibillion-dollar project.

 

ABC News reported:

 

“In a social media post on Monday, Trump said, ‘I will not allow this bridge to open until the United States is fully compensated for everything we have given them, and also, importantly, Canada treats the United States with the Fairness and Respect that we deserve.’

 

Trump did not mention the bridge by name, but appears to be referring to the Gordie Howe International Bridge. The six-lane project is set to connect Windsor, Ontario, and Detroit, Michigan. Trump said that he plans to ‘start negotiations, IMMEDIATELY’, seeming to refer to a deal on the bridge, and repeated his ongoing criticism of Canada since he began a trade war with America’s northern neighbor.”

 

“‘With all that we have given them, we should own, perhaps, at least one half of this asset. The revenues generated because of the U.S. Market will be astronomical’, Trump said in the post.”

 

The bridge has been under construction since 2018 and is expected to open early this year.

 

“‘I explained that Canada paid for the construction of the bridge — more than $4 billion — and that ownership is shared between the state of Michigan and the Government of Canada’, Carney told reporters.”

 

One Canadian’s response to Carney is hilarious. Tweet attached.

 

Tokyo Rosie

@tokyorosiecr

Mark Carney pretty much fucked Canada in ways that getting fucked have never been explored before.

 

Looking for new trading opportunities is fine, but do you have to blow the bridge up to our biggest trading partner in the process? What a moron!

 

 

https://www.thegatewaypundit.com/2026/02/bridge-troubled-water-pm-carney-reacts-after-trump/

Anonymous ID: a7636b Feb. 11, 2026, 4:40 p.m. No.24247305   🗄️.is 🔗kun   >>7311 >>7367 >>7653 >>7660 >>7891 >>7933

Scott Jennings

@ScottJenningsKY

 

Democrats are LOSING the narrative on voter ID.

 

83% of Americans want it. Non-white voters, white voters, men, women — they all support it.

 

So why are Dems in a full-blown panic about it? The answer seems pretty obvious at this point.

 

https://x.com/ScottJenningsKY/status/2020966616788500588?s=20

Anonymous ID: a7636b Feb. 11, 2026, 5:18 p.m. No.24247459   🗄️.is 🔗kun   >>7527 >>7570 >>7660 >>7891 >>7933

Anons, go and collect this, it’s too long to post on the board. It might be important. 1/2 (but much longer than 2).

Something Big Is Happening

By Matt Shumer • Feb 9, 2026

Think back to February 2020

 

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper, you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.

 

I think we're in the "this seems overblown" phase of something much, much bigger than Covid.

 

I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't… my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

 

I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.

 

But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.

 

I know this is real because it happened to me first

Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.

 

For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

 

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.

 

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

 

https://shumer.dev/something-big-is-happening

Anonymous ID: a7636b Feb. 11, 2026, 5:27 p.m. No.24247527   🗄️.is 🔗kun   >>7660 >>7891 >>7933

>>24247459

2/2

Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.

I'm not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.

And here's why this matters to you, even if you don't work in tech.

The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers… it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else…

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

"But I tried AI and it wasn't that good"

I hear this constantly. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to.

 

https://shumer.dev/something-big-is-happening