Our Services

Expert Software Engineering & I.T. in South Florida Since 1993

Financial Technology

Payment processing, POS systems, and banking applications.
Cryptocurrency, tokens, CBDC, and NFTs.
Smart contracts and distributed ledger integration.
Built to the regulatory bar that serious financial workloads demand.

Custom Software Development

Custom web, desktop, and mobile applications.
SaaS platforms, APIs, and firmware.
From the first wireframe to the production deploy.
Built to spec, owned by you — not licensed back to you.

AI & Emerging Technology

Artificial Intelligence (A.I.) and machine learning deployments.
Internet of Things (IoT), blockchain, and VR/AR.
Robotic process automation and workflow intelligence.
Pragmatic integrations that fit into real operations — not science projects.

Digital Media & Marketing

Search engine optimization for regional and national queries.
Content-first distribution that compounds.
Social media strategy, publishing, and performance measurement.
Tied to conversions — not vanity metrics.

IT Solutions & Consulting

Full-stack Information Technology (I.T.) consulting.
Cybersecurity audits and hardening.
Digital transformation and office systems implementation.
Smart building and facilities integration.
Infrastructure built for what’s next, not just what’s here.

IT Hardware & Infrastructure

Information Technology (I.T.) infrastructure — servers, colocation, and network connectivity (fiber, LTE, 5G).
Security camera systems and physical site monitoring.
Solar installations and energy infrastructure.
Blockchain mining hardware.
Spec'd, deployed, and supported by the SBLOCK team.

Testimonials

Happy clients

"SBLOCK successfully brought our company into the 21st century. I didn't realize how much effort and resources we were wasting. Thank you!"

Michelle Chan, Procurement Manager, calibamboo.com

"A Website and Android application at a fraction of the cost of others. We still can't figure out how you did it, but we appreciate it! Well done!"

Jude Phillips, CEO / Owner, GilChristAutomotive.com

"Without SBLOCK we would never have gotten this project off the ground. You really know your stuff. "

Mathew McCall, Sales Manager

Recent Updates

Our latest news

What Does Meta Know? Inside Their Post-Quantum Cryptography Migration

On April 16, 2026, Meta’s engineering team published the framework they have been using to migrate their infrastructure to post-quantum cryptography (PQC). Not a preview. Not a research paper. An operational playbook, complete with algorithm choices, a five-level maturity model, and lessons from the first phase of rollout.

That raises the question you are probably asking: what does Meta know that the rest of us do not?

The honest answer, after a week of digging, is that Meta is not ahead. Apple shipped post-quantum iMessage (PQ3) in early 2024. Signal was earlier still. Cloudflare has over 50% of human HTTPS traffic running hybrid post-quantum TLS as of late 2025. What Meta did do is publish the internal playbook the rest of Big Tech kept private. Inside that playbook is a quieter, more important signal. The quantum timeline just got shorter, and the cryptographic migration that security teams were told to start “sometime this decade” is already happening inside every platform you use.

If you run infrastructure for a South Florida business in a regulated vertical (banking, fintech, healthcare, insurance, legal), this is the memo to read.

What Meta Actually Shipped

The short version: Meta has moved a significant portion of its internal server-to-server traffic onto hybrid post-quantum TLS using a combination of NIST’s new ML-KEM standard (FIPS 203) and the classical X25519 key exchange. They are doing this through their in-house TLS library, Fizz, the same library that underpins WhatsApp’s messaging transport.

Three things in the playbook are worth calling out.

The algorithm choice. Meta defaulted to ML-KEM-768 (NIST Security Level 3) as the post-quantum half of the handshake, paired with X25519 as the classical half. Both sides of the handshake must be broken for a session to be exposed, so a future attacker needs both a working quantum computer and a break against elliptic-curve crypto. This hybrid design is the industry’s de facto answer to the question “what if lattice cryptography turns out to have a hidden flaw?” It refuses to bet the house on either side.
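
Here is a minimal sketch of the hybrid construction in Python. The X25519 half uses the real cryptography package; the ML-KEM half is a stubbed, hypothetical placeholder (the standard library has no FIPS 203 implementation; in production this lives inside the TLS stack, such as OpenSSL 3.5 or Fizz). The point is the derivation step: the session key depends on both shared secrets.

```python
# Hybrid key agreement sketch: X25519 (classical) + ML-KEM-768 (post-quantum).
# The ML-KEM call below is a hypothetical placeholder, not a real API.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def mlkem768_encapsulate(public_key: bytes) -> tuple[bytes, bytes]:
    """Stand-in for FIPS 203 ML-KEM-768 encapsulation (hypothetical API).
    Real ML-KEM-768 produces a 1088-byte ciphertext and a 32-byte secret."""
    return os.urandom(1088), os.urandom(32)  # (ciphertext, shared_secret)

# Classical half: X25519 Diffie-Hellman (both keys generated locally for demo).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum half: ML-KEM encapsulation against the server's KEM key.
_ciphertext, pq_secret = mlkem768_encapsulate(b"server-mlkem-public-key")

# The session key depends on BOTH secrets: breaking one is not enough.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-tls-demo",
).derive(classical_secret + pq_secret)
```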

The real problem was not the math. Meta’s engineers flagged that the hardest part of the migration was not choosing an algorithm. It was building a cryptographic inventory: figuring out where, across thousands of services, ciphers were actually in use. “You cannot migrate what you cannot see” is the summary. This echoes what the U.S. federal PQC transition has run into. A NIST survey from early 2025 found only about 7% of federal agencies have a formal transition plan with a dedicated team.

They maintain a backup plan. Meta cryptographers are listed as co-authors on HQC, a code-based key encapsulation mechanism NIST selected in March 2025 as a non-lattice backup to ML-KEM. Meta is not only deploying the standard. They are investing in the hedge in case the standard turns out to have a structural weakness.

That is the playbook. It is practical, conservative, and (crucially) not unusual. Apple, Google, AWS, Cloudflare, and Signal are all doing variations of the same thing.

So Does Anyone Actually Have a Working Quantum Computer?

Your instinct is right. As of April 2026, no one has a cryptographically relevant quantum computer, the kind that could actually break RSA-2048 in any useful time. What exists is a frantic hardware race across six different technology bets.

⚛️ Superconducting qubits (IBM, Google, Rigetti, China USTC). IBM Condor sits at 1,121 physical qubits with a 2026 roadmap target of 4,158 qubits across linked modules. Google’s Willow (105 qubits) achieved the first below-threshold error correction in late 2024, meaning adding qubits actually reduces the logical error rate. Rigetti launched the 108-qubit Cepheus-1-108Q earlier this month. China’s USTC runs Zuchongzhi 3.0 at 105 qubits and has opened an 880-qubit cluster for commercial access.

🔗 Trapped-ion qubits (Quantinuum, IonQ). Quantinuum demonstrated 94 logical qubits in March 2026 with a logical error rate around 10⁻⁴. IonQ publicly claims they will have a cryptographically relevant system by 2028, a claim that is aggressive and treated skeptically by most cryptographers.

⚪ Neutral-atom qubits (Microsoft + Atom Computing, QuEra). Microsoft’s “Magne” system (50 logical / 1,200 physical qubits) targets early 2027. QuEra delivered a commercial-scale system to Japan’s AIST late last year. Both camps aim at 100,000 atoms per vacuum chamber within a few years.

💡 Photonic qubits (PsiQuantum, China Jiuzhang). PsiQuantum is skipping the NISQ era entirely and aiming directly at fault tolerance. Jiuzhang 3.0 demonstrates photonic sampling, which is a specialized advantage claim rather than a general-purpose machine.

🌀 Topological qubits (Microsoft Majorana 1). Microsoft announced the world’s first topological-qubit QPU in February 2025, targeting a million qubits on a single chip. The underlying physics is still contested in peer review, so treat this as a wild card rather than a shipped product.

🏭 Silicon-spin (CMOS) qubits (Intel). See the next section. This one deserves more space because it is the bet with the longest timeline and the most manufacturing muscle behind it.

Each of these is a “working” quantum computer in the narrow sense that they run quantum circuits and produce results. None of them can run Shor’s algorithm against the keys protecting your online banking session. Yet.

What Is Intel Doing?

The answer is interesting: Intel is playing a completely different game from IBM and Google.

Intel’s quantum team produced the Tunnel Falls chip in 2023, a 12-qubit silicon spin-qubit device manufactured on a standard 300mm wafer at Intel’s D1 fab, the same production line that makes CPUs. Yield was reportedly 95%, with over 24,000 quantum dots per wafer. By 2026, Intel’s silicon spin qubits have demonstrated single-qubit fidelities around 99.9% and coherence times of hundreds of microseconds to over a millisecond in isotopically purified silicon.

Twelve qubits sounds tiny next to IBM’s 1,121. It is tiny.

But Intel’s bet is long-dated and unusual: when a cryptographically relevant quantum computer finally arrives, it will need not thousands but millions of qubits. At that scale, the industry that knows how to pattern billions of near-identical features on silicon wafers has a manufacturing advantage that no exotic trapped-ion lab can match. Intel is not racing to have the biggest NISQ demo this year. They are positioning to mass-produce the qubits that everyone else will need to buy when fault-tolerant systems scale up.

Whether that bet pays off is an open question. But if you were writing a check on quantum-hardware futures, “Intel is behind” and “Intel is positioned to dominate manufacturing” can both be true at the same time.

The Signal Buried in the Noise

Here is the piece most of the coverage missed. In May 2025, a Google researcher named Craig Gidney published a paper estimating that breaking RSA-2048 would require fewer than one million noisy qubits running for under a week. His previous estimate, from 2019, was 20 million qubits running for 8 hours. That is a 20× reduction in six years, and the research community believes the number is still falling.

Combine three facts:

  • NIST finalized FIPS 203, 204, and 205 in August 2024. The standards are final and ready to implement.
  • Gidney 2025 cut the quantum-resource requirement by an order of magnitude.
  • Every major platform (Apple, Google, Cloudflare, AWS, Signal, Meta) has shipped or is shipping PQC in production right now.

That is the “what does Meta know” story. Not that Meta has spies at Los Alamos. The people who build the internet’s crypto infrastructure watched the threat estimate shrink twentyfold between 2019 and 2025, looked at how long real migrations actually take, and decided they had exactly enough runway.

The U.S. government’s Grand Challenge, announced this month, targets the first fault-tolerant quantum computer by 2028. Note the year. Not 2035. Not “sometime in the 2030s.” 2028.

What Your Business Should Actually Do

If you are running IT for a regulated South Florida business (we covered the practical angle on this in our earlier look at federally regulated stablecoins and how compliance drives infrastructure choices), three things are worth doing this quarter. None of them are expensive.

1. Build a cryptographic inventory. This is the step Meta said was the hardest and the one nobody wants to do. You cannot migrate what you cannot see. List every TLS endpoint, every VPN, every code-signing certificate, every database-at-rest encryption configuration. That is the boring, necessary first step (a minimal scanner sketch follows this list).

2. Test harvest-now-decrypt-later exposure. The realistic quantum threat is not someone breaking your TLS session live. It is someone recording your encrypted traffic today and decrypting it in 2030. Any data that must stay confidential past 2030 (contracts, medical records, financial history, trade secrets) is already at risk today.

3. Follow Meta’s playbook, not their timeline. Hybrid PQ (ML-KEM + X25519) is the right default for anything new you deploy. OpenSSL 3.5 supports it natively. Most major cloud providers (AWS, Google Cloud, Cloudflare) have hybrid PQ TLS available right now. You do not need to wait for standards to mature. They are mature.
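
The scanner sketch promised in step 1: a starting point using only the Python standard library. Host names are placeholders. Note that the stdlib cannot report the negotiated key-exchange group, so this records protocol and cipher per endpoint; it is the seed of an inventory, not a full PQ audit.

```python
# Minimal TLS endpoint inventory: record the protocol version and cipher
# each host negotiates today. Extend HOSTS with every endpoint you own.
import socket
import ssl

HOSTS = ["example.com", "www.cloudflare.com"]  # replace with your endpoints

context = ssl.create_default_context()
for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                name, version, bits = tls.cipher()
                print(f"{host}: {tls.version()} {name} ({bits}-bit)")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}: handshake failed ({exc})")
```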

You do not need to panic. You do not need to rip and replace. But if you are the person in your organization responsible for “will our encrypted data stay encrypted in 2032?”, the answer depends on the work you start this year.
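
One way to turn that question into arithmetic is Mosca's inequality: if the years your data must stay confidential plus the years your migration will take exceed the years until a cryptographically relevant quantum computer, the data is exposed. A toy check (the inputs are illustrative assumptions, not our estimates):

```python
# Mosca's inequality: x + y > z means trouble.
#   x = years the data must remain confidential (shelf life)
#   y = years the migration will take
#   z = years until a cryptographically relevant quantum computer
def mosca_at_risk(shelf_life_years: float, migration_years: float,
                  years_to_crqc: float) -> bool:
    """True if harvest-now-decrypt-later already threatens this data."""
    return shelf_life_years + migration_years > years_to_crqc

# Illustrative inputs only; substitute your own estimates.
print(mosca_at_risk(shelf_life_years=10, migration_years=4, years_to_crqc=8))  # True
```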

AI is also reshaping the research pipeline that feeds this stuff. We covered Claude’s recent work on a Knuth-era combinatorics problem as one example of how quickly the tools for attacking hard math problems are improving. The same kind of progress is happening in quantum hardware design.

Meta just handed you the playbook. It is not secret knowledge. It is a diligent engineering team saying, out loud, that the timeline they built against has tightened, and the work is neither exotic nor optional.

Need help auditing your organization’s cryptographic inventory or planning a post-quantum migration roadmap? Get in touch with SBLOCK.


Claude Opus 4.7 Just Launched. Here’s What It Actually Changes for Business.

Anthropic released Claude Opus 4.7 today. If you’re running a business that depends on software, handles documents, or is evaluating AI tools — this one matters. Not because it’s the flashiest launch of the year, but because of what specifically improved and who it’s built for.

What Actually Changed

Claude Opus 4.7 is a direct upgrade to the Opus 4.6 model that powered the Knuth breakthrough we covered last month. The improvements are targeted, not cosmetic.

Software engineering got meaningfully better. Opus 4.7 scored +13% on a 93-task coding benchmark compared to its predecessor, and resolved 3x more production-level tasks on Rakuten-SWE-Bench. On CursorBench — which measures real developer workflows — it hit 70%, up from 58%. These aren’t toy benchmarks. They’re measuring whether the model can actually ship code.

It’s dramatically more efficient. In enterprise evaluations by Box, Opus 4.7 used 56% fewer model calls, 50% fewer tool calls, responded 24% faster, and consumed 30% fewer AI Units than the previous version. That translates directly to lower API costs for businesses running Claude at scale.

Document analysis improved substantially. On Databricks’ OfficeQA Pro benchmark, Opus 4.7 made 21% fewer errors when working with source documents — financial reports, contracts, technical specifications. For any business that processes paperwork, that’s a measurable reduction in mistakes.

Vision got a 3x resolution upgrade. The model now processes images at more than three times the resolution of Opus 4.6. Charts, dense documents, screen UIs, and slide decks are all handled with significantly higher accuracy. If you’ve ever pasted a screenshot into an AI chat and gotten a vague response, this is the fix.

Long-running tasks stay on track. Opus 4.7 delivered the most consistent long-context performance of any model tested, tying for the top overall score across six evaluation modules. For businesses running multi-step workflows — research, analysis, code generation, reporting — the model no longer drifts off course halfway through.

Why This Matters Beyond the Benchmarks

The numbers are strong, but the real story is about what kind of company Anthropic is becoming — and what that signals for businesses evaluating AI vendors.

Anthropic now has over 1,000 enterprise customers paying more than $1 million annually for Claude services. Their annual recurring revenue has hit $30 billion, and analysts project it could triple by year-end. Claude’s share of chatbot traffic nearly doubled between February and March 2026. This isn’t a research lab anymore. It’s a platform company with serious enterprise traction.

The UK government is using Claude to power GOV.UK, the country’s main public information portal. The British government is actively courting Anthropic for further expansion, including a potential dual stock market listing. When a G7 government selects your AI for citizen-facing services, that’s a credibility signal that matters.

Opus 4.7 is available everywhere businesses already deploy. It launched simultaneously on the Claude API, Amazon Bedrock, GitHub Copilot, Google Cloud, and Microsoft Azure. If you’re on any of those platforms, the upgrade is a configuration change — not a migration.

The Elephant in the Room: Mythos

CNBC reported today that Anthropic describes Opus 4.7 as their most powerful generally available model — but positions it as “less broadly capable” than Claude Mythos Preview, their unreleased frontier model. That distinction matters.

Mythos is the ceiling. Opus 4.7 is the floor that businesses can actually build on today. And for most real-world applications — writing code, analyzing documents, automating workflows, processing images — the floor just got raised significantly.

What This Means for Your Business

If you’re already using Claude, this is a free upgrade. Opus 4.7 is a drop-in replacement for Opus 4.6 across every deployment channel. You get better results at lower cost without changing a single line of integration code.
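
In practice that configuration change is one string. A minimal sketch using the Anthropic Python SDK; the model identifiers here are our assumed strings, so check the current model list before copying:

```python
# Upgrading from Opus 4.6 to 4.7 via the Anthropic Python SDK: the
# integration code is unchanged; only the model identifier moves.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-7",  # assumed identifier; was "claude-opus-4-6"
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
)
print(message.content[0].text)
```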

If you’re evaluating AI tools and haven’t committed yet, the landscape just shifted. The efficiency gains alone — 56% fewer API calls, 24% faster responses — change the unit economics of AI-powered automation. Projects that didn’t pencil out at Opus 4.6 pricing might work now.

If you’re a software team, the coding improvements are the headline. A model that resolves 3x more production tasks and scores 70% on real developer workflow benchmarks isn’t an assistant anymore. It’s a junior engineer that works around the clock.

And if you’re in an industry that runs on documents — legal, financial services, insurance, healthcare — the 21% error reduction in document analysis is the number to focus on. That’s not a marginal improvement. That’s the difference between an AI tool you have to babysit and one you can trust.

So What’s the Move?

The businesses that gain the most from a model release like this aren’t the ones that rush to adopt. They’re the ones that have already mapped out where AI fits into their operations and can slot the upgrade into an existing workflow.

If you haven’t done that mapping yet, that’s where SBLOCK comes in. We advise on AI tool selection, integration architecture, and automation strategy — for software teams, operations teams, and leadership trying to figure out which of these capabilities actually matter for their specific business.

The model got better. The question is whether your business is set up to take advantage of it.

Request a Consultation

SBLOCK has been building with Claude since the early access days. We know what it’s good at, where the limits are, and how to integrate it into production systems that have to work every day — not just pass a benchmark.


Claude Solved a Math Problem Donald Knuth Couldn’t. He Published a Paper About It.

Claude Opus 4.6 cracked an open Hamiltonian cycle problem in a 3D directed graph that Knuth, author of The Art of Computer Programming, had been working on for weeks. Knuth’s response, in print: “Shock! Shock!”

[Figure: a directed graph on vertices v1 through v6, with the graph edges, the Hamiltonian cycle, and the AI-computed path highlighted]

What Happened

In early March 2026, Donald Knuth published a paper. He titled it “Claude’s Cycles.”

Knuth wrote The Art of Computer Programming, created TeX, and is probably the most respected computer scientist still working. He’s 87, has been publishing serious mathematical work for six decades, and is not someone who reaches for hyperbole. The paper described his reaction to watching Anthropic’s Claude Opus 4.6 solve an open problem in graph theory he’d spent weeks on without getting anywhere.

Claude solved it.

Knuth’s paper was candid in a way that made it as interesting as the result itself. Reading a legendary scientist genuinely grappling with what just happened is not something you see often. The paper circulated fast through academic and tech communities and, somewhat improbably, pushed Claude to the number one spot on the U.S. App Store.

A math paper sent an AI app to the top of the charts. That’s not something that has happened before.

Timeline of Events

📅
Early March 2026

Anthropic launches Claude Sonnet 4.6 and Opus 4.6 with a 1M-token context window in beta and persistent memory for all users.

🧠
Claude Opus 4.6 Solves the Problem

Claude constructs a valid Hamiltonian cycle in Knuth’s 3D directed graph, a problem Knuth had been working on for weeks without finding a solution.

📄
Knuth Publishes “Claude’s Cycles”

Knuth formally documents what happened, calls the result “a dramatic advance in automatic deduction and creative problem solving,” and opens with “Shock! Shock!”

🌐
Global Reaction

The paper spreads through academic and tech circles. Claude hits number one on the U.S. App Store, driven entirely by the credibility of the source, not by any marketing push.

Who Donald Knuth Is and Why His Reaction Matters

If you don’t know Knuth, here’s the short version: he’s 87 years old, has been doing serious mathematical work for six decades, and is still actively publishing. His multi-volume series The Art of Computer Programming is called the bible of computer science. When he says he’s been working on something for weeks and hasn’t cracked it, that’s not a throwaway comment.

AI solving benchmark math problems isn’t news anymore. This is different. Knuth isn’t a benchmark. He’s a living legend who was working on a real open problem, and he put it in writing that a machine surprised him.

That shift matters. Not because “AI can pass the bar exam” or score well on some standardized test. Because the person who has thought harder and longer about computation than almost anyone alive handed a hard problem to a model and walked away genuinely shocked by what came back.

That’s a different kind of signal than a leaderboard score.

The Problem and Why It’s Commercially Significant

The Hamiltonian cycle problem asks you to find a path through every node in a graph, visiting each exactly once before returning to the start. In a 3D directed graph the structure is dense and the valid paths are hard to find. The difficulty scales fast, and the problem class has been studied for decades without yielding easily.
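
To make the problem concrete, here is a minimal backtracking search for small directed graphs (the example edges are illustrative, not Knuth's instance). The worst case is factorial in the number of nodes, which is why instances like Knuth's resist both humans and brute force:

```python
# Backtracking search for a Hamiltonian cycle in a small directed graph:
# visit every node exactly once, then return to the start.
def hamiltonian_cycle(adj: dict[str, set[str]], start: str) -> list[str] | None:
    n = len(adj)

    def extend(path: list[str], visited: set[str]) -> list[str] | None:
        if len(path) == n:
            # All nodes used: valid cycle only if an edge closes back to start.
            return path + [start] if start in adj[path[-1]] else None
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                found = extend(path + [nxt], visited | {nxt})
                if found:
                    return found
        return None  # dead end: backtrack

    return extend([start], {start})

# A 6-node directed example (edges are illustrative, not Knuth's graph).
graph = {
    "v1": {"v2", "v4"}, "v2": {"v3"}, "v3": {"v6"},
    "v4": {"v5"}, "v5": {"v2", "v6"}, "v6": {"v1", "v4"},
}
print(hamiltonian_cycle(graph, "v1"))  # ['v1', 'v4', 'v5', 'v2', 'v3', 'v6', 'v1']
```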

This isn’t a parlor trick. Graph theory and combinatorics are the mathematical foundation of several areas of applied computer science that matter commercially:

Network Routing

Efficient packet routing in communications networks is built on the same underlying math.

Semiconductor Design

Chip layout and circuit path optimization rely on graph traversal at scale.

Logistics Optimization

Vehicle routing, supply chain sequencing, and delivery scheduling are all variants of Hamiltonian-class problems.

A model that can work on open problems in this space isn’t just academically interesting. The math Claude demonstrated capability on sits underneath real infrastructure that companies spend serious money on.

What This Actually Means

The interesting part isn’t that an AI solved a hard problem. Models solve hard problems regularly now. The interesting part is that a researcher with 60 years of experience, staring at a specific unsolved case, reached for the AI tool. And the AI gave him something he didn’t see coming.

Knuth didn’t step back from the field when the tools changed. He used them. At 87, still publishing, still working on open problems, still willing to be surprised. The paper that came out of it probably did more for Claude’s public profile than any product launch Anthropic could have planned.

The pattern here is worth noting: the people and institutions that held out longest against taking AI seriously are now the ones getting moved fastest. Knuth’s reaction isn’t just a data point about one model and one problem. It’s a signal about where the whole thing is heading, coming from the person who has watched this field the most carefully for the longest time.

When the holdouts start moving, the pace usually picks up across the board.

Key Takeaways

  • Claude Opus 4.6 solved an open Hamiltonian cycle problem in a 3D directed graph that Donald Knuth, with 60 years of experience, had been unable to close.
  • Knuth published a formal paper on it titled “Claude’s Cycles,” calling the result “a dramatic advance in automatic deduction and creative problem solving.”
  • The paper is one of the most credible endorsements of AI mathematical reasoning on record, not a benchmark score but a peer assessment from the field’s elder statesman.
  • It went viral and pushed Claude to number one on the U.S. App Store with no marketing behind it, just the weight of the source.
  • The underlying math has direct commercial relevance in network routing, chip design, and logistics, so this isn’t purely academic.

 

AI Coding Assistants Compared: OpenClaw vs Goose for Software Development

AI coding assistants are reshaping how development teams ship software. At SBLOCK, we put two platforms to the test — OpenClaw by Peter Steinberger and Goose by Block — and discovered the biggest difference wasn’t technical at all.

What We Tested

Our team evaluated both AI coding assistants across three dimensions that matter most in day-to-day software development: context awareness, session management, and task execution behavior. We wanted to understand which tool actually fits into a real developer workflow — not just which one generates code faster.

OpenClaw
  • Deep context awareness — sees into databases, tracks across sessions and channels (Telegram, web)
  • Predictable execution — solves the problems you actually ask it to solve
  • Strong tool integration — seamless connection to existing development workflows
  • Cross-session memory — maintains context between conversations and platforms

Open ecosystem — community feedback, plugins, and documentation created a compound growth effect.

Goose
  • Scope limitations — difficulty seeing across sessions and channels
  • Runs ahead — sometimes tries to solve problems you didn’t ask about
  • Uncertain architecture — unclear if limitations are platform-inherent or implementation-specific
  • Isolated context — each session starts relatively fresh

Stayed internal at Block — no community, no ecosystem, no compound effect despite strong underlying tech.

Key Insight: The real difference between these AI developer tools wasn’t purely technical — it was visibility and ecosystem. Goose was kept internal. OpenClaw went open. The compound effect of community feedback, plugins, and documentation made the difference.

The Real Issue: Marketing, Not Architecture

When Block developed Goose, they kept it internal. It served their own software development lifecycle beautifully, but the developer community never saw it. No third-party plugins. No blog posts explaining why it works. No open source ecosystem.

Peter Steinberger took a different approach with OpenClaw. Open access led to more developers, more feedback, better documentation, and wider adoption. The compound effect is real:

  • More developers → more feedback → better documentation → more developers
  • Open ecosystem → plugins & integrations → wider adoption → more contributors

Goose never got that runway: a capable AI coding assistant that nobody had heard of.

The “Ask First” vs. “Just Do It” Tradeoff

Some AI assistants run ahead and solve problems proactively. Others wait for explicit instructions. But here’s the thing — this is actually learnable behavior. A well-designed AI coding assistant can adapt to your development preferences:

  • “I’m debugging, don’t interrupt me with suggestions”
  • “I’m brainstorming, throw ideas at me”
  • “Just execute what I asked, don’t expand scope”
  • “Surface context I might have missed”

The best AI developer tools adapt to your workflow rather than forcing you to adapt to theirs.
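
As a sketch of what "learnable behavior" could look like as configuration (a hypothetical shape, not OpenClaw's or Goose's actual format):

```python
# Hypothetical per-mode assistant behavior config; illustrative only.
from dataclasses import dataclass

@dataclass
class AssistantMode:
    offer_suggestions: bool  # volunteer ideas unprompted?
    expand_scope: bool       # act beyond the literal request?
    surface_context: bool    # flag related things you may have missed?

MODES = {
    "debugging":     AssistantMode(offer_suggestions=False, expand_scope=False, surface_context=False),
    "brainstorming": AssistantMode(offer_suggestions=True,  expand_scope=True,  surface_context=True),
    "execute-only":  AssistantMode(offer_suggestions=False, expand_scope=False, surface_context=True),
}

mode = MODES["debugging"]  # "I'm debugging, don't interrupt me with suggestions"
```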

What to Look For in an AI Coding Assistant

When choosing an AI assistant for software development, these are the dimensions that actually matter:

  1. Context awareness — Can it understand your codebase, project structure, and team conventions?
  2. Tool integration — Does it connect to Git, project management, CI/CD pipelines, and communication tools?
  3. Security & privacy — Where does your code and data go? Self-hosted and on-device options offer more control.
  4. Community & ecosystem — An active open source community means better documentation, more integrations, and faster issue resolution.
  5. Adaptability — Does it learn your preferences over time, or force you to conform to its defaults?

Need Help Choosing the Right AI Developer Tools?

SBLOCK advises on AI tool selection, workflow integration, and automation strategy for software teams.

Get in Touch