
[llvm][aarch64][x86] Implement a lightweight spectre v1 mitigation, like MSVC /Qspectre #116450

Open
wants to merge 1 commit into main

Conversation

dpaoliello
Contributor

Implements a form of load hardening as a mitigation against Spectre v1.

Unlike the other LLVM mitigations, this one is more like MSVC's /Qspectre flag: it provides less comprehensive coverage, but it is cheap enough that it can be applied widely.

Specifically, this mitigation tries to identify the pattern outlined in https://devblogs.microsoft.com/cppblog/spectre-mitigations-in-msvc, that is, an offsetted load whose result is used to offset another load, both of which are guarded by a bounds check. For example:

if (untrusted_index < array1_length) {
    unsigned char value = array1[untrusted_index];
    unsigned char value2 = array2[value * 64];
}

The other case that this mitigation looks for is an indirect call whose target comes from an offsetted load, protected by a bounds check. For example:

if (index < funcs_len) {
  return funcs[index * 4]();
}

This mitigation inserts a new speculative_data_barrier intrinsic into the block containing the second load or the indirect call. The intrinsic is lowered to LFENCE on x86 and CSDB on AArch64.
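
A rough illustration of the result (a hand-written sketch, not taken from this patch's tests; the function and value names are illustrative): after the pass runs, the first example above would look approximately like this in IR, with the barrier at the front of the guarded block. The indirect-call case is handled the same way, with the barrier at the front of the block containing the call.

define i8 @guarded_load(ptr %array1, i64 %array1_length, ptr %array2, i64 %untrusted_index) {
entry:
  %inbounds = icmp ult i64 %untrusted_index, %array1_length
  br i1 %inbounds, label %guarded, label %done

guarded:
  ; Inserted by the pass at the front of the block containing the second load.
  call void @llvm.speculative.data.barrier()
  %gep1 = getelementptr inbounds i8, ptr %array1, i64 %untrusted_index
  %value = load i8, ptr %gep1
  %wide = zext i8 %value to i64
  %scaled = mul i64 %wide, 64
  %gep2 = getelementptr inbounds i8, ptr %array2, i64 %scaled
  %value2 = load i8, ptr %gep2
  br label %done

done:
  %result = phi i8 [ %value2, %guarded ], [ 0, %entry ]
  ret i8 %result
}

declare void @llvm.speculative.data.barrier()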

@llvmbot

llvmbot commented Nov 15, 2024

@llvm/pr-subscribers-llvm-transforms
@llvm/pr-subscribers-llvm-ir
@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-backend-x86

Author: Daniel Paoliello (dpaoliello)

Changes

Implements a form of load hardening as a mitigation against Spectre v1.

Unlike the other LLVM mitigations, this one is more like MSVC's /Qspectre flag: it provides less comprehensive coverage, but it is cheap enough that it can be applied widely.

Specifically, this mitigation tries to identify the pattern outlined in <https://devblogs.microsoft.com/cppblog/spectre-mitigations-in-msvc>, that is, an offsetted load whose result is used to offset another load, both of which are guarded by a bounds check. For example:

if (untrusted_index < array1_length) {
    unsigned char value = array1[untrusted_index];
    unsigned char value2 = array2[value * 64];
}

The other case that this mitigation looks for is an indirect call whose target comes from an offsetted load, protected by a bounds check. For example:

if (index < funcs_len) {
  return funcs[index * 4]();
}

This mitigation inserts a new speculative_data_barrier intrinsic into the block containing the second load or the indirect call. The intrinsic is lowered to LFENCE on x86 and CSDB on AArch64.


Patch is 27.88 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/116450.diff

15 Files Affected:

  • (modified) llvm/include/llvm/IR/Intrinsics.td (+3)
  • (modified) llvm/include/llvm/InitializePasses.h (+1)
  • (added) llvm/include/llvm/Transforms/Utils/GuardedLoadHardening.h (+31)
  • (modified) llvm/lib/CodeGen/IntrinsicLowering.cpp (+4)
  • (modified) llvm/lib/Passes/PassBuilder.cpp (+1)
  • (modified) llvm/lib/Passes/PassRegistry.def (+1)
  • (modified) llvm/lib/Target/AArch64/AArch64InstrInfo.td (+3)
  • (modified) llvm/lib/Target/AArch64/AArch64TargetMachine.cpp (+4)
  • (modified) llvm/lib/Target/X86/X86InstrCompiler.td (+3)
  • (modified) llvm/lib/Target/X86/X86TargetMachine.cpp (+4)
  • (modified) llvm/lib/Transforms/Utils/CMakeLists.txt (+1)
  • (added) llvm/lib/Transforms/Utils/GuardedLoadHardening.cpp (+288)
  • (added) llvm/test/CodeGen/AArch64/speculative-data-barrier.ll (+15)
  • (added) llvm/test/CodeGen/X86/speculative-data-barrier.ll (+15)
  • (added) llvm/test/Transforms/Util/guarded-load-hardening.ll (+245)
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 1ca8c2565ab0b6..9074bb18903a25 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -885,6 +885,9 @@ def int_readcyclecounter : DefaultAttrsIntrinsic<[llvm_i64_ty]>;
 
 def int_readsteadycounter : DefaultAttrsIntrinsic<[llvm_i64_ty]>;
 
+def int_speculative_data_barrier  : DefaultAttrsIntrinsic<[], [],
+                                            [IntrHasSideEffects]>;
+
 // The assume intrinsic is marked InaccessibleMemOnly so that proper control
 // dependencies will be maintained.
 def int_assume : DefaultAttrsIntrinsic<
diff --git a/llvm/include/llvm/InitializePasses.h b/llvm/include/llvm/InitializePasses.h
index 7ecd59a14f709a..35976931d566b6 100644
--- a/llvm/include/llvm/InitializePasses.h
+++ b/llvm/include/llvm/InitializePasses.h
@@ -126,6 +126,7 @@ void initializeGVNLegacyPassPass(PassRegistry &);
 void initializeGlobalMergeFuncPassWrapperPass(PassRegistry &);
 void initializeGlobalMergePass(PassRegistry &);
 void initializeGlobalsAAWrapperPassPass(PassRegistry &);
+void initializeGuardedLoadHardeningPass(PassRegistry &);
 void initializeHardwareLoopsLegacyPass(PassRegistry &);
 void initializeMIRProfileLoaderPassPass(PassRegistry &);
 void initializeIRSimilarityIdentifierWrapperPassPass(PassRegistry &);
diff --git a/llvm/include/llvm/Transforms/Utils/GuardedLoadHardening.h b/llvm/include/llvm/Transforms/Utils/GuardedLoadHardening.h
new file mode 100644
index 00000000000000..2e07181bfffb56
--- /dev/null
+++ b/llvm/include/llvm/Transforms/Utils/GuardedLoadHardening.h
@@ -0,0 +1,31 @@
+//=== GuardedLoadHardening.h - Lightweight spectre v1 mitigation *- C++ -*===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===---------------------------------------------------------------------===//
+// Lightweight load hardening as a mitigation against Spectre v1.
+//===---------------------------------------------------------------------===//
+
+#ifndef LLVM_TRANSFORMS_GUARDEDLOADHARDENING_H
+#define LLVM_TRANSFORMS_GUARDEDLOADHARDENING_H
+
+#include "llvm/IR/PassManager.h"
+
+namespace llvm {
+
+class FunctionPass;
+
+class GuardedLoadHardeningPass
+    : public PassInfoMixin<GuardedLoadHardeningPass> {
+public:
+  GuardedLoadHardeningPass() = default;
+  PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM);
+};
+
+FunctionPass *createGuardedLoadHardeningPass();
+
+} // namespace llvm
+
+#endif
diff --git a/llvm/lib/CodeGen/IntrinsicLowering.cpp b/llvm/lib/CodeGen/IntrinsicLowering.cpp
index f799a8cfc1ba7e..fd2fb5f5f0ffbd 100644
--- a/llvm/lib/CodeGen/IntrinsicLowering.cpp
+++ b/llvm/lib/CodeGen/IntrinsicLowering.cpp
@@ -324,6 +324,10 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     break;
   }
 
+  case Intrinsic::speculative_data_barrier:
+    break; // Simply strip out speculative_data_barrier on unsupported
+           // architectures
+
   case Intrinsic::dbg_declare:
   case Intrinsic::dbg_label:
     break;    // Simply strip out debugging intrinsics
diff --git a/llvm/lib/Passes/PassBuilder.cpp b/llvm/lib/Passes/PassBuilder.cpp
index a181a28f502f59..ca54f9fb92d9da 100644
--- a/llvm/lib/Passes/PassBuilder.cpp
+++ b/llvm/lib/Passes/PassBuilder.cpp
@@ -308,6 +308,7 @@
 #include "llvm/Transforms/Utils/Debugify.h"
 #include "llvm/Transforms/Utils/EntryExitInstrumenter.h"
 #include "llvm/Transforms/Utils/FixIrreducible.h"
+#include "llvm/Transforms/Utils/GuardedLoadHardening.h"
 #include "llvm/Transforms/Utils/HelloWorld.h"
 #include "llvm/Transforms/Utils/IRNormalizer.h"
 #include "llvm/Transforms/Utils/InjectTLIMappings.h"
diff --git a/llvm/lib/Passes/PassRegistry.def b/llvm/lib/Passes/PassRegistry.def
index 7c3798f6462a46..f451ade4a295a9 100644
--- a/llvm/lib/Passes/PassRegistry.def
+++ b/llvm/lib/Passes/PassRegistry.def
@@ -370,6 +370,7 @@ FUNCTION_PASS("flatten-cfg", FlattenCFGPass())
 FUNCTION_PASS("float2int", Float2IntPass())
 FUNCTION_PASS("gc-lowering", GCLoweringPass())
 FUNCTION_PASS("guard-widening", GuardWideningPass())
+FUNCTION_PASS("guarded-load-hardening", GuardedLoadHardeningPass())
 FUNCTION_PASS("gvn-hoist", GVNHoistPass())
 FUNCTION_PASS("gvn-sink", GVNSinkPass())
 FUNCTION_PASS("helloworld", HelloWorldPass())
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index 10e34a83a10da1..4818a638584cb7 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -10607,6 +10607,9 @@ let Predicates = [HasLSFE] in {
 let Uses = [FPMR, FPCR] in
 defm FMMLA : SIMDThreeSameVectorFP8MatrixMul<"fmmla">;
 
+// Use the CSDB instruction as a barrier.
+def : Pat<(int_speculative_data_barrier), (HINT 0x14)>;
+
 include "AArch64InstrAtomics.td"
 include "AArch64SVEInstrInfo.td"
 include "AArch64SMEInstrInfo.td"
diff --git a/llvm/lib/Target/AArch64/AArch64TargetMachine.cpp b/llvm/lib/Target/AArch64/AArch64TargetMachine.cpp
index 074f39c19fdb24..025b23993eca28 100644
--- a/llvm/lib/Target/AArch64/AArch64TargetMachine.cpp
+++ b/llvm/lib/Target/AArch64/AArch64TargetMachine.cpp
@@ -49,6 +49,7 @@
 #include "llvm/TargetParser/Triple.h"
 #include "llvm/Transforms/CFGuard.h"
 #include "llvm/Transforms/Scalar.h"
+#include "llvm/Transforms/Utils/GuardedLoadHardening.h"
 #include "llvm/Transforms/Utils/LowerIFunc.h"
 #include "llvm/Transforms/Vectorize/LoopIdiomVectorize.h"
 #include <memory>
@@ -669,6 +670,9 @@ void AArch64PassConfig::addIRPasses() {
       addPass(createCFGuardCheckPass());
   }
 
+  // Lightweight spectre v1 mitigation.
+  addPass(createGuardedLoadHardeningPass());
+
   if (TM->Options.JMCInstrument)
     addPass(createJMCInstrumenterPass());
 }
diff --git a/llvm/lib/Target/X86/X86InstrCompiler.td b/llvm/lib/Target/X86/X86InstrCompiler.td
index ea0b66c2f55162..fab982fdd68932 100644
--- a/llvm/lib/Target/X86/X86InstrCompiler.td
+++ b/llvm/lib/Target/X86/X86InstrCompiler.td
@@ -2213,3 +2213,6 @@ def : Pat<(cttz_zero_undef (loadi64 addr:$src)), (BSF64rm addr:$src)>;
 let Predicates = [HasMOVBE] in {
  def : Pat<(bswap GR16:$src), (ROL16ri GR16:$src, (i8 8))>;
 }
+
+// Use the LFENCE instruction as a barrier.
+def : Pat<(int_speculative_data_barrier), (LFENCE)>;
\ No newline at end of file
diff --git a/llvm/lib/Target/X86/X86TargetMachine.cpp b/llvm/lib/Target/X86/X86TargetMachine.cpp
index 20dfdd27b33df6..e3a85adf09409c 100644
--- a/llvm/lib/Target/X86/X86TargetMachine.cpp
+++ b/llvm/lib/Target/X86/X86TargetMachine.cpp
@@ -48,6 +48,7 @@
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/TargetParser/Triple.h"
 #include "llvm/Transforms/CFGuard.h"
+#include "llvm/Transforms/Utils/GuardedLoadHardening.h"
 #include <memory>
 #include <optional>
 #include <string>
@@ -492,6 +493,9 @@ void X86PassConfig::addIRPasses() {
     }
   }
 
+  // Lightweight spectre v1 mitigation.
+  addPass(createGuardedLoadHardeningPass());
+
   if (TM->Options.JMCInstrument)
     addPass(createJMCInstrumenterPass());
 }
diff --git a/llvm/lib/Transforms/Utils/CMakeLists.txt b/llvm/lib/Transforms/Utils/CMakeLists.txt
index 65bd3080662c4d..503b0cdb080d4a 100644
--- a/llvm/lib/Transforms/Utils/CMakeLists.txt
+++ b/llvm/lib/Transforms/Utils/CMakeLists.txt
@@ -30,6 +30,7 @@ add_llvm_component_library(LLVMTransformUtils
   FunctionComparator.cpp
   FunctionImportUtils.cpp
   GlobalStatus.cpp
+  GuardedLoadHardening.cpp
   GuardUtils.cpp
   HelloWorld.cpp
   InlineFunction.cpp
diff --git a/llvm/lib/Transforms/Utils/GuardedLoadHardening.cpp b/llvm/lib/Transforms/Utils/GuardedLoadHardening.cpp
new file mode 100644
index 00000000000000..c2c50108bed81a
--- /dev/null
+++ b/llvm/lib/Transforms/Utils/GuardedLoadHardening.cpp
@@ -0,0 +1,288 @@
+//=== GuardedLoadHardening.cpp -Lightweight spectre v1 mitigation *- C++ -*===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// Implements a form of load hardening as a mitigation against Spectre v1.
+// Unlike the other [LLVM mitigations](/llvm/docs/SpeculativeLoadHardening.md)
+// this mitigation is more like MSVC's /Qspectre flag in that it provides less
+// comprehensive coverage but is also cheap enough that it can be widely
+// applied.
+//
+// Specifically this mitigation is trying to identify the pattern outlined in
+// <https://devblogs.microsoft.com/cppblog/spectre-mitigations-in-msvc>
+// that is, an offsetted load that is used to offset another load, both of which
+// are guarded by a bounds check. For example:
+// ```cpp
+// if (untrusted_index < array1_length) {
+//     unsigned char value = array1[untrusted_index];
+//     unsigned char value2 = array2[value * 64];
+// }
+// ```
+//
+// The other case that this mitigation looks for is an indirect call from an
+// offsetted load that is protected by a bounds check. For example:
+// ```cpp
+// if (index < funcs_len) {
+//   return funcs[index * 4]();
+// }
+// ```
+//
+// This mitigation will insert the `speculative_data_barrier` intrinsic into the
+// block with the second load or the indirect call.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Transforms/Utils/GuardedLoadHardening.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/IR/IRBuilder.h"
+#include "llvm/InitializePasses.h"
+#include "llvm/Pass.h"
+#include "llvm/Support/CommandLine.h"
+
+using namespace llvm;
+
+#define DEBUG_TYPE "guarded-load-hardening"
+
+static cl::opt<bool>
+    EnableGuardedLoadHardening("guarded-load-hardening",
+                               cl::desc("Enable guarded load hardening"),
+                               cl::init(false), cl::Hidden);
+
+STATISTIC(NumIntrInserted, "Intrinsics inserted");
+STATISTIC(CandidateBlocks, "Candidate blocks discovered");
+STATISTIC(OffsettedLoads, "Offsetted loads discovered");
+STATISTIC(DownstreamInstr, "Downstream loads or calls discovered");
+STATISTIC(OffsettedLoadsRemoved, "Candidate offsetted loads removed");
+
+namespace {
+
+class GuardedLoadHardening : public FunctionPass {
+public:
+  static char ID;
+
+  // Default constructor required for the INITIALIZE_PASS macro.
+  GuardedLoadHardening() : FunctionPass(ID) {}
+
+  bool runOnFunction(Function &F) override;
+};
+
+} // end anonymous namespace
+
+/// Visits the given value and all of its operands recursively, if they are of a
+/// type that is interesting to this analysis.
+bool visitDependencies(const Value &Start,
+                       const std::function<bool(const Value &)> &Visitor) {
+  SmallVector<const Value *, 4> Worklist{&Start};
+  while (!Worklist.empty()) {
+    auto *Item = Worklist.pop_back_val();
+    if (isa<Argument>(Item)) {
+      if (Visitor(*Item)) {
+        return true;
+      }
+    } else if (auto *Inst = dyn_cast<Instruction>(Item)) {
+      // Only visit the operands of unary, binary, and cast instructions. There
+      // are many other instructions that could be unwrapped here (e.g., Phi
+      // nodes, SelectInst), but they make the analysis too expensive.
+      if (Inst->isUnaryOp() || Inst->isBinaryOp() || Inst->isCast()) {
+        Worklist.append(Inst->value_op_begin(), Inst->value_op_end());
+      } else if (isa<CallInst>(Inst) || isa<LoadInst>(Inst) ||
+                 isa<AllocaInst>(Inst)) {
+        if (Visitor(*Item)) {
+          return true;
+        }
+      }
+    }
+  }
+
+  return false;
+}
+
+/// Gathers the given value and all of its operands recursively, if they are of
+/// a type that is interesting to this analysis.
+void gatherDependencies(const Value &Start,
+                        std::vector<const Value *> &Dependencies) {
+  visitDependencies(Start, [&](const Value &V) {
+    Dependencies.push_back(&V);
+    return false;
+  });
+}
+
+/// Checks if the given instruction is an offsetted load and returns the indices
+/// used to offset that load.
+std::optional<iterator_range<User::const_op_iterator>>
+tryGetIndicesIfOffsettedLoad(const Value &I) {
+  if (auto *Load = dyn_cast<LoadInst>(&I)) {
+    if (auto *GEP = dyn_cast<GetElementPtrInst>(Load->getPointerOperand())) {
+      if (GEP->hasIndices() && !GEP->hasAllConstantIndices()) {
+        return GEP->indices();
+      }
+    }
+  }
+  return std::nullopt;
+}
+
+/// Tries to get the comparison instruction if the given block is guarded by a
+/// relative integer comparison.
+std::optional<const ICmpInst *>
+tryGetComparisonIfGuarded(const BasicBlock &BB) {
+  if (auto *PredBB = BB.getSinglePredecessor()) {
+    if (auto *CondBranch = dyn_cast<BranchInst>(PredBB->getTerminator())) {
+      if (CondBranch->isConditional()) {
+        if (auto *Comparison = dyn_cast<ICmpInst>(CondBranch->getCondition())) {
+          if (Comparison->isRelational()) {
+            return Comparison;
+          }
+        }
+      }
+    }
+  }
+
+  return std::nullopt;
+}
+
+/// Does the given value use an offsetted load that requires protection?
+bool useRequiresProtection(const Value &MightUseIndex,
+                           const ICmpInst &Comparison,
+                           SmallVector<std::pair<const Value *, const Value *>,
+                                       4> &OffsettedLoadAndUses) {
+
+  SmallVector<const Value *, 4> OffsettedLoadIndexesToRemove;
+  for (auto &LoadAndUse : OffsettedLoadAndUses) {
+    if ((&MightUseIndex == LoadAndUse.second) &&
+        !is_contained(OffsettedLoadIndexesToRemove, LoadAndUse.first)) {
+      ++DownstreamInstr;
+
+      // If we've found a use of one of the offsetted loads, then we need to
+      // check if that offsetted load uses a value that is also used in the
+      // comparison.
+      std::vector<const Value *> ComparisonDependencies;
+      gatherDependencies(*Comparison.getOperand(0), ComparisonDependencies);
+      gatherDependencies(*Comparison.getOperand(1), ComparisonDependencies);
+
+      for (auto &Index : *tryGetIndicesIfOffsettedLoad(*LoadAndUse.first)) {
+        if (!isa<Constant>(&Index) &&
+            visitDependencies(*Index, [&](const Value &V) {
+              return is_contained(ComparisonDependencies, &V);
+            })) {
+          return true;
+        }
+      }
+
+      // The offsetted load doesn't use any of the values in the comparison, so
+      // remove it from the list since we never need to check it again.
+      OffsettedLoadIndexesToRemove.push_back(LoadAndUse.first);
+      ++OffsettedLoadsRemoved;
+    }
+  }
+
+  for (auto *IndexToRemove : OffsettedLoadIndexesToRemove) {
+    OffsettedLoadAndUses.erase(
+        std::remove_if(
+            OffsettedLoadAndUses.begin(), OffsettedLoadAndUses.end(),
+            [&](const auto &Pair) { return Pair.first == IndexToRemove; }),
+        OffsettedLoadAndUses.end());
+  }
+  return false;
+}
+
+bool runOnFunctionImpl(Function &F) {
+  SmallVector<BasicBlock *, 4> BlocksToProtect;
+  for (auto &BB : F) {
+    // Check for guarded loads that need to be protected.
+    if (auto Comparison = tryGetComparisonIfGuarded(BB)) {
+      ++CandidateBlocks;
+      SmallVector<std::pair<const Value *, const Value *>, 4>
+          OffsettedLoadAndUses;
+      for (auto &I : BB) {
+        if (OffsettedLoadAndUses.empty()) {
+          if (tryGetIndicesIfOffsettedLoad(I)) {
+            OffsettedLoadAndUses.emplace_back(&I, &I);
+            ++OffsettedLoads;
+          }
+        } else {
+          // Case 1: Look for an indirect call where the target is an offsetted
+          // load.
+          if (auto *Call = dyn_cast<CallInst>(&I)) {
+            if (Call->isIndirectCall() &&
+                useRequiresProtection(*Call->getCalledOperand(), **Comparison,
+                                      OffsettedLoadAndUses)) {
+              BlocksToProtect.push_back(&BB);
+              break;
+            }
+
+            // Case 2: Look for an offsetted load that is used as an index.
+          } else if (auto DependentIndexOp = tryGetIndicesIfOffsettedLoad(I)) {
+            for (auto &Op : *DependentIndexOp) {
+              if (!isa<Constant>(&Op) &&
+                  useRequiresProtection(*Op, **Comparison,
+                                        OffsettedLoadAndUses)) {
+                BlocksToProtect.push_back(&BB);
+                break;
+              }
+            }
+
+            OffsettedLoadAndUses.emplace_back(&I, &I);
+            ++OffsettedLoads;
+
+            // Otherwise, check if this value uses something from an offsetted
+            // load or one of its downstreams.
+          } else if (auto *Instr = dyn_cast<Instruction>(&I)) {
+            if (Instr->isUnaryOp() || Instr->isBinaryOp() || Instr->isCast()) {
+              for (auto &Op : Instr->operands()) {
+                // If any use of an offsetted load is used by this instruction,
+                // then add this instruction as a use of that offsetted load as
+                // well.
+                for (auto &LoadAndUse : OffsettedLoadAndUses) {
+                  if (Op.get() == LoadAndUse.second) {
+                    OffsettedLoadAndUses.emplace_back(LoadAndUse.first, Instr);
+                    break;
+                  }
+                }
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+  if (BlocksToProtect.empty()) {
+    return false;
+  }
+
+  // Add a barrier to each block that requires protection.
+  for (auto *BB : BlocksToProtect) {
+    IRBuilder<> Builder(&BB->front());
+    Builder.CreateIntrinsic(Intrinsic::speculative_data_barrier, {}, {});
+    ++NumIntrInserted;
+  }
+
+  return true;
+}
+
+char GuardedLoadHardening::ID = 0;
+INITIALIZE_PASS(GuardedLoadHardening, "GuardedLoadHardening",
+                "GuardedLoadHardening", false, false)
+
+bool GuardedLoadHardening::runOnFunction(Function &F) {
+  if (EnableGuardedLoadHardening) {
+    return runOnFunctionImpl(F);
+  }
+  return false;
+}
+
+PreservedAnalyses GuardedLoadHardeningPass::run(Function &F,
+                                                FunctionAnalysisManager &FAM) {
+  bool Changed = false;
+  if (EnableGuardedLoadHardening) {
+    Changed = runOnFunctionImpl(F);
+  }
+  return Changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
+}
+
+FunctionPass *llvm::createGuardedLoadHardeningPass() {
+  return new GuardedLoadHardening();
+}
\ No newline at end of file
diff --git a/llvm/test/CodeGen/AArch64/speculative-data-barrier.ll b/llvm/test/CodeGen/AArch64/speculative-data-barrier.ll
new file mode 100644
index 00000000000000..e34c46f70802b6
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/speculative-data-barrier.ll
@@ -0,0 +1,15 @@
+; RUN: llc -verify-machineinstrs -o - %s -mtriple=aarch64-linux-gnu | FileCheck %s
+
+; CHECK-LABEL:  f:
+; CHECK:        %bb.0:
+; CHECK-NEXT:       csdb
+; CHECK-NEXT:       ret
+define dso_local void @f() {
+  call void @llvm.speculative.data.barrier()
+  ret void
+}
+
+; Function Attrs: nocallback nofree nosync nounwind willreturn
+declare void @llvm.speculative.data.barrier() #0
+
+attributes #0 = { nocallback nofree nosync nounwind willreturn }
diff --git a/llvm/test/CodeGen/X86/speculative-data-barrier.ll b/llvm/test/CodeGen/X86/speculative-data-barrier.ll
new file mode 100644
index 00000000000000..e8d9a0a09830c7
--- /dev/null
+++ b/llvm/test/CodeGen/X86/speculative-data-barrier.ll
@@ -0,0 +1,15 @@
+; RUN: llc -verify-machineinstrs -o - %s -mtriple=x86_64-linux-gnu | FileCheck %s
+
+; CHECK-LABEL:  f:
+; CHECK:        %bb.0:
+; CHECK-NEXT:       lfence
+; CHECK-NEXT:       ret
+define dso_local void @f() {
+  call void @llvm.speculative.data.barrier()
+  ret void
+}
+
+; Function Attrs: nocallback nofree nosync nounwind willreturn
+declare void @llvm.speculative.data.barrier() #0
+
+attributes #0 = { nocallback nofree nosync nounwind willreturn }
diff --git a/llvm/test/Transforms/Util/guarded-load-hardening.ll b/llvm/test/Transforms/Util/guarded-load-hardening.ll
new file mode 100644
index 00000000000000..79db6ed0020d18
--- /dev/null
+++ b/llvm/test/Transforms/Util/guarded-load-hardening.ll
@@ -0,0 +1,245 @@
+; RUN: opt -S -passes=guarded-load-hardening -guarded-load-hardening < %s | FileCheck %s --check-prefix ON
+; RUN: opt -S...
[truncated]
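
The guarded-load-hardening.ll test is truncated above. As a separate, hand-written illustration of when the pass stays quiet (a sketch, not taken from that test): if the first load's index is unrelated to the values used in the bounds comparison, then, as far as I can tell from the matching logic in GuardedLoadHardening.cpp, no barrier is inserted, because the offsetted load does not depend on anything the comparison uses.

define i8 @unrelated_index(ptr %array1, ptr %array2, i64 %i, i64 %j, i64 %len) {
entry:
  %inbounds = icmp ult i64 %i, %len
  br i1 %inbounds, label %guarded, label %done

guarded:
  ; %j never feeds the comparison above, so the pass should leave this block alone.
  %gep1 = getelementptr inbounds i8, ptr %array1, i64 %j
  %value = load i8, ptr %gep1
  %wide = zext i8 %value to i64
  %gep2 = getelementptr inbounds i8, ptr %array2, i64 %wide
  %value2 = load i8, ptr %gep2
  br label %done

done:
  %result = phi i8 [ %value2, %guarded ], [ 0, %entry ]
  ret i8 %result
}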

@chandlerc
Member

FWIW, I do not recommend the mitigation approach you cited from MSVC given the concerns raised in the security community about its efficacy: https://www.paulkocher.com/doc/MicrosoftCompilerSpectreMitigation.html

It's worth noting that this technique and the patterns it targets were developed when Spectre was very new, and there is a large body of research since then that, I think, has expanded the security community's understanding of the full scope of these issues.

For example, there is also a systematic review of the different categories and structures of Spectre-style attacks and the defenses for them here: https://www.usenix.org/system/files/sec19-canella.pdf

If LLVM is going to take on complexity to support compiler-based Spectre mitigations, I would encourage documenting them in terms of the taxonomy in that systematic review. It would also be good to know whether any researchers in this space have evaluated this new mix of techniques; I think it is important to have research or other supporting evidence for the effectiveness of any mitigations we continue to carry in-tree.
