File gcc7-aarch64-sls-miti-3.patch of Package cross-m68k-gcc7
Backport of below commit for bsc#1172798

commit 2155170525f93093b90a1a065e7ed71a925566e9
Author: Matthew Malcomson <matthew.malcomson@arm.com>
Date:   Thu Jul 9 09:11:59 2020 +0100

aarch64: Mitigate SLS for BLR instruction

This patch introduces the mitigation for Straight Line Speculation past
the BLR instruction.

This mitigation replaces BLR instructions with a BL to a stub which uses
a BR to jump to the original value. These function stubs are then
appended with a speculation barrier to ensure no straight line
speculation happens after these jumps.

When optimising for speed we use a set of stubs for each function since
this should help the branch predictor make more accurate predictions
about where a stub should branch.
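
(For illustration -- the label below is invented, since the compiler
numbers these function-local labels automatically -- a
	BLR x1
in such a function becomes
	BL .L_sls_stub_x1
and the matching stub is emitted at the end of that function.)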

When optimising for size we use one set of stubs for all functions.
This set of stubs can have human readable names, and we are using
`__call_indirect_x<N>` for register x<N>.

When BTI branch protection is enabled the BLR instruction can jump to a
`BTI c` instruction using any register, while the BR instruction can
only jump to a `BTI c` instruction using the x16 or x17 registers.
Hence, in order to ensure this transformation is safe we mov the value
of the original register into x16 and use x16 for the BR.

As an example when optimising for size:
a
	BLR x0
instruction would get transformed to something like
	BL __call_indirect_x0
where __call_indirect_x0 labels a thunk that contains
__call_indirect_x0:
	MOV X16, X0
	BR X16
	<speculation barrier>

The first version of this patch used local symbols specific to a
compilation unit to try and avoid relocations.
This was mistaken since functions coming from the same compilation unit
can still be in different sections, and the assembler will insert
relocations at jumps between sections.

On any relocation the linker is permitted to emit a veneer to handle
jumps between symbols that are very far apart. The registers x16 and
x17 may be clobbered by these veneers.
Hence the function stubs cannot rely on the values of x16 and x17 being
the same as just before the function stub is called.

The same can be said for the hot/cold partitioning of single functions,
so function-local stubs have the same restriction.

This updated version of the patch never emits function stubs for x16 and
x17, and instead forces other registers to be used.

Given the above, there is now no benefit to local symbols (since they
are not enough to avoid dealing with linker intricacies). This patch
now uses global symbols with hidden visibility, each stored in its own
COMDAT section. This means stubs can be shared between compilation
units while still avoiding the PLT indirection.
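
(Roughly, and only as a sketch -- the exact directives depend on the
assembler and object format -- a shared stub then looks like:
	.section .text.__call_indirect_x0,"axG",%progbits,__call_indirect_x0,comdat
	.hidden __call_indirect_x0
	.global __call_indirect_x0
__call_indirect_x0:
	MOV X16, X0
	BR X16
	DSB SY
	ISB
so the linker keeps a single copy when several objects emit it.)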

This patch also removes the `__call_indirect_x30` stub (and
function-local equivalent) which would simply jump back to the original
location.

The function-local stubs are emitted to the assembly output file in one
chunk, which means we need not add the speculation barrier directly
after each one.
This is because we know for certain that the instructions directly after
the BR in all but the last function stub will be from another one of
these stubs and hence will not contain a speculation gadget.
Instead we add a speculation barrier at the end of the sequence of
stubs.
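
(Sketch, with invented label names: a function needing stubs for x0 and
x1 would emit
.L_sls_stub_x0:
	MOV X16, X0
	BR X16
.L_sls_stub_x1:
	MOV X16, X1
	BR X16
	<speculation barrier>
where the single trailing barrier suffices because speculating past one
BR only lands in the next stub.)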

The global stubs are emitted in COMDAT/.linkonce sections by
themselves so that the linker can remove duplicates from multiple object
files. This means they are not emitted in one chunk, and each one must
include the speculation barrier.

Another difference is that since the global stubs are shared across
compilation units we do not know that all functions will be targeting an
architecture supporting the SB instruction.
Rather than provide multiple stubs for each architecture, we provide a
stub that will work for all architectures -- using the DSB+ISB barrier.
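
(Concretely: a function-local stub on a target known to support the SB
extension may end in a lone
	SB
while the shared stubs always end in
	DSB SY
	ISB
which acts as a speculation barrier on all Armv8-A implementations.)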

This mitigation does not apply for BLR instructions in the following
places:
- Some accesses to thread-local variables use a code sequence with a BLR
  instruction. This code sequence is part of the binary interface between
  compiler and linker. If this BLR instruction needs to be mitigated, it'd
  probably be best to do so in the linker. It seems that the code sequence
  for thread-local variable access is unlikely to lead to a Spectre
  Revelation Gadget.
- PLT stubs are produced by the linker and each contain a BLR instruction.
  It seems that a Spectre Revelation Gadget might appear, at most, only
  after the last PLT stub.

Testing:
Bootstrap and regtest on AArch64
(with BOOT_CFLAGS="-mharden-sls=retbr,blr")
Used a temporary hack(1) in gcc-dg.exp to use these options on every
test in the testsuite, a slight modification to emit the speculation
barrier after every function stub, and a script to check that the
output never emitted a BLR, or unmitigated BR or RET instruction.
Similar on an aarch64-none-elf cross-compiler.

1) The temporary hack emitted a speculation barrier at the end of every
stub function, and used a script to ensure that:
  a) Every RET or BR is immediately followed by a speculation barrier.
  b) No BLR instruction is emitted by the compiler.

(cherry picked from 96b7f495f9269d5448822e4fc28882edb35a58d7)

gcc/ChangeLog:

	* config/aarch64/aarch64-protos.h (aarch64_indirect_call_asm):
	New declaration.
	* config/aarch64/aarch64.c (aarch64_regno_regclass): Handle new
	stub registers class.
	(aarch64_class_max_nregs): Likewise.
	(aarch64_register_move_cost): Likewise.
	(aarch64_sls_shared_thunks): Global array to store stub labels.
	(aarch64_sls_emit_function_stub): New.
	(aarch64_sls_create_blr_label): New.
	(aarch64_sls_emit_blr_function_thunks): New.
	(aarch64_sls_emit_shared_blr_thunks): New.
	(aarch64_asm_file_end): New.
	(aarch64_indirect_call_asm): New.
	(TARGET_ASM_FILE_END): Use aarch64_asm_file_end.
	(TARGET_ASM_FUNCTION_EPILOGUE): Use
	aarch64_sls_emit_blr_function_thunks.
	* config/aarch64/aarch64.h (STUB_REGNUM_P): New.
	(enum reg_class): Add STUB_REGS class.
	(machine_function): Introduce `call_via` array for
	function-local stub labels.
	* config/aarch64/aarch64.md (*call_reg, *call_value_reg): Use
	aarch64_indirect_call_asm to emit code when hardening BLR
	instructions.
	* config/aarch64/constraints.md (Ucr): New constraint
	representing registers for indirect calls.  Is GENERAL_REGS
	usually, and STUB_REGS when hardening BLR instruction against
	SLS.
	* config/aarch64/predicates.md (aarch64_general_reg): STUB_REGS class
	is also a general register.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c: New test.
	* gcc.target/aarch64/sls-mitigation/sls-miti-blr.c: New test.

Index: gcc-7.5.0+r278197/gcc/config/aarch64/aarch64-protos.h
===================================================================
--- gcc-7.5.0+r278197.orig/gcc/config/aarch64/aarch64-protos.h
+++ gcc-7.5.0+r278197/gcc/config/aarch64/aarch64-protos.h

 extern const atomic_ool_names aarch64_ool_ldeor_names;

 const char *aarch64_sls_barrier (int);
+const char *aarch64_indirect_call_asm (rtx);
 extern bool aarch64_harden_sls_retbr_p (void);
 extern bool aarch64_harden_sls_blr_p (void);

Index: gcc-7.5.0+r278197/gcc/config/aarch64/aarch64.c
===================================================================
--- gcc-7.5.0+r278197.orig/gcc/config/aarch64/aarch64.c
+++ gcc-7.5.0+r278197/gcc/config/aarch64/aarch64.c

 enum reg_class
 aarch64_regno_regclass (unsigned regno)
 {
+  if (STUB_REGNUM_P (regno))
+    return STUB_REGS;
+
   if (GP_REGNUM_P (regno))
     return GENERAL_REGS;

 {
   switch (regclass)
     {
+    case STUB_REGS:
     case TAILCALL_ADDR_REGS:
     case POINTER_REGS:
     case GENERAL_REGS:

     = aarch64_tune_params.regmove_cost;

   /* Caller save and pointer regs are equivalent to GENERAL_REGS.  */
-  if (to == TAILCALL_ADDR_REGS || to == POINTER_REGS)
+  if (to == TAILCALL_ADDR_REGS || to == POINTER_REGS
+      || to == STUB_REGS)
     to = GENERAL_REGS;

-  if (from == TAILCALL_ADDR_REGS || from == POINTER_REGS)
+  if (from == TAILCALL_ADDR_REGS || from == POINTER_REGS
+      || from == STUB_REGS)
     from = GENERAL_REGS;

   /* Moving between GPR and stack cost is the same as GP2GP.  */

     : "";
 }

+static GTY (()) tree aarch64_sls_shared_thunks[30];
+static GTY (()) bool aarch64_sls_shared_thunks_needed = false;
+const char *indirect_symbol_names[30] = {
+    "__call_indirect_x0",
+    "__call_indirect_x1",
+    "__call_indirect_x2",
+    "__call_indirect_x3",
+    "__call_indirect_x4",
+    "__call_indirect_x5",
+    "__call_indirect_x6",
+    "__call_indirect_x7",
+    "__call_indirect_x8",
+    "__call_indirect_x9",
+    "__call_indirect_x10",
+    "__call_indirect_x11",
+    "__call_indirect_x12",
+    "__call_indirect_x13",
+    "__call_indirect_x14",
+    "__call_indirect_x15",
+    "", /* "__call_indirect_x16", */
+    "", /* "__call_indirect_x17", */
+    "__call_indirect_x18",
+    "__call_indirect_x19",
+    "__call_indirect_x20",
+    "__call_indirect_x21",
+    "__call_indirect_x22",
+    "__call_indirect_x23",
+    "__call_indirect_x24",
+    "__call_indirect_x25",
+    "__call_indirect_x26",
+    "__call_indirect_x27",
+    "__call_indirect_x28",
+    "__call_indirect_x29",
+};
+
+/* Function to create a BLR thunk.  This thunk is used to mitigate straight
+   line speculation.  Instead of a simple BLR that can be speculated past,
+   we emit a BL to this thunk, and this thunk contains a BR to the relevant
+   register.  These thunks have the relevant speculation barriers put after
+   their indirect branch so that speculation is blocked.
+
+   We use such a thunk so the speculation barriers are kept off the
+   architecturally executed path in order to reduce the performance overhead.
+
+   When optimizing for size we use stubs shared by the linked object.
+   When optimizing for performance we emit stubs for each function in the hope
+   that the branch predictor can better train on jumps specific for a given
+   function.  */
+rtx
+aarch64_sls_create_blr_label (int regnum)
+{
+  gcc_assert (STUB_REGNUM_P (regnum));
+  if (optimize_function_for_size_p (cfun))
+    {
+      /* For the thunks shared between different functions in this compilation
+         unit we use a named symbol -- this is just for users to more easily
+         understand the generated assembly.  */
+      aarch64_sls_shared_thunks_needed = true;
+      const char *thunk_name = indirect_symbol_names[regnum];
+      if (aarch64_sls_shared_thunks[regnum] == NULL)
+        {
+          /* Build a decl representing this function stub and record it for
+             later.  We build a decl here so we can use the GCC machinery for
+             handling sections automatically (through `get_named_section` and
+             `make_decl_one_only`).  That saves us a lot of trouble handling
+             the specifics of different output file formats.  */
+          tree decl = build_decl (BUILTINS_LOCATION, FUNCTION_DECL,
+                                  get_identifier (thunk_name),
+                                  build_function_type_list (void_type_node,
+                                                            NULL_TREE));
+          DECL_RESULT (decl) = build_decl (BUILTINS_LOCATION, RESULT_DECL,
+                                           NULL_TREE, void_type_node);
+          TREE_PUBLIC (decl) = 1;
+          TREE_STATIC (decl) = 1;
+          DECL_IGNORED_P (decl) = 1;
+          DECL_ARTIFICIAL (decl) = 1;
+          make_decl_one_only (decl, DECL_ASSEMBLER_NAME (decl));
+          resolve_unique_section (decl, 0, false);
+          aarch64_sls_shared_thunks[regnum] = decl;
+        }
+
+      return gen_rtx_SYMBOL_REF (Pmode, thunk_name);
+    }
+
+  if (cfun->machine->call_via[regnum] == NULL)
+    cfun->machine->call_via[regnum]
+      = gen_rtx_LABEL_REF (Pmode, gen_label_rtx ());
+  return cfun->machine->call_via[regnum];
+}
+
+/* Helper function for aarch64_sls_emit_blr_function_thunks and
+   aarch64_sls_emit_shared_blr_thunks below.  */
+static void
+aarch64_sls_emit_function_stub (FILE *out_file, int regnum)
+{
+  /* Save in x16 and branch to that function so this transformation does
+     not prevent jumping to `BTI c` instructions.  */
+  asm_fprintf (out_file, "\tmov\tx16, x%d\n", regnum);
+  asm_fprintf (out_file, "\tbr\tx16\n");
+}
+
+/* Emit all BLR stubs for this particular function.
+   Here we emit all the BLR stubs needed for the current function.  Since we
+   emit these stubs in a consecutive block we know there will be no speculation
+   gadgets between each stub, and hence we only emit a speculation barrier at
+   the end of the stub sequences.
+
+   This is called in the TARGET_ASM_FUNCTION_EPILOGUE hook.  */
+void
+aarch64_sls_emit_blr_function_thunks (FILE *out_file, HOST_WIDE_INT)
+{
+  if (! aarch64_harden_sls_blr_p ())
+    return;
+
+  bool any_functions_emitted = false;
+  /* We must save and restore the current function section since this assembly
+     is emitted at the end of the function.  This means it can be emitted *just
+     after* the cold section of a function.  That cold part would be emitted in
+     a different section.  That switch would trigger a `.cfi_endproc` directive
+     to be emitted in the original section and a `.cfi_startproc` directive to
+     be emitted in the new section.  Switching to the original section without
+     restoring would mean that the `.cfi_endproc` emitted as a function ends
+     would happen in a different section -- leaving an unmatched
+     `.cfi_startproc` in the cold text section and an unmatched `.cfi_endproc`
+     in the standard text section.  */
+  section *save_text_section = in_section;
+  switch_to_section (function_section (current_function_decl));
+  for (int regnum = 0; regnum < 30; ++regnum)
+    {
+      rtx specu_label = cfun->machine->call_via[regnum];
+      if (specu_label == NULL)
+        continue;
+
+      targetm.asm_out.print_operand (out_file, specu_label, 0);
+      asm_fprintf (out_file, ":\n");
+      aarch64_sls_emit_function_stub (out_file, regnum);
+      any_functions_emitted = true;
+    }
+  if (any_functions_emitted)
+    /* Can use the SB if needs be here, since this stub will only be used
+       by the current function, and hence for the current target.  */
+    asm_fprintf (out_file, "\t%s\n", aarch64_sls_barrier (true));
+  switch_to_section (save_text_section);
+}
+
+/* Emit shared BLR stubs for the current compilation unit.
+   Over the course of compiling this unit we may have converted some BLR
+   instructions to a BL to a shared stub function.  This is where we emit those
+   stub functions.
+   This function is for the stubs shared between different functions in this
+   compilation unit.  We share when optimizing for size instead of speed.
+
+   This function is called through the TARGET_ASM_FILE_END hook.  */
+void
+aarch64_sls_emit_shared_blr_thunks (FILE *out_file)
+{
+  if (! aarch64_sls_shared_thunks_needed)
+    return;
+
+  for (int regnum = 0; regnum < 30; ++regnum)
+    {
+      tree decl = aarch64_sls_shared_thunks[regnum];
+      if (!decl)
+        continue;
+
+      const char *name = indirect_symbol_names[regnum];
+      switch_to_section (get_named_section (decl, NULL, 0));
+      ASM_OUTPUT_ALIGN (out_file, 2);
+      targetm.asm_out.globalize_label (out_file, name);
+      /* Only emits if the compiler is configured for an assembler that can
+         handle visibility directives.  */
+      targetm.asm_out.assemble_visibility (decl, VISIBILITY_HIDDEN);
+      ASM_OUTPUT_TYPE_DIRECTIVE (out_file, name, "function");
+      ASM_OUTPUT_LABEL (out_file, name);
+      aarch64_sls_emit_function_stub (out_file, regnum);
+      /* Use the most conservative target to ensure it can always be used by any
+         function in the translation unit.  */
+      asm_fprintf (out_file, "\tdsb\tsy\n\tisb\n");
+      ASM_DECLARE_FUNCTION_SIZE (out_file, name, decl);
+    }
+}
+
+/* Implement TARGET_ASM_FILE_END.  */
+void
+aarch64_asm_file_end ()
+{
+  aarch64_sls_emit_shared_blr_thunks (asm_out_file);
+  /* Since this function will be called for the ASM_FILE_END hook, we ensure
+     that what would be called otherwise (e.g. `file_end_indicate_exec_stack`
+     for FreeBSD) still gets called.  */
+#ifdef TARGET_ASM_FILE_END
+  TARGET_ASM_FILE_END ();
+#endif
+}
+
+const char *
+aarch64_indirect_call_asm (rtx addr)
+{
+  gcc_assert (REG_P (addr));
+  if (aarch64_harden_sls_blr_p ())
+    {
+      rtx stub_label = aarch64_sls_create_blr_label (REGNO (addr));
+      output_asm_insn ("bl\t%0", &stub_label);
+    }
+  else
+    output_asm_insn ("blr\t%0", &addr);
+  return "";
+}
+
 /* Target-specific selftests.  */

 #if CHECKING_P

 #define TARGET_RUN_TARGET_SELFTESTS selftest::aarch64_run_selftests
 #endif /* #if CHECKING_P */

+#undef TARGET_ASM_FILE_END
+#define TARGET_ASM_FILE_END aarch64_asm_file_end
+
+#undef TARGET_ASM_FUNCTION_EPILOGUE
+#define TARGET_ASM_FUNCTION_EPILOGUE aarch64_sls_emit_blr_function_thunks
+
 struct gcc_target targetm = TARGET_INITIALIZER;

 #include "gt-aarch64.h"
Index: gcc-7.5.0+r278197/gcc/config/aarch64/aarch64.h
===================================================================
--- gcc-7.5.0+r278197.orig/gcc/config/aarch64/aarch64.h
+++ gcc-7.5.0+r278197/gcc/config/aarch64/aarch64.h

 #define GP_REGNUM_P(REGNO) \
   (((unsigned) (REGNO - R0_REGNUM)) <= (R30_REGNUM - R0_REGNUM))

+/* Registers known to be preserved over a BL instruction.  This consists of the
+   GENERAL_REGS without x16, x17, and x30.  The x30 register is changed by the
+   BL instruction itself, while the x16 and x17 registers may be used by
+   veneers which can be inserted by the linker.  */
+#define STUB_REGNUM_P(REGNO) \
+  (GP_REGNUM_P (REGNO) \
+   && (REGNO) != R16_REGNUM \
+   && (REGNO) != R17_REGNUM \
+   && (REGNO) != R30_REGNUM) \
+
 #define FP_REGNUM_P(REGNO) \
   (((unsigned) (REGNO - V0_REGNUM)) <= (V31_REGNUM - V0_REGNUM))

 {
   NO_REGS,
   TAILCALL_ADDR_REGS,
+  STUB_REGS,
   GENERAL_REGS,
   STACK_REG,
   POINTER_REGS,

 { \
   "NO_REGS", \
   "TAILCALL_ADDR_REGS", \
+  "STUB_REGS", \
   "GENERAL_REGS", \
   "STACK_REG", \
   "POINTER_REGS", \

 { \
   { 0x00000000, 0x00000000, 0x00000000 }, /* NO_REGS */ \
   { 0x0004ffff, 0x00000000, 0x00000000 }, /* TAILCALL_ADDR_REGS */ \
+  { 0x3ffcffff, 0x00000000, 0x00000000 }, /* STUB_REGS */ \
   { 0x7fffffff, 0x00000000, 0x00000003 }, /* GENERAL_REGS */ \
   { 0x80000000, 0x00000000, 0x00000000 }, /* STACK_REG */ \
   { 0xffffffff, 0x00000000, 0x00000003 }, /* POINTER_REGS */ \

   struct aarch64_frame frame;
   /* One entry for each hard register.  */
   bool reg_is_wrapped_separately[LAST_SAVED_REGNUM];
+  /* One entry for each general purpose register.  */
+  rtx call_via[SP_REGNUM];
 } machine_function;
 #endif

Index: gcc-7.5.0+r278197/gcc/config/aarch64/aarch64.md
===================================================================
--- gcc-7.5.0+r278197.orig/gcc/config/aarch64/aarch64.md
+++ gcc-7.5.0+r278197/gcc/config/aarch64/aarch64.md

 )

 (define_insn "*call_reg"
-  [(call (mem:DI (match_operand:DI 0 "register_operand" "r"))
+  [(call (mem:DI (match_operand:DI 0 "register_operand" "Ucr"))
          (match_operand 1 "" ""))
    (use (match_operand 2 "" ""))
    (clobber (reg:DI LR_REGNUM))]
   ""
-  "blr\\t%0"
+  "* return aarch64_indirect_call_asm (operands[0]);"
   [(set_attr "type" "call")]
 )

 (define_insn "*call_value_reg"
   [(set (match_operand 0 "" "")
-        (call (mem:DI (match_operand:DI 1 "register_operand" "r"))
+        (call (mem:DI (match_operand:DI 1 "register_operand" "Ucr"))
               (match_operand 2 "" "")))
    (use (match_operand 3 "" ""))
    (clobber (reg:DI LR_REGNUM))]
   ""
-  "blr\\t%1"
+  "* return aarch64_indirect_call_asm (operands[1]);"
   [(set_attr "type" "call")]
 )
Index: gcc-7.5.0+r278197/gcc/config/aarch64/constraints.md
===================================================================
--- gcc-7.5.0+r278197.orig/gcc/config/aarch64/constraints.md
+++ gcc-7.5.0+r278197/gcc/config/aarch64/constraints.md

 (define_register_constraint "Ucs" "TAILCALL_ADDR_REGS"
   "@internal Registers suitable for an indirect tail call")

+(define_register_constraint "Ucr"
+  "aarch64_harden_sls_blr_p () ? STUB_REGS : GENERAL_REGS"
+  "@internal Registers to be used for an indirect call.
+   This is usually the general registers, but when we are hardening against
+   Straight Line Speculation we disallow x16, x17, and x30 so we can use
+   indirection stubs.  These indirection stubs cannot use the above registers
+   since they will be reached by a BL that may have to go through a linker
+   veneer.")
+
 (define_register_constraint "w" "FP_REGS"
   "Floating point and SIMD vector registers.")

Index: gcc-7.5.0+r278197/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
===================================================================
--- /dev/null
+++ gcc-7.5.0+r278197/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c

+/* { dg-additional-options "-mharden-sls=blr -save-temps" } */
+/* Ensure that the SLS hardening of BLR leaves no BLR instructions.
+   We only test that all BLR instructions have been removed, not that the
+   resulting code makes sense.  */
+typedef int (foo) (int, int);
+typedef void (bar) (int, int);
+struct sls_testclass {
+  foo *x;
+  bar *y;
+  int left;
+  int right;
+};
+
+/* We test both RTL patterns for a call which returns a value and a call which
+   does not.  */
+int blr_call_value (struct sls_testclass x)
+{
+  int retval = x.x(x.left, x.right);
+  if (retval % 10)
+    return 100;
+  return 9;
+}
+
+int blr_call (struct sls_testclass x)
+{
+  x.y(x.left, x.right);
+  if (x.left % 10)
+    return 100;
+  return 9;
+}
+
+/* { dg-final { scan-assembler-not {\tblr\t} } } */
+/* { dg-final { scan-assembler {\tbr\tx[0-9][0-9]?} } } */