{-
Note [Exitification]
~~~~~~~~~~~~~~~~~~~~

This module implements Exitification. The goal is to pull as much code out of
recursive functions as possible, as the simplifier is better at inlining into
call-sites that are not in recursive functions.

Example:

  let t = foo bar
  joinrec go 0     x y = t (x*x)
          go (n-1) x y = jump go (n-1) (x+y)
  in …

We’d like to inline `t`, but that does not happen, because `t` is a thunk and
is used in a recursive function, so inlining it might lose sharing in general.
In this case, however, `t` is on the _exit path_ of `go`, so it is called at
most once. How do we make this clearly visible to the simplifier?

A code path (i.e., an expression in a tail-recursive position) in a recursive
function is an exit path if it does not contain a recursive call. We can bind
this expression outside the recursive function, as a join-point.

Example result:

  let t = foo bar
  join exit x = t (x*x)
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)
  in …

Now `t` is no longer in a recursive function, and good things happen!
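
The notion of an exit path can be sketched on a toy expression type. This is a
minimal illustration only: `Expr`, `freeVars` and `isExitPath` below are
hypothetical stand-ins, not GHC's Core types or this module's functions.

```haskell
import qualified Data.Set as Set

-- A toy expression type standing in for Core.
data Expr = Var String | Lit Int | App Expr Expr
  deriving Show

freeVars :: Expr -> Set.Set String
freeVars (Var x)   = Set.singleton x
freeVars (Lit _)   = Set.empty
freeVars (App f a) = freeVars f `Set.union` freeVars a

-- An expression in tail position is an exit path if it makes no
-- recursive call, i.e. its free variables are disjoint from the
-- set of recursively bound names.
isExitPath :: Set.Set String -> Expr -> Bool
isExitPath recBinders e = freeVars e `Set.disjoint` recBinders
```

The real pass works on annotated Core and additionally tracks the variables
captured on the way down, but the disjointness test is the same idea.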

-}

import GhcPrelude
import Var
import Id
import IdInfo
import CoreSyn
import CoreUtils
import State
import Unique
import VarSet
import VarEnv
import CoreFVs
import FastString
import Type

-- | Traverses the AST, simply to find all joinrecs and call 'exitify' on them.

  where

    in_scope_toplvl = emptyInScopeSet `extendInScopeSetList` bindersOfBinds binds

-- | Given a recursive group of a joinrec, identifies “exit paths” and binds
-- them as join-points outside the joinrec.
exitify in_scope pairs =

  where

    -- We need the set of free variables of many subexpressions here, so
    -- annotate the AST with them
    -- see Note [Calculating free variables]

    -- Which are the recursive calls?

            -- go past the lambdas of the join point
            body' <- go args body

    -- main working function. Goes through the RHS (tail-call positions only),
    -- checks whether there are no more recursive calls and, if so, abstracts
    -- over variables bound on the way and lifts it out as a join point.
    --
    -- It uses a state monad to keep track of floated binds
    go captured ann_e
        -- Do not touch an expression that is already a join jump where all
        -- arguments are captured variables. See Note [Idempotency]
        -- But _do_ float join jumps with interesting arguments.
        -- See Note [Jumps can be interesting]
        , isJoinId f

        -- Do not touch a boring expression (see Note [Interesting expression])

        -- Cannot float out if local join points are used, as
        -- we cannot abstract over them

        -- We have something to float out!

            -- Assemble the RHS of the exit join point
                ty = exprType rhs

            -- Remember this binding under a suitable name

            -- And jump to it from here
      where
        -- An exit expression has no recursive calls
        is_exit = disjointVarSet fvs recursive_calls

        -- Used to detect exit expressions that are already proper exit jumps

        -- An interesting exit expression has free, non-imported
        -- variables from outside the recursive group
        -- See Note [Interesting expression]

        -- The possible arguments of this exit join point

        -- We cannot abstract over join points

        e = deAnnotate ann_e

    -- Case right-hand sides are in tail-call position

    -- join point, RHS and body are in tail-call position
    | AnnNonRec j rhs <- ann_bind

    -- rec join point, RHSs and body are in tail-call position
    | AnnRec pairs <- ann_bind

    -- normal Let, only the body is in tail-call position
    | otherwise

    -- Picks a new unique, which is disjoint from
    --  * the free variables of the whole joinrec
    --  * any bound variables (captured)
    --  * any exit join points created so far.

        fs <- get
            `extendInScopeSet` exit_id_tmpl -- just cosmetics
      where
            `asJoinId` join_arity
            `setIdOccInfo` exit_occ_info

        -- See Note [Do not inline exit join points]
        exit_occ_info =

        -- Pick a suitable name
        v <- mkExitJoinId in_scope ty join_arity
        fs <- get
        return v

{-
Note [Interesting expression]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We do not want this to happen:

  joinrec go 0     x y = x
          go (n-1) x y = jump go (n-1) (x+y)
  in …
==>
  join exit x = x
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)
  in …

because the floated exit path (`x`) is simply a parameter of `go`; no useful
interactions are exposed this way.

Neither do we want this to happen:

  joinrec go 0     x y = x+x
          go (n-1) x y = jump go (n-1) (x+y)
  in …
==>
  join exit x = x+x
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)
  in …

where the floated expression `x+x` is a bit more complicated, but still not
interesting.

Expressions are interesting when they move an occurrence of a variable outside
the recursive `go` that can benefit from being obviously called once, for
example:
 * a local thunk that can then be inlined (see example in
   Note [Exitification])
 * the parameter of a function, where the demand analyser can then see that
   it is called at most once, and hence improve the function’s strictness
   signature

So we only hoist an exit expression out if it mentions at least one free,
non-imported variable.

Note [Jumps can be interesting]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A jump to a join point can be interesting if its arguments contain free,
non-imported variables (z in the following example):

  joinrec go 0     x y = jump j (x+z)
          go (n-1) x y = jump go (n-1) (x+y)
  in …
==>
  join exit x = jump j (x+z)
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)

The join point itself can be interesting, even if none of its arguments are
(assume `g` to be an imported function that, on its own, does not make this
interesting):

  join j y = map f y
  joinrec go 0     x y = jump j (map g x)
          go (n-1) x y = jump go (n-1) (x+y)
  in …

Here, `j` would not be inlined, because we do not inline something that looks
like an exit join point (see Note [Do not inline exit join points]).

But after exitification we have

  join j y = map f y
  join exit x = jump j (map g x)
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)
  in …

and now we can inline `j` and this will allow `map/map` to fire.

Note [Idempotency]
~~~~~~~~~~~~~~~~~~

We do not want this to happen, where we replace the floated expression with
essentially the same expression:

  join exit x = t (x*x)
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)
  in …
==>
  join exit x = t (x*x)
  join exit' x = jump exit x
  joinrec go 0     x y = jump exit' x
          go (n-1) x y = jump go (n-1) (x+y)
  in …

So when the RHS is a join jump, and all of its arguments are captured
variables, we leave it in place.

Note that `jump exit x` in this example looks interesting, as `exit` is a free
variable. Therefore, idempotency does not simply follow from floating only
interesting expressions.
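
The check can be sketched as follows. This is a minimal sketch on toy types:
`Arg` and `isCapturedJump` are hypothetical stand-ins for the argument test
the pass performs, not this module's actual definitions.

```haskell
import qualified Data.Set as Set

-- A toy argument type: a jump argument is either a variable or something
-- more complex (represented here by a literal).
data Arg = ArgVar String | ArgLit Int

-- A jump `jump f a1 … an` is left in place when every argument is a
-- variable that was captured on the way down from the joinrec; floating
-- it would only create a trivial forwarding join point.
isCapturedJump :: Set.Set String -> [Arg] -> Bool
isCapturedJump captured = all isCapturedVarArg
  where
    isCapturedVarArg (ArgVar v) = v `Set.member` captured
    isCapturedVarArg (ArgLit _) = False
```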

Note [Calculating free variables]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We have two options for where to annotate the tree with free variables:

 A) The whole tree.
 B) Each individual joinrec as we come across it.

Downside of A: We pay the price on the whole module, even outside any joinrecs.
Downside of B: We pay the price per joinrec, possibly multiple times when
joinrecs are nested.

A further downside of A: If the exitify function returns annotated expressions,
it has to ensure that the annotations are correct.
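
The bookkeeping itself can be sketched on a toy lambda calculus. This is a
minimal sketch only: `AnnExpr` and `annotate` below mimic the idea of an
annotated AST, not the actual annotated-expression types this module uses.

```haskell
import qualified Data.Set as Set

-- A toy expression type.
data Expr = Var String | Lam String Expr | App Expr Expr

-- Every node is paired with its free-variable set, computed once
-- bottom-up, so later queries are constant-time field reads.
data AnnExpr  = Ann (Set.Set String) AnnShape
data AnnShape = AVar String | ALam String AnnExpr | AApp AnnExpr AnnExpr

fvOf :: AnnExpr -> Set.Set String
fvOf (Ann fvs _) = fvs

annotate :: Expr -> AnnExpr
annotate (Var x)   = Ann (Set.singleton x) (AVar x)
annotate (Lam x b) = let b' = annotate b
                     in Ann (Set.delete x (fvOf b')) (ALam x b')
annotate (App f a) = let f' = annotate f
                         a' = annotate a
                     in Ann (fvOf f' `Set.union` fvOf a') (AApp f' a')
```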

Note [Do not inline exit join points]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When we have

  let t = foo bar
  join exit x = t (x*x)
  joinrec go 0     x y = jump exit x
          go (n-1) x y = jump go (n-1) (x+y)
  in …

we do not want the simplifier to simply inline `exit` back in (which it
happily would).

To prevent this, we need to recognize exit join points and disable inlining
for them.

Exit join points are join points with an occurrence in a recursive group, and
can be recognized using `isExitJoinId`. This function detects join points with
`occ_in_lam (idOccInfo id) == True`, because the lambdas of a non-recursive
join point are not considered for `occ_in_lam`. For example, in the following
code, `j1` is /not/ marked occ_in_lam, because `j2` is called only once:

  join j1 x = x+1
  join j2 y = jump j1 (y+2)

We create exit join point ids with such an `OccInfo`; see `exit_occ_info`.

To prevent inlining, we check for that in `preInlineUnconditionally` directly.
For `postInlineUnconditionally` and unfolding-based inlining, the function
`simplLetUnfolding` simply gives exit join points no unfolding, which prevents
this kind of inlining.

Note [Placement of the exitification pass]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I (Joachim) experimented with multiple positions for the Exitification pass in
the Core2Core pipeline:

 A) Before the `simpl_phases`
 B) Between the `simpl_phases` and the "main" simplifier pass
 C) After demand_analyser
 D) Before the final simplification phase

Here is the table (this is without inlining join exit points in the final
simplifier run):

        Program | Allocs                                            | Instrs
                | ABCD.log  A.log   B.log   C.log   D.log           | ABCD.log  A.log   B.log   C.log   D.log
----------------|---------------------------------------------------|-------------------------------------------------
 fannkuch-redux |  -99.9%   +0.0%  -99.9%  -99.9%  -99.9%           |   -3.9%   +0.5%   -3.0%   -3.9%   -3.9%
 fasta          |   -0.0%   +0.0%   +0.0%   -0.0%   -0.0%           |   -8.5%   +0.0%   +0.0%   -0.0%   -8.5%
 fem            |    0.0%    0.0%    0.0%    0.0%   +0.0%           |   -2.2%   -0.1%   -0.1%   -2.1%   -2.1%
 fish           |    0.0%    0.0%    0.0%    0.0%   +0.0%           |   -3.1%   +0.0%   -1.1%   -1.1%   -0.0%
 k-nucleotide   |  -91.3%  -91.0%  -91.0%  -91.3%  -91.3%           |   -6.3%  +11.4%  +11.4%   -6.3%   -6.2%
 scs            |   -0.0%   -0.0%   -0.0%   -0.0%   -0.0%           |   -3.4%   -3.0%   -3.1%   -3.3%   -3.3%
 simple         |   -6.0%    0.0%   -6.0%   -6.0%   +0.0%           |   -3.4%   +0.0%   -5.2%   -3.4%   -0.1%
 spectral-norm  |   -0.0%    0.0%    0.0%   -0.0%   +0.0%           |   -2.7%   +0.0%   -2.7%   -5.4%   -5.4%
----------------|---------------------------------------------------|-------------------------------------------------
 Min            |  -95.0%  -91.0%  -95.0%  -95.0%  -95.0%           |   -8.5%   -3.0%   -5.2%   -6.3%   -8.5%
 Max            |   +0.2%   +0.2%   +0.2%   +0.2%   +1.5%           |   +0.4%  +11.4%  +11.4%   +0.4%   +1.5%
 Geometric Mean |   -4.7%   -2.1%   -4.7%   -4.7%   -4.6%           |   -0.4%   +0.1%   -0.1%   -0.3%   -0.2%

Position A is disqualified because it does not get rid of the allocations in
fannkuch-redux.
Positions A and B are disqualified because they increase instructions in
k-nucleotide.
Positions C and D both have their advantages: C decreases allocations in
simple, while D decreases instructions in fasta.

Assuming we have a budget of _one_ run of Exitification, C wins (but we could
get more from running it multiple times, as seen in fish).
-}