There is no way to do what you describe, if the functions are totally independent and you don't use the result of one when you call the other. This is because there is no reason to do this. In a side-effect-free setting, calling a function and then ignoring its result is exactly the same as doing nothing for the amount of time it takes to call that function (setting aside memory usage). It is possible that seq x y will evaluate x and then y, and then give you y as its result, but this evaluation order isn't guaranteed.

Now, if we do have side effects, such as if we are working inside a Monad or Applicative, this could be useful, but we aren't truly ignoring the result, since there is context being passed implicitly:

    main = putStrLn "Hello, " >> putStrLn "world"

Another example would be the list Monad (which could be thought of as representing a nondeterministic computation):

    biggerThanTen :: Int -> Bool
    example = filter biggerThanTen >> return 'a'  -- This evaluates to "aaaaa"

Note that even here we aren't really ignoring the result. We ignore the specific values, but we use the structure of the result (in the second example, the structure would be the fact that the resulting list from filter biggerThanTen has 5 elements).

I should point out, though, that things that are sequenced in this way aren't necessarily evaluated in the order that they are written. You can sort of see this with the list Monad example. This becomes more apparent with bigger examples, though:

    example2 ::

The main takeaway here is that evaluation order (in the absence of side effects like IO, and ignoring bottoms) doesn't affect the ultimate meaning of code in Haskell (other than possible differences in efficiency, but that is another topic). As a result, there is never a reason to call two functions "one after another" in the fashion described in the question (that is, where the calls are totally independent from each other).

Do notation

Do notation is actually exactly equivalent to using >>= and >> (there is actually one other thing involved that takes care of pattern-match failures, but that is irrelevant to the discussion at hand). The compiler actually takes things written in do notation and converts them to >>= and >> through a process called "desugaring" (since it removes the syntactic sugar). Here are the three examples from above written with do notation:

First list example:

    biggerThanTen :: Int -> Bool

    example :: String  -- String is a synonym for [Char], by the way

The best way to understand do notation is to first understand >>= and return since, as I said, that's what the compiler transforms do notation into. As a side-note, >> is just the same as >>=, it just ignores the "result" of its left argument (although it preserves the "context" or "structure"). So all definitions of >> must be equivalent to m >> n = m >>= (\_ -> n).

Expanding the >>= in the second list example

To help drive home the point that Monads are not usually impure, let's expand the >>= calls in the second list example, using the Monad definition for lists.

Step 0 (what we already have):

    example2 ::

Step 1 (converting the first >>=):

    example2 =

So, there is no magic going on here, just normal function calls.

You can write a function whose arguments depend on the evaluation of another function:

    -- Adds the first two elements of a list together

In this case, you can't get the output of myFunc xs without evaluating head xs, head $ tail xs, and (+). However, the compiler can choose which order to execute head xs and head $ tail xs in, since they are not dependent on each other, but it can't do the addition without having both of the other results. It could even choose to evaluate them in parallel, or on different machines.

Another way to look at the above function is as a graph:

    myFunc

The point is that pure functions, because they have no side effects, don't have to be evaluated in a given order until their results are interdependent. In order to evaluate a node, all nodes below it have to be evaluated first, but different branches can be evaluated in parallel. First xs must be evaluated, at least partially, but after that the two branches can be evaluated in parallel. There are some nuances due to lazy evaluation, but this is essentially how the compiler constructs evaluation trees.

If you really want to force one function call before the other, you can use the seq function. It takes two arguments, forces the first to be evaluated, then returns the second, e.g.:

    myFunc2 xs = hxs + (hxs `seq` (head $ tail xs))
      where hxs = head xs
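The myFunc definition and its evaluation graph did not survive above, so here is a minimal reconstruction consistent with the description ("adds the first two elements of a list together"). The exact original definition is an assumption; the sketch also shows that, thanks to lazy evaluation, nothing past the second element is ever demanded:

```haskell
-- Reconstruction (assumed) of myFunc from its description above:
-- it adds the first two elements, so only those two are demanded.
myFunc :: [Int] -> Int
myFunc xs = head xs + (head $ tail xs)

-- myFunc2 uses seq to force head xs before the second element is returned.
-- The where-binding for hxs is an assumption; the original was garbled.
myFunc2 :: [Int] -> Int
myFunc2 xs = hxs + (hxs `seq` (head $ tail xs))
  where hxs = head xs

main :: IO ()
main = do
  print (myFunc [1, 2, 3])            -- 3
  print (myFunc (1 : 2 : undefined))  -- 3: the list past the second element is never evaluated
  print (myFunc2 [1, 2, 3])           -- 3: same value, different forcing behaviour
```

The second call is the interesting one: even with an undefined tail, myFunc succeeds, which is exactly why the compiler is free to pick its own evaluation order for pure code.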
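The two (>>) examples above can be made into a self-contained program. The body of biggerThanTen and the input list [1..15] are assumptions (only the type signature survives above); any list containing exactly five elements greater than ten yields the same "aaaaa" result:

```haskell
-- ASSUMED: the body of biggerThanTen and the list [1..15];
-- the text above shows only the type signature.
biggerThanTen :: Int -> Bool
biggerThanTen n = n > 10

-- In the list Monad, (>>) discards each element's value but keeps the
-- structure: one 'a' per element that survives the filter.
example :: String
example = filter biggerThanTen [1..15] >> return 'a'

main :: IO ()
main = do
  putStrLn example                        -- prints: aaaaa
  putStrLn "Hello, " >> putStrLn "world"  -- IO's (>>): run the effects in order
```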
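The code for the "Step 0 / Step 1" expansion was lost above, but the idea can be sketched: for lists, m >>= f is concatMap f m, so the do-notation, (>>=), and fully expanded forms are all the same ordinary function application. The list [1..15] and biggerThanTen's body are again assumptions:

```haskell
biggerThanTen :: Int -> Bool
biggerThanTen n = n > 10  -- assumed body

-- Step 0: do notation (what the examples above are written in)
sugared :: String
sugared = do
  _ <- filter biggerThanTen [1..15]
  return 'a'

-- Step 1: desugared, using m >> n = m >>= \_ -> n
desugared :: String
desugared = filter biggerThanTen [1..15] >>= \_ -> return 'a'

-- Step 2: (>>=) for lists is flip concatMap -- just a normal function call
expanded :: String
expanded = concatMap (\_ -> ['a']) (filter biggerThanTen [1..15])

main :: IO ()
main = print (sugared == desugared && desugared == expanded)  -- True
```

All three evaluate to "aaaaa", with no side effects anywhere: the "sequencing" is pure function calls.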