Here is a neat Python pattern I have observed recently:
```python
import re
from functools import partial, reduce

def enumify(s: str) -> str:
    return reduce(
        lambda x, f: f(x),
        [
            partial(re.sub, r"[\(\)\.\,\!]", ""),
            partial(re.sub, r"[\s\-\/]", "_"),
            str.upper,
        ],
        s,
    )
```
In this example, the function takes some string and transforms it into the expected format for an enum name:
```python
>>> enumify("Classification, 1")
'CLASSIFICATION_1'
>>> enumify("classification (other)")
'CLASSIFICATION_OTHER'
```
Python's `reduce` is used to create a "pipeline" of string transformations. The first argument is the lambda: a reducing function that takes the accumulating value and a function, and applies that function to the value. The second argument to `reduce` must be an iterable; here we pass the sequence of functions to apply to the accumulating value in succession. Note the use of `partial` with `re.sub` to turn it into a unary string-transforming function.
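To see why `partial` is needed here: `re.sub` normally takes three arguments (pattern, replacement, string), but each step of the pipeline must accept exactly one argument. Pre-binding the first two arguments leaves a unary function. A minimal sketch (the name `strip_punct` is just for illustration):

```python
import re
from functools import partial

# partial pre-binds re.sub's pattern and replacement arguments,
# leaving a unary function that maps a string to a string.
strip_punct = partial(re.sub, r"[\(\)\.\,\!]", "")

print(strip_punct("hello, world!"))  # hello world
```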
Lastly, `reduce` optionally accepts an initial accumulator value; here it is the input to our "pipeline". Note that an initial value of the type expected by the sequence of input functions is required for this pattern.
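To illustrate the role of the initializer: with one supplied, the accumulator starts as that value and each step applies the next function from the iterable. Without it, `reduce` would take the first *function* in the list as the initial accumulator, which makes no sense here. A small sketch with two standard string methods:

```python
from functools import reduce

# The accumulator starts as "  HELLO  "; each step applies the
# next function from the list to the running value.
result = reduce(lambda x, f: f(x), [str.strip, str.lower], "  HELLO  ")

print(result)  # hello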
We could generalize the pattern into its own function. It may look something like this:
```python
def pipeline(input, *fs):
    return reduce(lambda x, f: f(x), fs, input)

def enumify(s: str) -> str:
    return pipeline(
        s,
        partial(re.sub, r"[\(\)\.\,\!]", ""),
        partial(re.sub, r"[\s\-\/]", "_"),
        str.upper,
    )
```
This may make `enumify` easier to reason about, and it works exactly as before.
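The generalized helper isn't limited to enum names; any sequence of unary transformations fits. As a hypothetical example, a crude slugifier (the patterns here are assumptions, not from the original):

```python
import re
from functools import partial, reduce

def pipeline(input, *fs):
    return reduce(lambda x, f: f(x), fs, input)

# Reusing pipeline for a different string task: slugifying a title.
slug = pipeline(
    "Hello, World!",
    partial(re.sub, r"[^\w\s-]", ""),  # drop punctuation
    str.strip,
    str.lower,
    partial(re.sub, r"\s+", "-"),      # spaces to hyphens
)

print(slug)  # hello-world
```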
This pattern is inspired by Clojure's threading macros (`->` and `->>`).