Programs are structured as a sequence of instructions.
Uses destructive updates for computations.
Remember — All programs transform data into something else.
Functional Programming:
Programs are structured as function compositions.
Destructive updates aren’t allowed.
Everything is built from expressions.
Functions
Example — You can’t write an assignment statement like x = 1 + y that updates x in place; instead you express the computation with functions (see the sketch below).
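A minimal SML sketch of that idea (the names addOne, y, and x are hypothetical): instead of an assignment that updates a variable, you define a function and bind its result to a new name.

    (* No destructive update: addOne returns a new value, y is never modified. *)
    fun addOne (y : int) : int = 1 + y

    val y = 41
    val x = addOne y   (* x = 42; a fresh binding, not a mutation *)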
Definition — Mathematical Function: A special type of relation.
Relation: A subset of the cross product of two sets.
Related Notes — CS1300: Relations
Properties of Relations:
If you cross a set with itself (e.g., take a subset of int cross int), you can define special properties of the relation:
Reflexive: every A relates to itself (A \to A).
Symmetric: if A \to B, then B \to A.
Transitive: if A \to B and B \to C, then A \to C.
Antisymmetric: a relation is antisymmetric when A \to B and B \to A together imply A = B (e.g., \le is antisymmetric).
Mappings:
Injective (One-to-One): Distinct elements of domain map to distinct elements of codomain.
Surjective (Onto): Every element in codomain has a preimage.
Bijective: Both injective and surjective. Bijections have inverses, and the inverses are also functions.
Functions are the relations in which every element of the domain maps to exactly one element of the codomain. They can be one-to-one or many-to-one, but never one-to-many (see the sketch below).
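A small SML sketch of the mapping kinds above (the names double and square are hypothetical, with int standing in for both domain and codomain):

    (* double is injective (one-to-one): distinct inputs give distinct outputs.
       square is many-to-one: square (~2) = square 2 = 4, so it is not injective.
       Neither is surjective onto int (e.g., nothing maps to 3 under double). *)
    fun double (x : int) : int = 2 * x
    fun square (x : int) : int = x * x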
Functional Equations
f(x) = x + 1 is only a function if you also provide a domain and codomain:
f : Z \to Z
Z: the set of all integers.
Applying the function doesn’t change the input variable; it transforms the input into a new value.
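A minimal SML rendering of this example, with int standing in for Z (SML’s int is bounded, unlike the full set of integers): the type annotation plays the role of the domain and codomain.

    (* f : int -> int, the SML analogue of f : Z -> Z. Applying f never mutates its argument. *)
    fun f (x : int) : int = x + 1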
Functional languages don’t do destructive updates.
A program takes input and transforms it without changing the original state; we call functions on functions on functions.
A program, rather than being a list of instructions, now just looks like a nested composition of functions, e.g., f(g(h(input))).
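A minimal sketch of a whole "program" as a composition in SML (parse, transform, and render are hypothetical stages; o is SML's built-in composition operator):

    (* Each stage is a pure function; the program is just their composition. *)
    fun parse (s : string) : int = valOf (Int.fromString s)
    fun transform (n : int) : int = n * 2
    fun render (n : int) : string = Int.toString n

    val program = render o transform o parse
    val result  = program "21"   (* "42"; no state is modified anywhere *)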
Parallelism:
Mechanism: Always requires separate hardware (e.g., multi-core processors or work distributed across multiple machines).
Goal: Throughput and performance.
Example: Breaking a huge dataset into 100 chunks, feeding them to 100 separate computers, and recombining the results to cut the wall-clock time (see the sketch below).
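A sequential SML sketch of the split / process / recombine pattern (chunk, processChunk, and run are hypothetical helpers; in a real system each chunk would be handed to a separate core or machine instead of being mapped in a plain loop):

    (* Split a list into pieces of at most k elements. *)
    fun chunk (k : int) (xs : 'a list) : 'a list list =
      if null xs then []
      else
        let val n = Int.min (k, length xs)
        in  List.take (xs, n) :: chunk k (List.drop (xs, n))
        end

    (* The per-chunk work: here, just summing the chunk. *)
    fun processChunk (xs : int list) : int = foldl (op +) 0 xs

    (* Recombine: fold the partial results back together. *)
    fun run (data : int list) : int =
      foldl (op +) 0 (map processChunk (chunk 100 data))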
Clarification: Taxonomy of Parallelism
SIMD (Single Instruction, Multiple Data): One operation applied to many data points (like GPU processing).
MIMD (Multiple Instruction, Multiple Data): Independent processors doing different things on different data.
Shared Memory vs. Distributed Memory: Whether all processors access one RAM stick (easy but race-prone) or have their own private RAM (requires message passing).
The Problem: Shared Memory
Standard programming uses Shared State (e.g., Producer-Consumer models reading/writing the same variable).
Risk: Race conditions are common and nightmarishly hard to debug (sketched below).
Hardware Shift: The modified Moore’s Law implies we can’t just get faster clocks; we must go multi-core/distributed. Imperative languages struggle here because “Java threads” (originally time-sliced “green” threads on a single core) are now truly parallel threads colliding over shared memory.
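A conceptual SML sketch of the shared-state risk (counter and bump are hypothetical; SML’s ref cells give mutable state, and the comments describe what would happen if two real threads interleaved here):

    val counter : int ref = ref 0

    fun bump () =
      let val seen = !counter     (* 1. read the shared value      *)
      in  counter := seen + 1     (* 2. write it back, incremented *)
      end                         (* a second thread running between steps 1 and 2
                                     would overwrite (lose) one increment *)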
The Solution: Message Passing
In functional languages, multithreading is usually done via Message Passing.
Why? Since you cannot modify state (immutability), threads cannot fight over who updates a variable. They simply send values to each other.
Result: Functional languages are naturally safer for concurrency.
SML Specifics:
SML has a specific dialect called Concurrent ML (CML) for this.
In CML, instead of locking a variable x, you use a “Channel”: one thread sends a value into the channel, and another thread receives it. It looks very similar to Go’s goroutines with channels or Erlang’s actors (see the sketch below).
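A minimal CML sketch of that channel pattern (API names from SML/NJ’s Concurrent ML as best I recall them, so treat the exact signatures as assumptions):

    structure Example =
    struct
      fun main () =
        let
          val ch : int CML.chan = CML.channel ()
          val _  = CML.spawn (fn () => CML.send (ch, 42))   (* producer thread *)
          val v  = CML.recv ch                              (* consumer blocks until a value arrives *)
        in
          TextIO.print (Int.toString v ^ "\n")
        end
    end

    (* Typically launched with something like RunCML.doit (Example.main, NONE);
       no variable is ever locked or mutated along the way. *)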
Modern Examples:
Scala: An OOP language that is heavily functional (compiles to Java bytecode) and popular for this exact reason.