firing squad
This commit is contained in:
parent
967eb93aa8
commit
0a2abe01d8
@@ -1,4 +0,0 @@
# Used by "mix format"
[
  inputs: ["{mix,.formatter}.exs", "{config,lib,test}/**/*.{ex,exs}"]
]
21 README.md
@@ -1,21 +0,0 @@
# Til

**TODO: Add description**

## Installation

If [available in Hex](https://hex.pm/docs/publish), the package can be installed
by adding `pl` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:pl, "~> 0.1.0"}
  ]
end
```

Documentation can be generated with [ExDoc](https://github.com/elixir-lang/ex_doc)
and published on [HexDocs](https://hexdocs.pm). Once published, the docs can
be found at <https://hexdocs.pm/pl>.
318 g1
@@ -1,318 +0,0 @@
```elixir
defmodule Tdd do
  @moduledoc """
  Ternary decision diagram, used for representing set-theoretic types, akin to CDuce.
  There are 2 types of nodes:
  - terminal nodes (true, false)
  - variable nodes

  Variable nodes consist of:
  - the variable being tested
  - yes: id of the node if the result of the test is true
  - no: id of the node if the result of the test is false
  - dc: id of the node if the result of the test is irrelevant for the current operation

  The TDD needs to be ordered and reduced (like an ROBDD):
  - 'ordered' if different variables appear in the same order on all paths from the root.
  - 'reduced' if the following two rules have been applied to its graph:
    - Merge any isomorphic subgraphs.
    - Eliminate any node whose children are isomorphic.

  Working notes:
  - Structure of the ordered variables:
    I'm thinking of structuring all possible types inside one TDD, in contrast to CDuce, which uses a `descr` structure that contains several TDDs (one for each domain, like ints, atoms, functions, etc.), where `descr` is the union between them.
    For this, I need to come up with a variable structure that'll be ordered.
    My set types will need to represent types like: atoms, strings, ints, maps, tuples, functions, kinds?
    Moreover, those types themselves consist of smaller subsets of types like:
    - int < 10
    - int in [1, 2, 3]
    - string > "prefix_"
    - atom == false
    - atom == false or atom == true or atom == nil
    - map == %{"id" => string} and %{string => any | nil}
    - etc.
    I don't know how to represent them and make them ordered.
  - Node cache:
    I don't yet know what it should contain; I suspect ids of nodes (TDDs) after reduction. This way a comparison between 2 types is just a pointer (id) check in the node cache. But I'm not yet sure.
  - Reduction rules: not sure how to approach them.
  """

  def node(elem, yes, no, dc = _dont_care) do
  end

  def sum(one, two) do
  end

  def intersect(one, two) do
  end

  def negate(one) do
  end
end
```

Please help me refine the theory for implementing set-theoretic types with this.
========================================================================================================================
Okay, this is a fascinating and ambitious project! Building a TDD-based type system like CDuce's is a significant undertaking. Let's refine the theory.

You've got the core ideas right: terminals, variable nodes, and the need for ordering and reduction. The `dc` (don't care) child is the main differentiator from BDDs and the key to TDDs' power in certain applications.

Let's break down your working notes and expand on them.

**1. Structure of Ordered Variables (The Core Challenge)**

Your intuition to have a single TDD for all types is interesting. CDuce uses one TDD *per domain* (atoms, integers, etc.) and then a `descr` (descriptor) which is a union of these. A single TDD means your variable ordering must be global: it must discriminate across all these domains first, then within them.

**Proposed Hierarchical Variable Ordering:**

The variables in your TDD will represent *predicates* about a value. Their order is crucial.
* **Level 0: Base Type Discriminators:** These variables test the fundamental Erlang/Elixir type of a value.
  * `is_atom?`
  * `is_integer?`
  * `is_float?`
  * `is_string?` (Elixir strings are UTF-8 binaries)
  * `is_binary?` (general binaries)
  * `is_list?`
  * `is_tuple?`
  * `is_map?`
  * `is_function?`
  * `is_pid?`
  * `is_reference?`
  * *Order*: Define a canonical order for these (e.g., alphabetical, or by commonality). Let's say `is_atom` < `is_binary` < `is_float` < ...

* **Level 1+: Type-Specific Predicates:** Once a base type is determined (e.g., the `yes` branch of `is_integer?` is taken), subsequent variables test properties specific to that type.

  * **For Atoms:**
    * `value_is(specific_atom)`: e.g., `value_is(true)`, `value_is(false)`, `value_is(nil)`, `value_is(:foo)`.
    * *Order*: Canonical order of the atoms themselves (e.g., `false` < `nil` < `true` < other atoms alphabetically).
    * This implies a potentially large number of variables if you want to distinguish many specific atoms. Often, you care about a few (booleans, `nil`) and "any other atom".

  * **For Integers:**
    * `value < N`, `value == N` (less common as a direct variable, usually derived), `value_in_range(A, B)`.
    * CDuce uses a finite partitioning of the integer line. For example, variables could be `value < 0`, `value < 10`, `value < MAX_INT_SMALL_ENOUGH_FOR_OPTIMIZATION`.
    * *Order*: `value < N1` < `value < N2` if `N1 < N2`.

  * **For Strings/Binaries:**
    * `length_is(L)` or `length_in_range(L1, L2)`
    * `prefix_is(prefix_string)`
    * `matches_regex(regex_pattern)` (Regexes can be complex to integrate directly as simple TDD variables due to their expressive power. CDuce has specific handling for regular-expression types on sequences.)
    * *Order*: By length, then by prefix lexicographically, etc.

  * **For Tuples:**
    * `arity_is(N)`
    * `element_at(Index)_is_type(TypeRef)`: This is recursive. `TypeRef` would be the ID of another TDD representing the type of the element.
    * *Order*: `arity_is(0)` < `arity_is(1)` < ... then `element_at(0)_is_type(T)` < `element_at(1)_is_type(T')`. The `TypeRef` itself would need to be part of the variable ordering if you allow different type tests for the same element index.

  * **For Maps:** This is the most complex.
    * `has_key(KeyAtomOrString)`
    * `value_for_key(Key)_is_type(TypeRef)`
    * `all_keys_are_type(KeyTypeRef)`
    * `all_values_are_type(ValueTypeRef)`
    * CDuce has specialized structures for map patterns. You might need variables like `matches_map_pattern(PatternID)`.
    * *Order*: By key name, then by tests on those keys.

  * **For Functions:**
    * `arity_is(N)`
    * `accepts_args_types(ArgTypesTupleRef)`
    * `returns_type(ReturnTypeRef)`
    * *Order*: By arity, then by argument types, then return type.

**The "Variable" Itself:**

A "variable" in your TDD node won't be just a name; it'll be an identifier that maps to a specific predicate function and its parameters. E.g., `var_id = 101` might mean `fn val -> is_integer(val) and val < 10 end`. You'll need a global, ordered registry of these predicate variables.

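A minimal sketch of such a registry (Python here purely for illustration; the ids, order indices, and predicates are invented for the example, not part of the project):

```python
# Hypothetical global registry: each variable id maps to its position in the
# global order and a predicate over concrete values.
REGISTRY = {
    100: {"order": 0, "pred": lambda v: isinstance(v, int)},             # is_integer?
    101: {"order": 1, "pred": lambda v: isinstance(v, int) and v < 10},  # int < 10
}

def holds(var_id, value):
    """Evaluate the predicate behind `var_id` against a concrete value."""
    return REGISTRY[var_id]["pred"](value)

def var_order(var_id):
    """Position of `var_id` in the global variable order."""
    return REGISTRY[var_id]["order"]
```

The ordering used during TDD construction then compares `var_order/1` results, never the predicates themselves.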
**2. Node Cache (`unique_table`)**

Yes, this is crucial for the "reduced" property.
* It's a hash table mapping `(variable_id, yes_node_id, no_node_id, dc_node_id)` to a unique `node_id`.
* **Terminals:**
  * `true_node_id` (e.g., 1): Represents the "any" type, the universal set.
  * `false_node_id` (e.g., 0): Represents the "empty" type, the bottom type.
  These are the leaves of your TDD.
* When you create a node, you first check this table. If an identical node (same variable and children) exists, you reuse its ID. Otherwise, you create a new one, assign it a new ID, and store it.
* This ensures that isomorphic subgraphs are merged (reduction rule 1).

**3. Reduction Rules**

You've listed the ROBDD rules. Let's adapt them for TDDs:

* **Merge any isomorphic subgraphs:** Handled by the `unique_table` as described above. Two subgraphs are isomorphic if they are structurally identical (same variables in the same places, leading to the same terminals or to further isomorphic subgraphs).
* **Eliminate any node whose three children are isomorphic:** If `yes_id == no_id == dc_id` for a variable node `v`, then the test `v` is irrelevant for distinguishing the type further down this path. The node `(v, child_id, child_id, child_id)` should be replaced by `child_id`. This rule is applied *before* looking up/inserting into the `unique_table`.

**The Role of `dc` (Don't Care):**

The `dc` child is what makes it a TDD. It typically means the variable being tested is irrelevant, or that its value doesn't affect membership for a particular path or operation.

* **In Type Representation:**
  * If you're representing the type "integer", and your current variable is `is_atom?`:
    * The `yes` child would be `false_node_id` (an integer is not an atom).
    * The `no` child would lead to further tests for integers (or `true_node_id` if "integer" is all you're specifying).
    * The `dc` child is tricky for pure representation. If the variable `is_atom?` *must* be evaluated, then `dc` might also be `false_node_id`.
  * However, `dc` becomes powerful when combining types or dealing with partially specified types. If a type expression doesn't constrain variable `v`, its `dc` branch relative to `v` might be taken.
  * Consider the type `any`. For any variable `v`, the node would be `(v, true_node_id, true_node_id, true_node_id)`. By reduction rule 2, this simplifies to just `true_node_id`.
  * Consider the type `none` (the empty set). For any variable `v`, the node is `(v, false_node_id, false_node_id, false_node_id)`, which reduces to `false_node_id`.
* **In Operations (`sum`, `intersect`, `negate`):** The `dc` child is heavily used in the `apply` algorithm.
  When computing `apply(op, tdd1, tdd2)`:
  * If `var(tdd1) == var(tdd2)`: Recurse on `(op, tdd1.yes, tdd2.yes)`, `(op, tdd1.no, tdd2.no)`, and `(op, tdd1.dc, tdd2.dc)`.
  * If `var(tdd1) < var(tdd2)` (in the global order): `tdd2` doesn't explicitly test `var(tdd1)`, so `var(tdd1)` is effectively "don't care" for `tdd2`.
    The operation proceeds by testing `var(tdd1)`:
    * `yes_child = apply(op, tdd1.yes, tdd2)`
    * `no_child = apply(op, tdd1.no, tdd2)`
    * `dc_child = apply(op, tdd1.dc, tdd2)` (this is where the TDD `apply` differs significantly from the BDD one)
    The new node is `mknode(var(tdd1), yes_child, no_child, dc_child)`.
  * The case `var(tdd2) < var(tdd1)` is symmetric.

**4. The `mknode(variable_id, yes_id, no_id, dc_id)` Function**

This is your core constructor.

```elixir
defmodule Tdd do
  # Nodes live in a map from id to {variable_id, yes_id, no_id, dc_id};
  # the unique table maps {variable_id, yes_id, no_id, dc_id} back to an id.
  # In a real system this state would live in a GenServer or an ETS table;
  # here it is threaded through explicitly as a `state` map so the module
  # is runnable as-is.
  #
  # Example variable IDs (you'd have a registry for this):
  #   @var_is_atom 10
  #   @var_is_integer 11
  #   @var_int_lt_10 20

  @true_id 1
  @false_id 0

  def get_true(), do: @true_id
  def get_false(), do: @false_id

  def new_state() do
    %{nodes: %{}, unique_table: %{}, next_id: 2}
  end

  def node(state, variable_id, yes_id, no_id, dc_id) do
    if yes_id == no_id and no_id == dc_id do
      # Reduction rule 2: if all children are the same, this variable provides
      # no discrimination for this path, so the node is redundant.
      {state, yes_id}
    else
      entry = {variable_id, yes_id, no_id, dc_id}

      case Map.fetch(state.unique_table, entry) do
        # Reduction rule 1: an identical node already exists; reuse its id.
        {:ok, node_id} ->
          {state, node_id}

        :error ->
          node_id = state.next_id

          state = %{
            state
            | nodes: Map.put(state.nodes, node_id, entry),
              unique_table: Map.put(state.unique_table, entry, node_id),
              next_id: node_id + 1
          }

          {state, node_id}
      end
    end
  end
end
```

**5. Operations (`sum`, `intersect`, `negate`)**

These are built on a generic `apply` function, usually memoized via a `computed_table`.

`apply(op_lambda, tdd1_id, tdd2_id)`:

1. **Handle terminal cases:**
   * `op_lambda.(@false_id, @false_id)`, `op_lambda.(@false_id, @true_id)`, etc.
   * Some ops have identities/annihilators:
     * Union: `T | true = true`, `T | false = T`.
     * Intersection: `T & false = false`, `T & true = T`.
2. **Check the `computed_table`:** `cache_key = {op_lambda_id, tdd1_id, tdd2_id}`. If a result exists, return it.
3. **Recursive step:**
   * Fetch the node data for `tdd1_id` and `tdd2_id`; call them `N1` and `N2`.
   * Let `v1 = N1.variable_id` and `v2 = N2.variable_id`.
   * Compare `v1` and `v2` in the global variable order.
   * **If `v1 == v2`**:
     `res_yes = apply(op_lambda, N1.yes, N2.yes)`
     `res_no = apply(op_lambda, N1.no, N2.no)`
     `res_dc = apply(op_lambda, N1.dc, N2.dc)`
     `result = mknode(v1, res_yes, res_no, res_dc)`
   * **If `v1 < v2`**:
     `res_yes = apply(op_lambda, N1.yes, tdd2_id)`
     `res_no = apply(op_lambda, N1.no, tdd2_id)`
     `res_dc = apply(op_lambda, N1.dc, tdd2_id)`
     `result = mknode(v1, res_yes, res_no, res_dc)`
   * **If `v1 > v2`** (symmetric to the above):
     `res_yes = apply(op_lambda, tdd1_id, N2.yes)`
     `res_no = apply(op_lambda, tdd1_id, N2.no)`
     `res_dc = apply(op_lambda, tdd1_id, N2.dc)`
     `result = mknode(v2, res_yes, res_no, res_dc)`
4. Store `result` in the `computed_table` under `cache_key`.
5. Return `result`.

**`negate(tdd_id)`:**

This is simpler: a unary `apply`.
1. If `tdd_id == @true_id`, return `@false_id`.
2. If `tdd_id == @false_id`, return `@true_id`.
3. Check the negation `computed_table`.
4. Fetch the node `N` and recurse:
   `res_yes = negate(N.yes)`
   `res_no = negate(N.no)`
   `res_dc = negate(N.dc)`
   `result = mknode(N.variable_id, res_yes, res_no, res_dc)`
5. Store and return `result`.

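Putting the pieces together, here is an illustrative Python model of the machinery above: `mknode` with both reduction rules, a memoized binary `apply`, and a unary `negate`. It is a sketch, not the project's Elixir code; string variable names stand in for the global predicate order, and `apply_op`/`union` are names chosen for the example.

```python
TRUE_ID, FALSE_ID = 1, 0

nodes = {}         # node_id -> (var, yes_id, no_id, dc_id)
unique_table = {}  # (var, yes_id, no_id, dc_id) -> node_id
computed = {}      # (op_name, id1, id2) -> result_id

def mknode(var, yes_id, no_id, dc_id):
    # Reduction rule 2: a node whose three children coincide is redundant.
    if yes_id == no_id == dc_id:
        return yes_id
    entry = (var, yes_id, no_id, dc_id)
    # Reduction rule 1: the unique table merges isomorphic subgraphs.
    if entry not in unique_table:
        node_id = len(nodes) + 2  # ids 0 and 1 are the terminals
        nodes[node_id] = entry
        unique_table[entry] = node_id
    return unique_table[entry]

def apply_op(op_name, op, t1, t2):
    # Step 1: both operands terminal -> evaluate the boolean op directly.
    if t1 in (TRUE_ID, FALSE_ID) and t2 in (TRUE_ID, FALSE_ID):
        return TRUE_ID if op(t1 == TRUE_ID, t2 == TRUE_ID) else FALSE_ID
    # Step 2: memoization via the computed table.
    key = (op_name, t1, t2)
    if key in computed:
        return computed[key]
    # Step 3: recurse, driven by the global variable order.
    v1 = nodes[t1][0] if t1 in nodes else None
    v2 = nodes[t2][0] if t2 in nodes else None
    if v2 is None or (v1 is not None and v1 < v2):
        _, y, n, d = nodes[t1]
        res = mknode(v1, apply_op(op_name, op, y, t2),
                         apply_op(op_name, op, n, t2),
                         apply_op(op_name, op, d, t2))
    elif v1 is None or v2 < v1:
        _, y, n, d = nodes[t2]
        res = mknode(v2, apply_op(op_name, op, t1, y),
                         apply_op(op_name, op, t1, n),
                         apply_op(op_name, op, t1, d))
    else:  # v1 == v2
        _, y1, n1, d1 = nodes[t1]
        _, y2, n2, d2 = nodes[t2]
        res = mknode(v1, apply_op(op_name, op, y1, y2),
                         apply_op(op_name, op, n1, n2),
                         apply_op(op_name, op, d1, d2))
    computed[key] = res  # step 4: cache; step 5: return
    return res

def union(t1, t2):
    return apply_op("union", lambda a, b: a or b, t1, t2)

def negate(t):
    if t == TRUE_ID:
        return FALSE_ID
    if t == FALSE_ID:
        return TRUE_ID
    var, y, n, d = nodes[t]
    return mknode(var, negate(y), negate(n), negate(d))

# atom(:foo) and atom(:bar) as tiny TDDs ("is_atom" < "val_bar" < "val_foo"):
t_foo = mknode("is_atom", mknode("val_foo", TRUE_ID, FALSE_ID, FALSE_ID),
               FALSE_ID, FALSE_ID)
t_bar = mknode("is_atom", mknode("val_bar", TRUE_ID, FALSE_ID, FALSE_ID),
               FALSE_ID, FALSE_ID)
t_union = union(t_foo, t_bar)  # represents atom(:foo) | atom(:bar)
```

Because of hash-consing through the unique table, semantic equality collapses to id equality here: `union(t_foo, t_foo)` returns exactly `t_foo`.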
**Concrete Example: Type `atom(:foo) | atom(:bar)`**

Variables (simplified order): `is_atom`, `atom_is_foo`, `atom_is_bar`.
1. TDD for `atom(:foo)`:
   * `v_atom_is_foo = mknode(:atom_is_foo, @true_id, @false_id, @false_id)`
   * `v_is_atom = mknode(:is_atom, v_atom_is_foo, @false_id, @false_id)`
   (If it's not an atom, or it's an atom but not `:foo`, the result is false. `dc` is false because these tests are always relevant here.)
2. TDD for `atom(:bar)`:
   * `v_atom_is_bar = mknode(:atom_is_bar, @true_id, @false_id, @false_id)`
   * `v_is_atom_2 = mknode(:is_atom, v_atom_is_bar, @false_id, @false_id)`

Now, `sum(tdd_foo, tdd_bar)` using logical OR (`||`):
The top variable is `:is_atom` for both.
`res_yes = apply(||, v_atom_is_foo, v_atom_is_bar)`
`res_no = apply(||, @false_id, @false_id)` -> `@false_id`
`res_dc = apply(||, @false_id, @false_id)` -> `@false_id`

Consider `apply(||, v_atom_is_foo, v_atom_is_bar)`:
The variables are `:atom_is_foo` and `:atom_is_bar`. Assume `:atom_is_foo < :atom_is_bar`.
The current variable is `:atom_is_foo` (from `v_atom_is_foo`).
`ry = apply(||, v_atom_is_foo.yes, v_atom_is_bar)`
`   = apply(||, @true_id, v_atom_is_bar)` -> `@true_id` (since `true || X = true`)
`rn = apply(||, v_atom_is_foo.no, v_atom_is_bar)`
`   = apply(||, @false_id, v_atom_is_bar)` -> `v_atom_is_bar` (since `false || X = X`)
`rd = apply(||, v_atom_is_foo.dc, v_atom_is_bar)`
`   = apply(||, @false_id, v_atom_is_bar)` -> `v_atom_is_bar`

So `res_yes = mknode(:atom_is_foo, @true_id, v_atom_is_bar, v_atom_is_bar)`.
This node effectively means: "If it's `:foo`, then true. Else (if not `:foo`), it depends on whether it's `:bar` (via `v_atom_is_bar`). If `:atom_is_foo` is don't care, it also depends on `:bar`."
A reduction rule for `no_id == dc_id` isn't standard for BDDs, but it might make sense for TDDs if "not X" and "X is irrelevant" lead to the same sub-condition. CDuce's BDDs don't have this `dc` child directly in their nodes; it's implicit in their algorithms.
If we stick to standard ROBDD `apply` ideas, slightly modified for three children, the `dc` handling when variables differ is the key point.

**Key Data Structures to Manage:**

1. **Node storage:** A list, array, or map holding the actual nodes; an ID is an index/key into this.
   * `nodes = %{0 => {:terminal, false}, 1 => {:terminal, true}, 2 => {:node, var_id1, yes2, no2, dc2}, ...}`
2. **Unique table:** `{variable_id, yes_id, no_id, dc_id} -> node_id`, for structural sharing and reduction rule 1.
   * `unique_table = %{{var_id1, yes2, no2, dc2} => 2, ...}`
3. **Variable definitions:** A way to map a `variable_id` to its actual predicate logic and its position in the global order.
   * `variables = %{var_id1 => %{name: :is_atom, order_idx: 0, predicate: &is_atom/1}, ...}`
4. **Computed tables (memoization for operations):**
   * For binary ops: `{op_name, id1, id2} -> result_id`
   * For unary ops: `{op_name, id1} -> result_id`

**Summary of Refinements:**

1. **Global hierarchical variable ordering:** Discriminate base types first, then type-specific properties. Every variable (predicate) needs a unique ID and a place in this global order.
2. **`mknode` is central:** It implements reduction rule 2 (all children the same -> return the child) and uses the `unique_table` (reduction rule 1).
3. **The `apply` algorithm:** This is the workhorse for `sum`, `intersect`, etc. Its logic for handling cases where variables differ across operands is where the TDD structure (and the `dc` child) truly shows its behavior. The `dc` child of a node `N1 = (v, y, n, d)` is used in `apply(op, N1.dc, N2)` when `v < var(N2)` and the `dc` path for `v` is taken.
4. **`dc` child interpretation:** In the `apply` algorithm, if `var(T1) < var(T2)`, then `T2` doesn't test `var(T1)`, so `T2` is passed "as is" when considering `T1.yes`, `T1.no`, and `T1.dc`. The resulting node for `var(T1)` has its children determined by these recursive calls. The `dc` child in a node structure `(v, Y, N, D)` represents the type when the predicate `v` is "skipped" or considered irrelevant for that path.
5. **State management:** You'll need a robust way to manage the shared state of nodes, the unique table, and the computed tables (e.g., a GenServer or ETS tables).

This is a complex system. Start simple: implement `mknode` with the unique table for just atoms and booleans. Then add `negate`. Then `sum` or `intersect`. Gradually add more complex variable types. Good luck!
216 g2.md
@@ -1,216 +0,0 @@
```elixir
defmodule Tdd do
  @moduledoc """
  Ternary decision diagram, used for representing set-theoretic types, akin to CDuce.
  There are 2 types of nodes:
  - terminal nodes (true, false)
  - variable nodes

  Variable nodes consist of:
  - the variable being tested
  - yes: id of the node if the result of the test is true
  - no: id of the node if the result of the test is false
  - dc: id of the node if the result of the test is irrelevant for the current operation

  The TDD needs to be ordered and reduced (like an ROBDD):
  - 'ordered' if different variables appear in the same order on all paths from the root.
  - 'reduced' if the following two rules have been applied to its graph:
    - Merge any isomorphic subgraphs.
    - Eliminate any node whose children are isomorphic.

  Working notes:
  - Structure of the ordered variables:
    I'm thinking of structuring all possible types inside one TDD, in contrast to CDuce, which uses a `descr` structure that contains several TDDs (one for each domain, like ints, atoms, functions, etc.), where `descr` is the union between them.
    For this, I need to come up with a variable structure that'll be ordered.
    My set types will need to represent types like: atoms, strings, ints, maps, tuples, functions, kinds, free polymorphic variables.
    Moreover, those types themselves consist of smaller subsets of types like:
    - int < 10
    - int in [1, 2, 3]
    - string > "prefix_"
    - atom == false
    - atom == false or atom == true or atom == nil
    - map == %{"id" => string} and %{string => any | nil}
    - polymorphic types ???
    - etc.
    I don't know how to represent them and make them ordered.
  - Node cache:
    I suspect it holds ids of nodes (TDDs) after reduction. This way a comparison between 2 types is just a pointer (id) check in the node cache. But I'm not yet sure.
  - Reduction rules: not sure how to approach them.
  """

  def node(elem, yes, no, dc = _dont_care) do
  end

  def sum(one, two) do
  end

  def intersect(one, two) do
  end

  def negate(one) do
  end
end
```

Please help me refine the theory for implementing set-theoretic types with this.
Let's focus on the ordering of variables in the TDD.
========================================================================================================================
Okay, this is a fascinating and ambitious goal! Building a single, ordered, reduced TDD for all Elixir/Erlang types is a significant challenge, especially with the rich structure of types like maps and functions. Let's break down the ordering of variables.

The core idea for ordering in a BDD/TDD is to have a **total global order** on all possible "questions" (predicates) you can ask about a value.

**High-Level Strategy for Ordering Variables**

1. **Primary type discriminators first:** The most fundamental question is "What basic kind of thing is this?"
2. **General properties next:** After knowing the primary type, ask about general properties (e.g., size, length).
3. **Specific value/content properties:** Then delve into specific values or structural content.
4. **Recursive structure for nested types:** For types like tuples and maps, the predicates about their elements/values are effectively recursive, but the ordering system must flatten this into one global order.

**Proposed Variable Ordering Scheme**

Let's define "variables" as unique identifiers for predicates. We need a way to sort these identifiers. A good approach is to use tuples, comparing them lexicographically. (One caveat: Erlang/Elixir term ordering compares tuple *size* before contents, so mixed-arity IDs would not sort by category first; either pad all IDs to the same arity or sort with a custom comparison function.)

**Category 0: Primary Type Discriminators**

These are the most fundamental, so they get the lowest sort order. Order them alphabetically by type name.
* `v_is_atom = {0, :is_atom}`
* `v_is_binary = {0, :is_binary}`
* `v_is_float = {0, :is_float}`
* `v_is_function = {0, :is_function}`
* `v_is_integer = {0, :is_integer}`
* `v_is_list = {0, :is_list}`
* `v_is_map = {0, :is_map}`
* `v_is_pid = {0, :is_pid}`
* `v_is_port = {0, :is_port}`
* `v_is_reference = {0, :is_reference}`
* `v_is_string = {0, :is_string}` (*Note: Elixir strings are UTF-8 binaries. You might treat them as a subtype of binary or as a distinct primary type in your model. For simplicity, let's assume they're distinct for now; otherwise you'd have predicates like `{0, :is_binary_utf8}` after `{0, :is_binary}`.*)
* `v_is_tuple = {0, :is_tuple}`

**Category 1: Atom-Specific Predicates**

If `is_atom` is true, which specific atom is it? Order by the atom itself.
* `v_atom_eq_false = {1, :value, false}`
* `v_atom_eq_nil = {1, :value, nil}`
* `v_atom_eq_true = {1, :value, true}`
* `v_atom_eq_specific_A = {1, :value, :an_atom}` (note: under plain alphabetical atom order, `:an_atom` would come *before* `true`; special-case `false`/`nil`/`true` first if you want booleans and `nil` up front)
* ... (all known/relevant atoms in your system, ordered canonically)

**Category 2: Integer-Specific Predicates**

If `is_integer` is true, you need a canonical way to represent integer conditions.
* Equality: `v_int_eq_N = {2, :eq, N}` (e.g., `{2, :eq, 0}`, `{2, :eq, 10}`)
  * Order by N.
* Less than: `v_int_lt_N = {2, :lt, N}` (e.g., `{2, :lt, 0}`, `{2, :lt, 10}`)
  * Order by N.
* Greater than: `v_int_gt_N = {2, :gt, N}` (e.g., `{2, :gt, 0}`, `{2, :gt, 10}`)
  * Order by N.
* Set membership for finite sets: `v_int_in_SET = {2, :in, Enum.sort(SET)}` (e.g., `{2, :in, [1, 2, 3]}`)
  * Order by the canonical (sorted-list) representation of SET.
* *This gets complex. BDDs for integers often use bit-level tests, but for set-theoretic types, range/specific-value tests are more natural.* You might limit this to a predefined, finite set of "interesting" integer predicates.

**Category 3: String-Specific Predicates**

If `is_string` is true:
* Equality: `v_string_eq_S = {3, :eq, S}` (e.g., `{3, :eq, "foo"}`)
  * Order by S lexicographically.
* Length: `v_string_len_eq_L = {3, :len_eq, L}`
  * Order by L.
* Prefix: `v_string_prefix_P = {3, :prefix, P}`
  * Order by P lexicographically.
* (Suffix, regex match, etc., can be added with consistent ordering rules.)

**Category 4: Tuple-Specific Predicates**

If `is_tuple` is true:
1. **Size first:**
   * `v_tuple_size_eq_N = {4, :size, N}` (e.g., `{4, :size, 0}`, `{4, :size, 2}`)
   * Order by N.
2. **Element types (recursive structure in the variable identifier):**
   For a tuple of a *given size*, we then check its elements. The predicate for an element reuses the *entire variable ordering scheme*, scoped to that element.
   * `v_tuple_elem_I_PRED = {4, :element, index_I, NESTED_PREDICATE_ID}`
   * Order by `index_I` first, then by `NESTED_PREDICATE_ID` (which is itself one of these `{category, type, value}` tuples).
   * Example: Is element 0 an atom? `v_el0_is_atom = {4, :element, 0, {0, :is_atom}}`
   * Example: Is element 0 the atom `:foo`? `v_el0_is_foo = {4, :element, 0, {1, :value, :foo}}`
   * Example: Is element 1 an integer? `v_el1_is_int = {4, :element, 1, {0, :is_integer}}`
   This ensures that all questions about element 0 come before those about element 1, and that for each element the standard hierarchy of questions is asked.

**Category 5: Map-Specific Predicates**

If `is_map` is true — this is the most complex.
1. **Size (optional, but can be useful):**
   * `v_map_size_eq_N = {5, :size, N}`
   * Order by N.
2. **Key presence:**
   * `v_map_has_key_K = {5, :has_key, K}` (e.g., `{5, :has_key, "id"}`, `{5, :has_key, :name}`)
   * Order by K canonically (e.g., strings before atoms, then lexicographically/in atom order).
3. **Key value types (recursive structure):**
   For a *given key K* that is present:
   * `v_map_key_K_value_PRED = {5, :key_value, K, NESTED_PREDICATE_ID}`
   * Order by K (canonically), then by `NESTED_PREDICATE_ID`.
   * Example: Does the map have key `:id`, and is its value a string?
     * First variable: `v_map_has_id = {5, :has_key, :id}`
     * If yes, next variable: `v_map_id_val_is_str = {5, :key_value, :id, {0, :is_string}}`
4. **Predicates for "all other keys" / "pattern keys":**
   These are needed for types like `%{String.t() => integer()}`.
   * `v_map_pattern_key_PRED_value_PRED = {5, :pattern_key, KEY_TYPE_PREDICATE_ID, VALUE_TYPE_PREDICATE_ID}`
   * Example, for `%{String.t() => integer()}`:
     * Key type predicate: `{0, :is_string}`
     * Value type predicate for such keys: `{0, :is_integer}`
     * Variable ID: `{5, :pattern_key, {0, :is_string}, {0, :is_integer}}`
   * These pattern-key predicates should likely be ordered *after* the specific-key predicates. The exact sorting of `KEY_TYPE_PREDICATE_ID` needs careful thought (e.g., `{0, :is_atom}` before `{0, :is_string}`).

**Category 6: List-Specific Predicates**

If `is_list` is true:
1. **Is empty:**
   * `v_list_is_empty = {6, :is_empty}`
2. **Head/tail structure (if not empty; recursive):**
   This mirrors how types like `nonempty_list(H, T)` are defined.
   * `v_list_head_PRED = {6, :head, NESTED_PREDICATE_ID}`
   * `v_list_tail_PRED = {6, :tail, NESTED_PREDICATE_ID}` (the `NESTED_PREDICATE_ID` for a tail would again be a list predicate such as `{6, :is_empty}` or `{6, :head, ...}`)
   * Example: the head is an atom: `{6, :head, {0, :is_atom}}`
   * Example: the tail is an empty list: `{6, :tail, {6, :is_empty}}`
   * All head predicates come before all tail predicates.

**Category 7: Function-Specific Predicates**
|
|
||||||
If `is_function` is true:
|
|
||||||
1. **Arity:**
|
|
||||||
* `v_fun_arity_eq_A = {7, :arity, A}`
|
|
||||||
* Order by A.
|
|
||||||
2. **Argument Types (very complex, may need simplification for TDDs):**
|
|
||||||
* `v_fun_arg_I_PRED = {7, :arg, index_I, NESTED_PREDICATE_ID}`
|
|
||||||
3. **Return Type (also complex):**
|
|
||||||
* `v_fun_return_PRED = {7, :return, NESTED_PREDICATE_ID}`
|
|
||||||
* Function types are often represented by separate structures or simplified in TDDs due to their higher-order nature. Full function type checking within this TDD variable scheme would be extremely elaborate.
|
|
||||||
|
|
||||||
**Binary, Float, Pid, Port, Reference Predicates:**
|
|
||||||
These would get their own categories (e.g., 8, 9, 10...).
|
|
||||||
* **Floats:** `{X, :is_float}` -> `{X, :eq, F}`, `{X, :lt, F}`, etc.
|
|
||||||
* **Binaries:** `{Y, :is_binary}` -> `{Y, :size, S}`, `{Y, :matches_pattern, Pat}` (e.g. `<<h :: size(8), _ :: binary>>`)
|
|
||||||
|
|
||||||
**Polymorphic Variables (`alpha`, `beta`, etc.)**
|
|
||||||
Polymorphic variables are part of the *type language*, not properties of concrete values. A TDD represents a set of *concrete values*.
|
|
||||||
* When you construct a TDD for a type like `list(alpha)`, where `alpha` is free, `alpha` essentially means `any`. So, for predicates concerning list elements, they would all go to their `dc` (don't care) branches, ultimately leading to `true`.
|
|
||||||
* If `alpha` is bound (e.g., in `(alpha -> alpha) where alpha = integer`), you first resolve `alpha` to `integer` and then build the TDD for `(integer -> integer)`.
|
|
||||||
* So, "free polymorphic variables" don't become TDD variables themselves. They influence which branches are taken during TDD construction for types containing them, often mapping to `any` or `dc` paths.
|
|
||||||
|
|
||||||
**"Kinds"**
|
|
||||||
The "kind" of a type (e.g., `Type.Atom`, `Type.Integer`) is meta-information. The TDD *represents* a type. The variables within the TDD are about properties of *values* that belong to that type. You wouldn't have a TDD variable `is_kind_atom?`. Instead, the TDD for the type `atom` would start with the `v_is_atom = {0, :is_atom}` variable.
|
|
||||||
|
|
||||||
**The `dc` (Don't Care) Branch**
|
|
||||||
The `dc` branch for a variable `v` in a node means that for the type being represented, the outcome of the test `v` is irrelevant to membership *given the path taken to reach this node*.
|
|
||||||
* For type `any`, all variables would effectively go to `dc`, all leading to the `true_terminal`.
|
|
||||||
* For type `integer`, the test `v_is_atom`:
|
|
||||||
* `yes` branch -> `false_terminal` (an atom is not an integer)
|
|
||||||
* `no` branch -> continues to test `v_is_integer`, etc.
|
|
||||||
* `dc` branch: This is the tricky part in TDDs for types. In some formulations, `dc(N)` is `yes(N) U no(N)`. If a type truly doesn't care about `v_is_atom` (e.g., `any`), then `yes`, `no`, and `dc` might all point to the same child representing `any` for subsequent tests.
|
|
||||||
* For `ROBDD` reduction with `dc`, if `yes_child == no_child == dc_child`, the node is redundant. If `yes_child == no_child`, the test `v` is locally irrelevant, and the node might be simplified (parent points to `yes_child`, and the `dc` path needs careful handling depending on the TDD algebra). Standard BDD reduction (if `yes_child == no_child`, eliminate node) assumes only two children. You'll need TDD-specific reduction rules.
|
|
||||||
|
|
||||||
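The first of those reduction rules can be sketched in a few lines. The node shape `{var_id, yes, no, dc}` with `true`/`false` terminals is an assumption for illustration only — the actual module stores nodes differently:

```elixir
# Sketch of the "all three children coincide" reduction rule.
# Node shape {var_id, yes, no, dc} is assumed; terminals are true/false.
defmodule TddReduceSketch do
  # A node whose yes, no, and dc children are identical is redundant:
  # the test's outcome cannot matter, so replace the node by its child.
  def reduce({_var, child, child, child}), do: reduce(child)

  # Otherwise, reduce each child recursively.
  def reduce({var, y, n, dc}), do: {var, reduce(y), reduce(n), reduce(dc)}

  # Terminals reduce to themselves.
  def reduce(terminal), do: terminal
end

TddReduceSketch.reduce({{0, :is_atom}, true, true, true})
# => true
```

The `yes_child == no_child` case is deliberately left out here, since (as noted above) its interaction with the `dc` path depends on the chosen TDD algebra.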
**Summary of Ordering Logic:**

1. Use tuples as variable identifiers: `{category_num, predicate_type_atom, value_or_nested_id}`.
2. Sort these tuples using Elixir's default tuple comparison. This establishes the global order.
3. `category_num` provides the coarsest level of ordering.
4. Within a category, `predicate_type_atom` provides the next level.
5. Finally, `value_or_nested_id` provides the finest level, potentially recursively applying the same ordering logic for nested structures.

This structured approach to variable identifiers is crucial for the "ordered" property of your ROBDD. When implementing `apply` operations (union, intersect, negate), you'll always compare the variable IDs of the current nodes from the two TDDs and operate on the one that comes first in the global order.
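Two caveats about leaning on the default term order, demonstrated below: Erlang/Elixir compares tuples by size first (so every 3-element ID sorts before every 4-element ID, regardless of content), and atoms sort before binaries (the opposite of the "strings before atoms" convention suggested earlier). A custom comparator may therefore be needed instead of bare `Enum.sort/1`:

```elixir
# Hypothetical variable IDs using the shapes from the notes above.
vars = [
  {5, :key_value, :id, {0, :is_string}},
  {5, :has_key, "id"},
  {5, :size, 2},
  {5, :has_key, :name}
]

# Default term order: tuple size first, then element-by-element;
# atoms compare before binaries.
Enum.sort(vars)
# => [{5, :has_key, :name}, {5, :has_key, "id"}, {5, :size, 2},
#     {5, :key_value, :id, {0, :is_string}}]
```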
18
lib/til.ex
@ -1,18 +0,0 @@
defmodule Til do
  @moduledoc """
  Documentation for `Til`.
  """

  @doc """
  Hello world.

  ## Examples

      iex> Til.hello()
      :world

  """
  def hello do
    :world
  end
end
@ -1,246 +0,0 @@
defmodule Til.AstUtils do
  @moduledoc """
  Utility functions for working with Til AST node maps.
  """

  @doc """
  Retrieves a node from the nodes_map by its ID.

  ## Examples

      iex> nodes = %{1 => %{id: 1, name: "node1"}, 2 => %{id: 2, name: "node2"}}
      iex> Til.AstUtils.get_node(nodes, 1)
      %{id: 1, name: "node1"}

      iex> nodes = %{1 => %{id: 1, name: "node1"}}
      iex> Til.AstUtils.get_node(nodes, 3)
      nil
  """
  def get_node(nodes_map, node_id) when is_map(nodes_map) and is_integer(node_id) do
    Map.get(nodes_map, node_id)
  end

  @doc """
  Retrieves the child nodes of a given parent node.
  The parent can be specified by its ID or as a node map.
  Assumes child IDs are stored in the parent's `:children` field.

  ## Examples

      iex> node1 = %{id: 1, children: [2, 3]}
      iex> node2 = %{id: 2, value: "child1"}
      iex> node3 = %{id: 3, value: "child2"}
      iex> nodes = %{1 => node1, 2 => node2, 3 => node3}
      iex> Til.AstUtils.get_child_nodes(nodes, 1)
      [%{id: 2, value: "child1"}, %{id: 3, value: "child2"}]
      iex> Til.AstUtils.get_child_nodes(nodes, node1)
      [%{id: 2, value: "child1"}, %{id: 3, value: "child2"}]

      iex> node4 = %{id: 4} # No children field
      iex> nodes_no_children = %{4 => node4}
      iex> Til.AstUtils.get_child_nodes(nodes_no_children, 4)
      []
  """
  def get_child_nodes(nodes_map, parent_node_id)
      when is_map(nodes_map) and is_integer(parent_node_id) do
    case get_node(nodes_map, parent_node_id) do
      nil -> []
      parent_node -> get_child_nodes_from_node(nodes_map, parent_node)
    end
  end

  def get_child_nodes(nodes_map, parent_node) when is_map(nodes_map) and is_map(parent_node) do
    get_child_nodes_from_node(nodes_map, parent_node)
  end

  defp get_child_nodes_from_node(nodes_map, parent_node) do
    parent_node
    |> Map.get(:children, [])
    |> Enum.map(&get_node(nodes_map, &1))
    # Filter out if a child_id doesn't resolve to a node
    |> Enum.reject(&is_nil(&1))
  end

  @doc """
  Retrieves the parent node of a given child node.
  The child can be specified by its ID or as a node map.
  Assumes the parent ID is stored in the child's `:parent_id` field.

  ## Examples

      iex> parent = %{id: 1, name: "parent"}
      iex> child = %{id: 2, parent_id: 1, name: "child"}
      iex> nodes = %{1 => parent, 2 => child}
      iex> Til.AstUtils.get_parent_node(nodes, 2)
      %{id: 1, name: "parent"}
      iex> Til.AstUtils.get_parent_node(nodes, child)
      %{id: 1, name: "parent"}

      iex> root_node = %{id: 3, parent_id: nil}
      iex> nodes_with_root = %{3 => root_node}
      iex> Til.AstUtils.get_parent_node(nodes_with_root, 3)
      nil
  """
  def get_parent_node(nodes_map, child_node_id)
      when is_map(nodes_map) and is_integer(child_node_id) do
    case get_node(nodes_map, child_node_id) do
      nil -> nil
      child_node -> get_parent_node_from_node(nodes_map, child_node)
    end
  end

  def get_parent_node(nodes_map, child_node) when is_map(nodes_map) and is_map(child_node) do
    get_parent_node_from_node(nodes_map, child_node)
  end

  defp get_parent_node_from_node(nodes_map, child_node) do
    case Map.get(child_node, :parent_id) do
      nil -> nil
      parent_id -> get_node(nodes_map, parent_id)
    end
  end

  @doc """
  Generates a string representation of the AST for pretty printing.
  This is a basic implementation and can be expanded for more detail.
  """
  def pretty_print_ast(nodes_map) when is_map(nodes_map) do
    all_node_ids = Map.keys(nodes_map)

    root_nodes =
      nodes_map
      |> Map.values()
      |> Enum.filter(fn node ->
        parent_id = Map.get(node, :parent_id)
        is_nil(parent_id) or not Enum.member?(all_node_ids, parent_id)
      end)
      |> Enum.sort_by(fn node ->
        case Map.get(node, :location) do
          [start_offset | _] when is_integer(start_offset) -> start_offset
          # Fallback to id if location is not as expected or not present
          _ -> node.id
        end
      end)

    Enum.map_join(root_nodes, "\n", fn root_node ->
      do_pretty_print_node(nodes_map, root_node, 0)
    end)
  end

  defp do_pretty_print_node(nodes_map, node, indent_level) do
    prefix = String.duplicate(" ", indent_level)
    node_id = node.id
    ast_type = Map.get(node, :ast_node_type, :unknown)
    raw_string = Map.get(node, :raw_string, "") |> String.replace("\n", "\\n")

    details =
      case ast_type do
        :literal_integer -> "value: #{Map.get(node, :value)}"
        # Consider truncating
        :literal_string -> "value: \"#{Map.get(node, :value)}\""
        :symbol -> "name: #{Map.get(node, :name)}"
        _ -> ""
      end

    error_info =
      if parsing_error = Map.get(node, :parsing_error) do
        " (ERROR: #{parsing_error})"
      else
        ""
      end

    current_node_str =
      "#{prefix}Node #{node_id} [#{ast_type}] raw: \"#{raw_string}\" #{details}#{error_info}"

    children_str =
      node
      |> Map.get(:children, [])
      |> Enum.map(fn child_id ->
        case get_node(nodes_map, child_id) do
          # Should not happen in a valid AST
          nil -> "#{prefix} Child ID #{child_id} (not found)"
          child_node -> do_pretty_print_node(nodes_map, child_node, indent_level + 1)
        end
      end)
      |> Enum.join("\n")

    if String.trim(children_str) == "" do
      current_node_str
    else
      current_node_str <> "\n" <> children_str
    end
  end

  @doc """
  Generates a nested data structure representation of the AST for debugging.
  """
  def build_debug_ast_data(nodes_map) when is_map(nodes_map) do
    all_node_ids = Map.keys(nodes_map)

    root_nodes =
      nodes_map
      |> Map.values()
      |> Enum.filter(fn node ->
        parent_id = Map.get(node, :parent_id)
        is_nil(parent_id) or not Enum.member?(all_node_ids, parent_id)
      end)
      |> Enum.sort_by(fn node ->
        case Map.get(node, :location) do
          [start_offset | _] when is_integer(start_offset) -> start_offset
          _ -> node.id
        end
      end)

    Enum.map(root_nodes, fn root_node ->
      do_build_debug_node_data(nodes_map, root_node)
    end)
  end

  defp do_build_debug_node_data(nodes_map, node) do
    node_id = node.id
    ast_type = Map.get(node, :ast_node_type, :unknown)
    raw_string = Map.get(node, :raw_string, "")

    details =
      case ast_type do
        :literal_integer -> %{value: Map.get(node, :value)}
        :literal_string -> %{value: Map.get(node, :value)}
        :symbol -> %{name: Map.get(node, :name)}
        _ -> %{}
      end

    error_info = Map.get(node, :parsing_error)

    base_node_data = %{
      id: node_id,
      ast_node_type: ast_type,
      raw_string: raw_string,
      details: details
    }

    node_data_with_error =
      if error_info do
        Map.put(base_node_data, :parsing_error, error_info)
      else
        base_node_data
      end

    children_data =
      node
      |> Map.get(:children, [])
      |> Enum.map(fn child_id ->
        case get_node(nodes_map, child_id) do
          nil -> %{error: "Child ID #{child_id} (not found)"}
          child_node -> do_build_debug_node_data(nodes_map, child_node)
        end
      end)

    if Enum.empty?(children_data) do
      node_data_with_error
    else
      Map.put(node_data_with_error, :children, children_data)
    end
  end
end
@ -1,963 +0,0 @@
defmodule Til.Parser do
|
|
||||||
@moduledoc """
|
|
||||||
Parser for the Tilly Lisp dialect.
|
|
||||||
It transforms source code into a collection of Node Maps.
|
|
||||||
"""
|
|
||||||
|
|
||||||
# Represents the current parsing position
|
|
||||||
defstruct offset: 0, line: 1, col: 1, file_name: "unknown", nodes: %{}
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Parses a source string into a map of AST nodes.
|
|
||||||
"""
|
|
||||||
def parse(source_string, file_name \\ "unknown") do
|
|
||||||
file_node_id = System.unique_integer([:monotonic, :positive])
|
|
||||||
|
|
||||||
# Initial location for the file node (starts at the beginning)
|
|
||||||
file_start_offset = 0
|
|
||||||
file_start_line = 1
|
|
||||||
file_start_col = 1
|
|
||||||
|
|
||||||
# End location and raw_string will be finalized after parsing all content
|
|
||||||
prelim_file_node = %{
|
|
||||||
id: file_node_id,
|
|
||||||
type_id: nil,
|
|
||||||
# File node is the root
|
|
||||||
parent_id: nil,
|
|
||||||
file: file_name,
|
|
||||||
# End TBD
|
|
||||||
location: [file_start_offset, file_start_line, file_start_col, 0, 0, 0],
|
|
||||||
# TBD
|
|
||||||
raw_string: "",
|
|
||||||
ast_node_type: :file,
|
|
||||||
# TBD
|
|
||||||
children: [],
|
|
||||||
parsing_error: nil
|
|
||||||
}
|
|
||||||
|
|
||||||
initial_state = %__MODULE__{
|
|
||||||
file_name: file_name,
|
|
||||||
# Add prelim file node
|
|
||||||
nodes: %{file_node_id => prelim_file_node},
|
|
||||||
# Initial state offset should be 0 for the file
|
|
||||||
offset: 0,
|
|
||||||
# Initial state line should be 1
|
|
||||||
line: 1,
|
|
||||||
# Initial state col should be 1
|
|
||||||
col: 1
|
|
||||||
}
|
|
||||||
|
|
||||||
# Pass original_source_string for raw_string extraction, and file_node_id as parent for top-level exprs
|
|
||||||
final_state_after_expressions =
|
|
||||||
parse_all_expressions(source_string, source_string, initial_state, file_node_id)
|
|
||||||
|
|
||||||
# Finalize the file node
|
|
||||||
# Calculate end position of the entire source string
|
|
||||||
{file_end_line, file_end_col} = calculate_new_line_col(source_string, 1, 1)
|
|
||||||
# Offset is 0-indexed, length is the count of characters, so end_offset is length.
|
|
||||||
file_end_offset = String.length(source_string)
|
|
||||||
|
|
||||||
# Collect children of the file node
|
|
||||||
file_children_ids =
|
|
||||||
final_state_after_expressions.nodes
|
|
||||||
|> Map.values()
|
|
||||||
|> Enum.filter(&(&1.parent_id == file_node_id))
|
|
||||||
# Sort by start offset to maintain order of appearance in the source
|
|
||||||
|> Enum.sort_by(fn node -> hd(node.location) end)
|
|
||||||
|> Enum.map(& &1.id)
|
|
||||||
|
|
||||||
updated_file_node =
|
|
||||||
final_state_after_expressions.nodes
|
|
||||||
|> Map.get(file_node_id)
|
|
||||||
|> Map.merge(%{
|
|
||||||
location: [
|
|
||||||
file_start_offset,
|
|
||||||
file_start_line,
|
|
||||||
file_start_col,
|
|
||||||
file_end_offset,
|
|
||||||
file_end_line,
|
|
||||||
file_end_col
|
|
||||||
],
|
|
||||||
# The entire source is the raw string of the file node
|
|
||||||
raw_string: source_string,
|
|
||||||
children: file_children_ids
|
|
||||||
})
|
|
||||||
|
|
||||||
final_nodes =
|
|
||||||
Map.put(final_state_after_expressions.nodes, file_node_id, updated_file_node)
|
|
||||||
|
|
||||||
{:ok, final_nodes}
|
|
||||||
end
|
|
||||||
|
|
||||||
# --- Main Parsing Logic ---
|
|
||||||
|
|
||||||
# original_source_string is the complete initial source, source_string is the current remainder
|
|
||||||
# parent_id_for_top_level_expressions is the ID of the node that top-level expressions should be parented to (e.g., the :file node)
|
|
||||||
defp parse_all_expressions(
|
|
||||||
original_source_string,
|
|
||||||
source_string,
|
|
||||||
state,
|
|
||||||
parent_id_for_top_level_expressions
|
|
||||||
) do
|
|
||||||
case skip_whitespace(source_string, state) do
|
|
||||||
{:eos, final_state} ->
|
|
||||||
final_state
|
|
||||||
|
|
||||||
{:ok, remaining_source, current_state} ->
|
|
||||||
if remaining_source == "" do
|
|
||||||
# All content parsed, nothing left after skipping whitespace.
|
|
||||||
# This is a successful termination of parsing for the current branch.
|
|
||||||
current_state
|
|
||||||
else
|
|
||||||
# There's actual content to parse.
|
|
||||||
case parse_datum(
|
|
||||||
original_source_string,
|
|
||||||
remaining_source,
|
|
||||||
current_state,
|
|
||||||
parent_id_for_top_level_expressions
|
|
||||||
) do
|
|
||||||
{:ok, _node_id, next_source, next_state} ->
|
|
||||||
parse_all_expressions(
|
|
||||||
original_source_string,
|
|
||||||
next_source,
|
|
||||||
next_state,
|
|
||||||
parent_id_for_top_level_expressions
|
|
||||||
)
|
|
||||||
|
|
||||||
{:error_node, _node_id, _reason, next_source, next_state} ->
|
|
||||||
# An error node was created by parse_datum.
|
|
||||||
# Input was consumed. Continue parsing from next_source.
|
|
||||||
parse_all_expressions(
|
|
||||||
original_source_string,
|
|
||||||
next_source,
|
|
||||||
next_state,
|
|
||||||
parent_id_for_top_level_expressions
|
|
||||||
)
|
|
||||||
|
|
||||||
# NOTE: This relies on parse_datum and its components (like create_error_node_and_advance)
|
|
||||||
# to always consume input if source_string is not empty. If parse_datum could return
|
|
||||||
# :error_node without consuming input on a non-empty string, an infinite loop is possible.
|
|
||||||
# Current implementation of create_error_node_and_advance consumes 1 char.
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Parses a single datum: an atom (integer, symbol) or a list.
|
|
||||||
defp parse_datum(original_source_string, source, state, parent_id) do
|
|
||||||
# Peek for multi-character tokens first
|
|
||||||
cond do
|
|
||||||
String.starts_with?(source, "m{") ->
|
|
||||||
# Returns {:ok | :error_node, ...}
|
|
||||||
parse_map_expression(original_source_string, source, state, parent_id)
|
|
||||||
|
|
||||||
# Fallback to single character dispatch
|
|
||||||
true ->
|
|
||||||
char = String.first(source)
|
|
||||||
|
|
||||||
cond do
|
|
||||||
char == "(" ->
|
|
||||||
# Returns {:ok | :error_node, ...}
|
|
||||||
parse_s_expression(original_source_string, source, state, parent_id)
|
|
||||||
|
|
||||||
char == ")" ->
|
|
||||||
# Unexpected closing parenthesis, consume 1 char for the error token ')'
|
|
||||||
# Returns {:error_node, ...}
|
|
||||||
create_error_node_and_advance(source, state, parent_id, 1, "Unexpected ')'")
|
|
||||||
|
|
||||||
char == "[" ->
|
|
||||||
# Returns {:ok | :error_node, ...}
|
|
||||||
parse_list_expression(original_source_string, source, state, parent_id)
|
|
||||||
|
|
||||||
char == "]" ->
|
|
||||||
# Unexpected closing square bracket, consume 1 char for the error token ']'
|
|
||||||
# Returns {:error_node, ...}
|
|
||||||
create_error_node_and_advance(source, state, parent_id, 1, "Unexpected ']'")
|
|
||||||
|
|
||||||
# For tuples
|
|
||||||
char == "{" ->
|
|
||||||
# Returns {:ok | :error_node, ...}
|
|
||||||
parse_tuple_expression(original_source_string, source, state, parent_id)
|
|
||||||
|
|
||||||
char == "}" ->
|
|
||||||
# Unexpected closing curly brace
|
|
||||||
# Returns {:error_node, ...}
|
|
||||||
create_error_node_and_advance(source, state, parent_id, 1, "Unexpected '}'")
|
|
||||||
|
|
||||||
char == "'" ->
|
|
||||||
# Returns {:ok | :error_node, ...}
|
|
||||||
parse_string_datum(original_source_string, source, state, parent_id)
|
|
||||||
|
|
||||||
char == ":" ->
|
|
||||||
# If the first char is ':', try to parse as an atom like :foo
|
|
||||||
case parse_atom_datum(source, state, parent_id) do
|
|
||||||
{:ok, node_id, rest, new_state} ->
|
|
||||||
{:ok, node_id, rest, new_state}
|
|
||||||
{:error, :not_atom} ->
|
|
||||||
# Failed to parse as a specific atom (e.g. ":foo").
|
|
||||||
# It could be a symbol that starts with ':' (e.g. if we allow ":" as a symbol).
|
|
||||||
# Fallback to general symbol parsing. Integer parsing won't match if it starts with ':'.
|
|
||||||
case parse_symbol_datum(source, state, parent_id) do
|
|
||||||
{:ok, node_id, rest, new_state} ->
|
|
||||||
{:ok, node_id, rest, new_state}
|
|
||||||
{:error, :not_symbol} ->
|
|
||||||
# If it started with ':' but wasn't a valid atom and also not a valid symbol
|
|
||||||
create_error_node_and_advance(source, state, parent_id, 1, "Unknown token starting with ':'")
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
true ->
|
|
||||||
# Default case for other characters
|
|
||||||
# Try parsing as an integer first
|
|
||||||
case parse_integer_datum(source, state, parent_id) do
|
|
||||||
{:ok, node_id, rest, new_state} ->
|
|
||||||
{:ok, node_id, rest, new_state}
|
|
||||||
{:error, :not_integer} ->
|
|
||||||
# Not an integer, try parsing as a symbol
|
|
||||||
case parse_symbol_datum(source, state, parent_id) do
|
|
||||||
{:ok, node_id, rest, new_state} ->
|
|
||||||
{:ok, node_id, rest, new_state}
|
|
||||||
{:error, :not_symbol} ->
|
|
||||||
# Not a symbol either. Consume 1 char for the unknown token.
|
|
||||||
create_error_node_and_advance(source, state, parent_id, 1, "Unknown token")
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end # end inner cond
|
|
||||||
end # end outer cond
|
|
||||||
end
|
|
||||||
|
|
||||||
# --- Datum Parsing Helpers --- (parse_string_datum, process_string_content)
|
|
||||||
|
|
||||||
defp parse_string_datum(_original_source_string, source, state, parent_id) do
|
|
||||||
# state is before consuming "'"
|
|
||||||
initial_state_for_token = state
|
|
||||||
strip_indent = initial_state_for_token.col - 1
|
|
||||||
|
|
||||||
# Consume opening "'"
|
|
||||||
{opening_tick, source_after_opening_tick} = String.split_at(source, 1)
|
|
||||||
|
|
||||||
case :binary.match(source_after_opening_tick, "'") do
|
|
||||||
:nomatch ->
|
|
||||||
# Unclosed string
|
|
||||||
content_segment = source_after_opening_tick
|
|
||||||
raw_token = opening_tick <> content_segment
|
|
||||||
|
|
||||||
state_at_node_end = advance_pos(initial_state_for_token, raw_token)
|
|
||||||
|
|
||||||
location = [
|
|
||||||
initial_state_for_token.offset,
|
|
||||||
initial_state_for_token.line,
|
|
||||||
initial_state_for_token.col,
|
|
||||||
state_at_node_end.offset,
|
|
||||||
state_at_node_end.line,
|
|
||||||
state_at_node_end.col
|
|
||||||
]
|
|
||||||
|
|
||||||
processed_value = process_string_content(content_segment, strip_indent)
|
|
||||||
|
|
||||||
{node_id, state_with_error_node} =
|
|
||||||
add_node(
|
|
||||||
initial_state_for_token,
|
|
||||||
parent_id,
|
|
||||||
location,
|
|
||||||
raw_token,
|
|
||||||
:literal_string,
|
|
||||||
%{value: processed_value, parsing_error: "Unclosed string literal"}
|
|
||||||
)
|
|
||||||
|
|
||||||
final_state = %{
|
|
||||||
state_with_error_node
|
|
||||||
| offset: state_at_node_end.offset,
|
|
||||||
line: state_at_node_end.line,
|
|
||||||
col: state_at_node_end.col
|
|
||||||
}
|
|
||||||
|
|
||||||
{:error_node, node_id, "Unclosed string literal", "", final_state}
|
|
||||||
|
|
||||||
# _tick_length will be 1 for "`"
|
|
||||||
{idx_closing_tick_in_segment, _tick_length} ->
|
|
||||||
content_segment =
|
|
||||||
String.slice(source_after_opening_tick, 0, idx_closing_tick_in_segment)
|
|
||||||
|
|
||||||
closing_tick = "'"
|
|
||||||
raw_token = opening_tick <> content_segment <> closing_tick
|
|
||||||
|
|
||||||
rest_of_source =
|
|
||||||
String.slice(source_after_opening_tick, (idx_closing_tick_in_segment + 1)..-1)
|
|
||||||
|
|
||||||
state_at_node_end = advance_pos(initial_state_for_token, raw_token)
|
|
||||||
|
|
||||||
location = [
|
|
||||||
initial_state_for_token.offset,
|
|
||||||
initial_state_for_token.line,
|
|
||||||
initial_state_for_token.col,
|
|
||||||
state_at_node_end.offset,
|
|
||||||
state_at_node_end.line,
|
|
||||||
state_at_node_end.col
|
|
||||||
]
|
|
||||||
|
|
||||||
processed_value = process_string_content(content_segment, strip_indent)
|
|
||||||
|
|
||||||
{new_node_id, state_with_node} =
|
|
||||||
add_node(
|
|
||||||
initial_state_for_token,
|
|
||||||
parent_id,
|
|
||||||
location,
|
|
||||||
raw_token,
|
|
||||||
:literal_string,
|
|
||||||
%{value: processed_value}
|
|
||||||
)
|
|
||||||
|
|
||||||
final_state = %{
|
|
||||||
state_with_node
|
|
||||||
| offset: state_at_node_end.offset,
|
|
||||||
line: state_at_node_end.line,
|
|
||||||
col: state_at_node_end.col
|
|
||||||
}
|
|
||||||
|
|
||||||
{:ok, new_node_id, rest_of_source, final_state}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp process_string_content(content_str, strip_indent) when strip_indent >= 0 do
|
|
||||||
lines = String.split(content_str, "\n", trim: false)
|
|
||||||
# Will always exist, even for empty content_str -> ""
|
|
||||||
first_line = List.first(lines)
|
|
||||||
|
|
||||||
rest_lines =
|
|
||||||
if length(lines) > 1 do
|
|
||||||
List.delete_at(lines, 0)
|
|
||||||
else
|
|
||||||
[]
|
|
||||||
end
|
|
||||||
|
|
||||||
processed_rest_lines =
|
|
||||||
Enum.map(rest_lines, fn line ->
|
|
||||||
current_leading_spaces_count =
|
|
||||||
Regex.run(~r/^(\s*)/, line)
|
|
||||||
|> List.first()
|
|
||||||
|> String.length()
|
|
||||||
|
|
||||||
spaces_to_remove = min(current_leading_spaces_count, strip_indent)
|
|
||||||
String.slice(line, spaces_to_remove..-1)
|
|
||||||
end)
|
|
||||||
|
|
||||||
all_processed_lines = [first_line | processed_rest_lines]
|
|
||||||
Enum.join(all_processed_lines, "\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
# --- Datum Parsing Helpers --- (parse_string_datum, process_string_content)
|
|
||||||
|
|
||||||
# (parse_string_datum remains unchanged)
|
|
||||||
|
|
||||||
defp parse_atom_datum(source, state, parent_id) do
|
|
||||||
# Atom is a colon followed by one or more non-delimiter characters.
|
|
||||||
# Delimiters are whitespace, (, ), [, ], {, }.
|
|
||||||
# The colon itself is part of the atom's raw string.
|
|
||||||
# The `atom_name_part` is what comes after the colon.
|
|
||||||
case Regex.run(~r/^:([^\s\(\)\[\]\{\}]+)/, source) do
|
|
||||||
[raw_atom_str, atom_name_part] -> # raw_atom_str is like ":foo", atom_name_part is "foo"
|
|
||||||
# The regex [^...]+ ensures atom_name_part is not empty.
|
|
||||||
rest_after_atom = String.slice(source, String.length(raw_atom_str)..-1)
|
|
||||||
start_offset = state.offset
|
|
||||||
start_line = state.line
|
|
||||||
start_col = state.col
|
|
||||||
state_after_token = advance_pos(state, raw_atom_str)
|
|
||||||
end_offset = state_after_token.offset
|
|
||||||
end_line = state_after_token.line
|
|
||||||
end_col = state_after_token.col
|
|
||||||
location = [start_offset, start_line, start_col, end_offset, end_line, end_col]
|
|
||||||
|
|
||||||
# Convert the name part (e.g., "foo") to an Elixir atom (e.g., :foo)
|
|
||||||
atom_value = String.to_atom(atom_name_part)
|
|
||||||
|
|
||||||
{new_node_id, state_with_node} =
|
|
||||||
add_node(
|
|
||||||
state,
|
|
||||||
parent_id,
|
|
||||||
location,
|
|
||||||
raw_atom_str,
|
|
||||||
:literal_atom,
|
|
||||||
%{value: atom_value}
|
|
||||||
)
|
|
||||||
|
|
||||||
final_state = %{
|
|
||||||
state_with_node
|
|
||||||
| offset: end_offset,
|
|
||||||
line: end_line,
|
|
||||||
col: end_col
|
|
||||||
}
|
|
||||||
{:ok, new_node_id, rest_after_atom, final_state}
|
|
||||||
|
|
||||||
_ -> # No match (nil or list that doesn't conform, e.g., just ":" or ": followed by space/delimiter")
|
|
||||||
{:error, :not_atom}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp parse_integer_datum(source, state, parent_id) do
|
|
||||||
case Integer.parse(source) do
|
|
||||||
{int_val, rest_after_int} ->
|
|
||||||
raw_int =
|
|
||||||
String.slice(source, 0, String.length(source) - String.length(rest_after_int))
|
|
||||||
|
|
||||||
start_offset = state.offset
|
|
||||||
start_line = state.line
|
|
||||||
start_col = state.col
|
|
||||||
state_after_token = advance_pos(state, raw_int)
|
|
||||||
end_offset = state_after_token.offset
|
|
||||||
end_line = state_after_token.line
|
|
||||||
end_col = state_after_token.col
|
|
||||||
location = [start_offset, start_line, start_col, end_offset, end_line, end_col]
|
|
||||||
|
|
||||||
{new_node_id, state_with_node} =
|
|
||||||
add_node(state, parent_id, location, raw_int, :literal_integer, %{value: int_val})
|
|
||||||
|
|
||||||
# Update state to reflect consumed token
|
|
||||||
final_state = %{state_with_node | offset: end_offset, line: end_line, col: end_col}
|
|
||||||
{:ok, new_node_id, rest_after_int, final_state}
|
|
||||||
|
|
||||||
:error ->
|
|
||||||
# Indicates failure, source and state are unchanged by this attempt
|
|
||||||
{:error, :not_integer}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
  defp parse_symbol_datum(source, state, parent_id) do
    # Regex excludes common delimiters. `m{` is handled before symbol parsing.
    case Regex.run(~r/^([^\s\(\)\[\]\{\}]+)/, source) do
      [raw_symbol | _] ->
        rest_after_symbol = String.slice(source, String.length(raw_symbol)..-1//1)

        start_offset = state.offset
        start_line = state.line
        start_col = state.col
        state_after_token = advance_pos(state, raw_symbol)
        end_offset = state_after_token.offset
        end_line = state_after_token.line
        end_col = state_after_token.col
        location = [start_offset, start_line, start_col, end_offset, end_line, end_col]

        {new_node_id, state_with_node} =
          add_node(state, parent_id, location, raw_symbol, :symbol, %{name: raw_symbol})

        # Update state to reflect the consumed token
        final_state = %{
          state_with_node
          | offset: end_offset,
            line: end_line,
            col: end_col
        }

        {:ok, new_node_id, rest_after_symbol, final_state}

      nil ->
        # Indicates failure; source and state are unchanged by this attempt
        {:error, :not_symbol}
    end
  end

  defp create_error_node_and_advance(
         source_for_token,
         state_before_token,
         parent_id,
         num_chars_for_token,
         error_message
       ) do
    {raw_token, rest_of_source} = String.split_at(source_for_token, num_chars_for_token)

    start_offset = state_before_token.offset
    start_line = state_before_token.line
    start_col = state_before_token.col

    state_after_token_consumed = advance_pos(state_before_token, raw_token)
    end_offset = state_after_token_consumed.offset
    end_line = state_after_token_consumed.line
    end_col = state_after_token_consumed.col
    location = [start_offset, start_line, start_col, end_offset, end_line, end_col]

    {error_node_id, state_with_error_node} =
      add_node(state_before_token, parent_id, location, raw_token, :unknown, %{
        parsing_error: error_message
      })

    # The state for further parsing must reflect the consumed token's position
    # and include the new error node.
    final_error_state = %{
      state_with_error_node
      | offset: end_offset,
        line: end_line,
        col: end_col
    }

    {:error_node, error_node_id, error_message, rest_of_source, final_error_state}
  end

  defp parse_s_expression(original_source_string, source, state, parent_id) do
    # Standard S-expression parsing via parse_collection
    result =
      parse_collection(
        original_source_string,
        source,
        state,
        parent_id,
        "(",
        ")",
        :s_expression,
        "Unclosed S-expression",
        "Error parsing element in S-expression. Content might be incomplete."
      )

    # After parsing, check whether it is an 'fn' expression
    case result do
      {:ok, collection_node_id, rest_after_collection, state_after_collection} ->
        collection_node = Map.get(state_after_collection.nodes, collection_node_id)

        if is_fn_expression?(collection_node, state_after_collection.nodes) do
          transformed_node =
            transform_to_lambda_expression(collection_node, state_after_collection.nodes)

          final_state = %{
            state_after_collection
            | nodes:
                Map.put(state_after_collection.nodes, transformed_node.id, transformed_node)
          }

          {:ok, transformed_node.id, rest_after_collection, final_state}
        else
          # Not an fn expression; return as is
          result
        end

      _error_or_other ->
        # Propagate errors or other results from parse_collection
        result
    end
  end

  # Helper to check whether an S-expression node is an 'fn' expression
  defp is_fn_expression?(s_expr_node, nodes_map) do
    if s_expr_node.ast_node_type == :s_expression && !Enum.empty?(s_expr_node.children) do
      first_child_id = hd(s_expr_node.children)
      first_child_node = Map.get(nodes_map, first_child_id)

      first_child_node && first_child_node.ast_node_type == :symbol &&
        first_child_node.name == "fn"
    else
      false
    end
  end

  # Helper to transform a generic S-expression node (known to be an 'fn' form)
  # into a :lambda_expression node.
  defp transform_to_lambda_expression(s_expr_node, nodes_map) do
    # s_expr_node.children = [fn_symbol_id, params_s_expr_id, body_form1_id, ...]
    # The first child has already been checked to be the 'fn' symbol.
    _fn_symbol_id = Enum.at(s_expr_node.children, 0)

    if length(s_expr_node.children) < 2 do
      %{s_expr_node | parsing_error: "Malformed 'fn' expression: missing parameters list."}
    else
      params_s_expr_id = Enum.at(s_expr_node.children, 1)
      params_s_expr_node = Map.get(nodes_map, params_s_expr_id)

      if !(params_s_expr_node && params_s_expr_node.ast_node_type == :s_expression) do
        Map.put(
          s_expr_node,
          :parsing_error,
          "Malformed 'fn' expression: parameters list is not an S-expression."
        )
      else
        # Children of the parameters S-expression, e.g. for (fn ((a integer) (b atom) atom) ...),
        # all_param_children_ids would be the IDs of [(a integer), (b atom), atom].
        all_param_children_ids = Map.get(params_s_expr_node, :children, [])

        {arg_spec_node_ids, return_type_spec_node_id} =
          if Enum.empty?(all_param_children_ids) do
            # Case: (fn () body) -> no args, nil (inferred) return type spec
            {[], nil}
          else
            # Case: (fn (arg1 type1 ... ret_type) body)
            # The last element is the return type spec; the rest are arg specs.
            args = Enum.take(all_param_children_ids, length(all_param_children_ids) - 1)
            ret_type_id = List.last(all_param_children_ids)
            {args, ret_type_id}
          end

        # Validate arg_spec_node_ids: each must be a symbol or an S-expr (param_symbol type_spec)
        all_arg_specs_valid =
          Enum.all?(arg_spec_node_ids, fn arg_id ->
            arg_node = Map.get(nodes_map, arg_id)

            case arg_node do
              # e.g. x
              %{ast_node_type: :symbol} ->
                true

              # e.g. (x integer)
              %{ast_node_type: :s_expression, children: s_children} ->
                if length(s_children) == 2 do
                  param_sym_node = Map.get(nodes_map, hd(s_children))
                  type_spec_node = Map.get(nodes_map, hd(tl(s_children)))

                  param_sym_node && param_sym_node.ast_node_type == :symbol &&
                    type_spec_node &&
                    (type_spec_node.ast_node_type == :symbol ||
                       type_spec_node.ast_node_type == :s_expression)
                else
                  # Not a valid (param_symbol type_spec) structure
                  false
                end

              # Not a symbol or valid S-expression for an arg spec
              _ ->
                false
            end
          end)

        # Validate return_type_spec_node_id: must be nil or a valid type specifier node
        return_type_spec_valid =
          if is_nil(return_type_spec_node_id) do
            # Inferred return type is valid
            true
          else
            ret_type_node = Map.get(nodes_map, return_type_spec_node_id)

            ret_type_node &&
              (ret_type_node.ast_node_type == :symbol ||
                 ret_type_node.ast_node_type == :s_expression)
          end

        if all_arg_specs_valid && return_type_spec_valid do
          # Body starts after 'fn' and the params S-expression
          body_node_ids = Enum.drop(s_expr_node.children, 2)

          Map.merge(s_expr_node, %{
            ast_node_type: :lambda_expression,
            params_s_expr_id: params_s_expr_id,
            arg_spec_node_ids: arg_spec_node_ids,
            return_type_spec_node_id: return_type_spec_node_id,
            body_node_ids: body_node_ids
          })
        else
          # Determine a more specific error message
          error_message =
            cond do
              !all_arg_specs_valid ->
                "Malformed 'fn' expression: invalid argument specification(s)."

              !return_type_spec_valid ->
                "Malformed 'fn' expression: invalid return type specification."

              true ->
                # Generic fallback
                "Malformed 'fn' expression."
            end

          Map.put(s_expr_node, :parsing_error, error_message)
        end
      end
    end
  end

  defp parse_list_expression(original_source_string, source, state, parent_id) do
    parse_collection(
      original_source_string,
      source,
      state,
      parent_id,
      "[",
      "]",
      :list_expression,
      "Unclosed list",
      "Error parsing element in list. Content might be incomplete."
    )
  end

  defp parse_map_expression(original_source_string, source, state, parent_id) do
    parse_collection(
      original_source_string,
      source,
      state,
      parent_id,
      # Opening token
      "m{",
      # Closing token
      "}",
      :map_expression,
      "Unclosed map",
      "Error parsing element in map. Content might be incomplete."
    )
  end

  defp parse_tuple_expression(original_source_string, source, state, parent_id) do
    parse_collection(
      original_source_string,
      source,
      state,
      parent_id,
      "{",
      "}",
      :tuple_expression,
      "Unclosed tuple",
      "Error parsing element in tuple. Content might be incomplete."
    )
  end

  defp parse_collection(
         original_source_string,
         source,
         state,
         parent_id,
         open_char_str,
         # Used by parse_collection_elements
         close_char_str,
         ast_node_type,
         # Used by parse_collection_elements
         unclosed_error_msg,
         # Used by parse_collection_elements
         element_error_msg
       ) do
    # Consume the opening token (e.g. '(', '[', 'm{')
    collection_start_offset = state.offset
    collection_start_line = state.line
    collection_start_col = state.col
    open_char_len = String.length(open_char_str)
    {_opening_token, rest_after_opening_token} = String.split_at(source, open_char_len)
    current_state = advance_pos(state, open_char_str)

    collection_node_id = System.unique_integer([:monotonic, :positive])

    prelim_collection_node = %{
      id: collection_node_id,
      type_id: nil,
      parent_id: parent_id,
      file: current_state.file_name,
      # End position TBD
      location: [collection_start_offset, collection_start_line, collection_start_col, 0, 0, 0],
      # TBD
      raw_string: "",
      ast_node_type: ast_node_type,
      children: [],
      parsing_error: nil
    }

    current_state_with_prelim_node = %{
      current_state
      | nodes: Map.put(current_state.nodes, collection_node_id, prelim_collection_node)
    }

    collection_start_pos_for_children =
      {collection_start_offset, collection_start_line, collection_start_col}

    # Pass all necessary params to the generalized element parser
    result =
      parse_collection_elements(
        original_source_string,
        rest_after_opening_token,
        current_state_with_prelim_node,
        collection_node_id,
        [],
        collection_start_pos_for_children,
        close_char_str,
        unclosed_error_msg,
        element_error_msg
      )

    # Adapt the result to {:ok, node_id, ...} or {:error_node, node_id, ...}
    case result do
      {:ok, returned_collection_node_id, rest, state_after_elements} ->
        {:ok, returned_collection_node_id, rest, state_after_elements}

      {:error, reason, rest, state_after_elements} ->
        # The collection_node_id is the ID of the node that carries the error.
        # This 'reason' is typically for unclosed collections or fatal element errors.
        {:error_node, collection_node_id, reason, rest, state_after_elements}
    end
  end

  # Generalized from parse_s_expression_elements
  defp parse_collection_elements(
         original_source_string,
         source,
         state,
         collection_node_id,
         children_ids_acc,
         collection_start_pos_tuple,
         # New parameters for generalization:
         # e.g., ")" or "]"
         closing_char_str,
         # e.g., "Unclosed S-expression"
         unclosed_error_message,
         # e.g., "Error parsing element in S-expression..." (currently unused)
         element_error_message
       ) do
    case skip_whitespace(source, state) do
      {:eos, current_state_at_eos} ->
        # Unclosed collection
        collection_node = Map.get(current_state_at_eos.nodes, collection_node_id)
        start_offset = elem(collection_start_pos_tuple, 0)
        end_offset = current_state_at_eos.offset

        actual_raw_string =
          String.slice(original_source_string, start_offset, end_offset - start_offset)

        updated_collection_node = %{
          collection_node
          | parsing_error: unclosed_error_message,
            children: Enum.reverse(children_ids_acc),
            location: [
              start_offset,
              elem(collection_start_pos_tuple, 1),
              elem(collection_start_pos_tuple, 2),
              end_offset,
              current_state_at_eos.line,
              current_state_at_eos.col
            ],
            raw_string: actual_raw_string
        }

        final_state = %{
          current_state_at_eos
          | nodes:
              Map.put(current_state_at_eos.nodes, collection_node_id, updated_collection_node)
        }

        # This error is for the collection itself being unclosed.
        # The collection_node_id is implicitly the ID of this error node.
        {:error, unclosed_error_message, "", final_state}

      {:ok, remaining_source, current_state} ->
        # Check whether the remaining source starts with the closing token string
        if String.starts_with?(remaining_source, closing_char_str) do
          # End of collection
          closing_char_len = String.length(closing_char_str)

          {_closing_token, rest_after_closing_token} =
            String.split_at(remaining_source, closing_char_len)

          final_collection_state = advance_pos(current_state, closing_char_str)
          collection_node = Map.get(final_collection_state.nodes, collection_node_id)

          coll_final_start_offset = elem(collection_start_pos_tuple, 0)
          coll_final_start_line = elem(collection_start_pos_tuple, 1)
          coll_final_start_col = elem(collection_start_pos_tuple, 2)
          coll_final_end_offset = final_collection_state.offset
          coll_final_end_line = final_collection_state.line
          coll_final_end_col = final_collection_state.col

          actual_raw_string =
            String.slice(
              original_source_string,
              coll_final_start_offset,
              coll_final_end_offset - coll_final_start_offset
            )

          updated_collection_node = %{
            collection_node
            | children: Enum.reverse(children_ids_acc),
              location: [
                coll_final_start_offset,
                coll_final_start_line,
                coll_final_start_col,
                coll_final_end_offset,
                coll_final_end_line,
                coll_final_end_col
              ],
              raw_string: actual_raw_string
          }

          final_state_with_collection = %{
            final_collection_state
            | nodes:
                Map.put(
                  final_collection_state.nodes,
                  collection_node_id,
                  updated_collection_node
                )
          }

          {:ok, collection_node_id, rest_after_closing_token, final_state_with_collection}
        else
          # Parse an element
          case parse_datum(
                 original_source_string,
                 remaining_source,
                 current_state,
                 # parent_id for the element
                 collection_node_id
               ) do
            {:ok, child_node_id, next_source_after_elem, next_state_after_elem} ->
              parse_collection_elements(
                original_source_string,
                next_source_after_elem,
                next_state_after_elem,
                collection_node_id,
                # Add the successful child's ID
                [child_node_id | children_ids_acc],
                collection_start_pos_tuple,
                closing_char_str,
                unclosed_error_message,
                element_error_message
              )

            {:error_node, child_error_node_id, _child_reason, next_source_after_elem,
             next_state_after_elem} ->
              # An error node was created for the child element. Add its ID and continue.
              parse_collection_elements(
                original_source_string,
                next_source_after_elem,
                next_state_after_elem,
                collection_node_id,
                # Add the error child's ID
                [child_error_node_id | children_ids_acc],
                collection_start_pos_tuple,
                closing_char_str,
                unclosed_error_message,
                element_error_message
              )

              # No other return shapes are expected from parse_datum: it either succeeds
              # or creates an error node. If parse_datum could fail without creating a
              # node and without consuming input, this loop could fail to terminate.
          end
        end
    end
  end

  # --- Utility Functions ---

  # Note: the `extra_fields` argument was changed from optional to required,
  # as the default value was never used (per compiler warnings).
  defp add_node(state, parent_id, location, raw_string, ast_node_type, extra_fields) do
    node_id = System.unique_integer([:monotonic, :positive])

    node =
      %{
        id: node_id,
        type_id: nil,
        parent_id: parent_id,
        file: state.file_name,
        # [start_offset, start_line, start_col, end_offset, end_line, end_col]
        location: location,
        raw_string: raw_string,
        ast_node_type: ast_node_type
      }
      |> Map.merge(extra_fields)

    {node_id, %{state | nodes: Map.put(state.nodes, node_id, node)}}
  end

  defp skip_whitespace(source, state = %__MODULE__{offset: o, line: l, col: c}) do
    whitespace_match = Regex.run(~r/^\s+/, source)

    if whitespace_match do
      [ws | _] = whitespace_match
      new_offset = o + String.length(ws)
      {new_line, new_col} = calculate_new_line_col(ws, l, c)
      remaining_source = String.slice(source, String.length(ws)..-1//1)
      {:ok, remaining_source, %{state | offset: new_offset, line: new_line, col: new_col}}
    else
      if String.length(source) == 0 do
        {:eos, state}
      else
        # No leading whitespace
        {:ok, source, state}
      end
    end
  end

  defp calculate_new_line_col(string_segment, start_line, start_col) do
    string_segment
    |> String.codepoints()
    |> Enum.reduce({start_line, start_col}, fn char, {line, col} ->
      if char == "\n" do
        {line + 1, 1}
      else
        {line, col + 1}
      end
    end)
  end

  defp advance_pos(state = %__MODULE__{offset: o, line: l, col: c}, consumed_string) do
    new_offset = o + String.length(consumed_string)
    {new_line, new_col} = calculate_new_line_col(consumed_string, l, c)
    %{state | offset: new_offset, line: new_line, col: new_col}
  end
end

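As a sanity check on the position bookkeeping above (a hypothetical walk-through, not part of the module): `advance_pos` adds the consumed length to `offset`, while `calculate_new_line_col` resets `col` to 1 after every newline.

```elixir
# Hypothetical walk-through of the arithmetic used by advance_pos/2.
# Starting at line 1, col 1, consuming "ab\ncd" should land on line 2, col 3
# (the column just past "cd"), with the offset advanced by 5 codepoints.
segment = "ab\ncd"

{line, col} =
  segment
  |> String.codepoints()
  |> Enum.reduce({1, 1}, fn char, {line, col} ->
    if char == "\n", do: {line + 1, 1}, else: {line, col + 1}
  end)

IO.inspect({String.length(segment), line, col})
# => {5, 2, 3}
```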
@ -1,51 +0,0 @@
defmodule Tdd do
  @moduledoc """
  Ternary decision diagram, used for representing set-theoretic types, akin to CDuce.
  There are two kinds of nodes:
  - terminal nodes (true, false)
  - variable nodes

  Variable nodes consist of:
  - the variable being tested
  - yes: id of the node if the result of the test is true
  - no: id of the node if the result of the test is false
  - dc: id of the node if the result of the test is irrelevant for the current operation

  The TDD needs to be ordered and reduced (as in a ROBDD):
  - 'ordered' if different variables appear in the same order on all paths from the root;
  - 'reduced' if the following two rules have been applied to its graph:
    - merge any isomorphic subgraphs;
    - eliminate any node whose children are isomorphic.

  Working notes:
  - Structure of the ordered variables:
    I'm thinking of structuring all possible types inside one TDD, in contrast to CDuce,
    which uses a `descr` structure containing several TDDs (one per domain: ints, atoms,
    functions, etc.), where `descr` is the union between them.
    For this, I need to come up with a variable structure that can be ordered.
    My set types will need to represent types like atoms, strings, ints, maps, tuples,
    functions, kinds? Moreover, those types themselves consist of smaller subsets, like:
    - int < 10
    - int in [1, 2, 3]
    - string > "prefix_"
    - atom == false
    - atom == false or atom == true or atom == nil
    - map == %{"id" => string} and %{string => any | nil}
    - etc.
    I don't yet know how to represent them and make them ordered.
  - Node cache:
    I don't yet know what it should contain; I suspect ids of nodes (TDDs) after reduction.
    That way a comparison between two types is just a pointer (id) check in the node cache.
    But I'm not yet sure.
  - Reduction rules: not sure how to approach them.
  """

  def node(elem, yes, no, dc = _dont_care) do
  end

  def sum(one, two) do
  end

  def intersect(one, two) do
  end

  def negate(one, two) do
  end
end
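The node cache and reduction rules in the working notes above are usually realized via hash-consing. The following is a minimal illustrative sketch only; the tuple layout, the map-based cache, and the `mk_node` name are assumptions, not this module's actual design:

```elixir
# Minimal hash-consing sketch for the two reduction rules (illustrative only).
# A node is {var, yes, no, dc}; ids are integers handed out by the cache.
defmodule TddSketch do
  # Returns {node_id, cache}. The cache maps a node tuple to a unique id, so
  # structurally equal subgraphs share one id (merging isomorphic subgraphs).
  def mk_node(var, yes, no, dc, cache) do
    # Elimination rule: if all branches agree, the test is redundant.
    if yes == no and no == dc do
      {yes, cache}
    else
      key = {var, yes, no, dc}

      case Map.fetch(cache, key) do
        {:ok, id} ->
          {id, cache}

        :error ->
          id = map_size(cache)
          {id, Map.put(cache, key, id)}
      end
    end
  end
end

{id1, cache} = TddSketch.mk_node(:is_atom, :true_t, :false_t, :false_t, %{})
{id2, _cache} = TddSketch.mk_node(:is_atom, :true_t, :false_t, :false_t, cache)
# Same structure, same id: type equality becomes an id comparison.
IO.inspect(id1 == id2)
# => true
```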
817 lib/til/type.ex
@ -1,817 +0,0 @@
defmodule Tilly.Type do
  @moduledoc """
  Core type system definitions for Tilly — a Lisp that transpiles to Elixir,
  using set-theoretic types represented as Ternary Decision Diagrams (TDDs).

  Supports:
  - Set-theoretic types (union, intersection, negation)
  - Structural polymorphism with `forall`
  - Type constraints (e.g., Enumerable(~a))
  - Structural map types
  """

  # === Monotype TDD Representation ===

  defmodule TDD do
    @moduledoc """
    Represents a ternary decision diagram node for types.
    """

    defstruct [:decision, :yes, :no, :maybe]

    @type t :: %__MODULE__{
            decision: Tilly.Type.Decision.t(),
            yes: TDD.t() | :any | :none,
            no: TDD.t() | :any | :none,
            maybe: TDD.t() | :any | :none
          }
  end

  # === Type Variable ===

  defmodule Var do
    @moduledoc """
    Represents a type variable in a polymorphic type.
    """

    defstruct [:name, constraints: []]

    @type t :: %__MODULE__{
            name: String.t(),
            constraints: [Tilly.Type.Constraint.t()]
          }
  end

  # === Structural Map Type ===

  defmodule TDDMap do
    @moduledoc """
    Structural representation of a map type, with per-key typing and optional openness.
    """

    defstruct fields: [], rest: nil

    @type t :: %__MODULE__{
            fields: [{TDD.t(), TDD.t()}],
            rest: TDD.t() | nil
          }
  end

  @doc """
  Checks whether t1 is a subtype of t2 under the current substitution.
  t1 <: t2 iff t1 & (not t2) == None
  """
  def is_subtype(raw_t1, raw_t2, sub) do
    # Apply the substitution before comparing
    t1 = tdd_substitute(raw_t1, sub)
    t2 = tdd_substitute(raw_t2, sub)

    # Handle edge cases with Any and None for robustness
    cond do
      # None is a subtype of everything
      t1 == tdd_none() ->
        true

      # Everything is a subtype of Any
      t2 == tdd_any() ->
        true

      # Any is not a subtype of a specific type (unless that type is also Any)
      t1 == tdd_any() and t2 != tdd_any() ->
        false

      # A non-None type cannot be a subtype of None
      t2 == tdd_none() and t1 != tdd_none() ->
        false

      true ->
        # The core set-theoretic check: t1 \ t2 == None
        tdd_diff(t1, t2) == tdd_none()

        # Alternatively: tdd_and(t1, t2) == t1, but that can be tricky with
        # non-canonical TDDs; the difference check is more direct for subtyping.
    end
  end

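The difference used in the final branch is the standard set-theoretic one. Assuming the existing `tdd_and`/`tdd_not` combinators referenced elsewhere in this module, it could be defined as a one-liner (a sketch; the real `tdd_diff` may well be implemented directly for efficiency):

```elixir
# Set-theoretic difference in terms of intersection and negation (sketch).
def tdd_diff(t1, t2), do: tdd_and(t1, tdd_not(t2))
```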
  # === Type Decisions (Predicates) ===

  defmodule Decision do
    @moduledoc """
    A type-level decision predicate used in a TDD node.
    """

    @type t ::
            :is_atom
            | :is_integer
            | :is_float
            | :is_binary
            | :is_list
            | :is_tuple
            | :is_map
            | :is_function
            | :is_pid
            | :is_reference
            | {:literal, term()}
            | {:tuple_len, pos_integer()}
            | {:key, TDD.t()}
            | {:has_struct_key, atom()}
            | {:var, String.t()}
  end

  # === Type Constraints (structural predicates) ===

  defmodule Constraint do
    @moduledoc """
    Represents a structural constraint on a type variable,
    similar to a typeclass in Haskell or a trait in Rust, but structural.
    """

    defstruct [:kind, :arg]

    @type kind ::
            :enumerable
            | :collectable
            | :struct_with_keys
            | :custom

    @type t :: %__MODULE__{
            kind: kind(),
            arg: String.t() | TDD.t() | any()
          }
  end

  # === Polymorphic Types (forall + constraints) ===

  defmodule PolyTDD do
    @moduledoc """
    Represents a polymorphic type with optional structural constraints.
    """

    defstruct [:vars, :body]

    @type t :: %__MODULE__{
            vars: [Var.t()],
            body: TDD.t()
          }
  end

  # === Constants for base types ===

  @doc "A TDD representing the universal type (any value)"
  def tdd_any, do: :any

  @doc "A TDD representing the empty type (no values)"
  def tdd_none, do: :none

  @doc "Creates a TDD for a literal value"
  def tdd_literal(value) do
    %TDD{
      decision: {:literal, value},
      yes: :any,
      no: :none,
      maybe: :none
    }
  end

  @doc "Creates a TDD for a base predicate (e.g., is_atom)"
  def tdd_pred(pred) when is_atom(pred) do
    %TDD{
      decision: pred,
      yes: :any,
      no: :none,
      maybe: :none
    }
  end

  @doc "Creates a TDD for a type variable reference"
  def tdd_var(name) when is_binary(name) do
    %TDD{
      decision: {:var, name},
      yes: :any,
      no: :none,
      maybe: :none
    }
  end

  @doc """
  Performs type variable substitution in a TDD,
  replacing variables found in the given `env` map.
  """
  def tdd_substitute(:any, _env), do: :any
  def tdd_substitute(:none, _env), do: :none

  def tdd_substitute(%TDD{decision: {:var, name}}, env) when is_map(env) do
    Map.get(env, name, %TDD{decision: {:var, name}, yes: :any, no: :none, maybe: :none})
  end

  def tdd_substitute(%TDD{} = tdd, env) do
    %TDD{
      decision: tdd.decision,
      yes: tdd_substitute(tdd.yes, env),
      no: tdd_substitute(tdd.no, env),
      maybe: tdd_substitute(tdd.maybe, env)
    }
  end

  @doc """
  Performs substitution in a polymorphic type, replacing all vars
  in `vars` with the given TDDs from `env`.
  """
  def poly_substitute(%PolyTDD{vars: vars, body: body}, env) do
    var_names = Enum.map(vars, & &1.name)
    restricted_env = Map.take(env, var_names)
    tdd_substitute(body, restricted_env)
  end

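A short usage sketch of the substitution functions above (assuming the constructors and structs defined in this module are in scope):

```elixir
# Substituting the variable "~a" with the is_integer predicate TDD.
int_t = tdd_pred(:is_integer)
poly = %PolyTDD{vars: [%Var{name: "~a"}], body: tdd_var("~a")}
poly_substitute(poly, %{"~a" => int_t})
# returns int_t: %TDD{decision: :is_integer, yes: :any, no: :none, maybe: :none}
```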
  # === Constraints ===

  @doc """
  Checks whether a TDD satisfies a built-in structural constraint,
  such as Enumerable or String.Chars.
  """
  def satisfies_constraint?(tdd, %Constraint{kind: :enumerable}) do
    # Kinds are Decision predicates (:is_list etc.), matching `pred` below.
    tdd_is_of_kind?(tdd, [:is_list, :is_map, :is_binary])
  end

  def satisfies_constraint?(tdd, %Constraint{kind: :string_chars}) do
    tdd_is_of_kind?(tdd, [:is_binary, :is_atom])
  end

  def satisfies_constraint?(_tdd, %Constraint{kind: :custom}) do
    raise "Custom constraints not implemented yet"
  end

  # Default fallback: constraint not recognized
  def satisfies_constraint?(_tdd, %Constraint{kind: kind}) do
    raise ArgumentError, "Unknown constraint kind: #{inspect(kind)}"
  end

  @doc """
  Checks whether a TDD is semantically a subtype of any of the specified kinds.
  Used to approximate constraint satisfaction structurally.
  """
  def tdd_is_of_kind?(:any, _), do: true
  def tdd_is_of_kind?(:none, _), do: false

  def tdd_is_of_kind?(%TDD{decision: pred} = tdd, kinds) do
    if pred in kinds do
      # The decision directly confirms the kind
      tdd.yes != :none
    else
      # Otherwise we conservatively say "no" unless the TDD is union-like
      false
    end
  end

  # === Decision ===
  defmodule Decision do
    @moduledoc """
    A type-level decision predicate used in a TDD node.
    """

    @type t ::
            :is_atom
            | :is_integer
            | :is_float
            | :is_binary
            | :is_list
            | :is_tuple
            | :is_map
            # General "is a function"
            | :is_function
            | :is_pid
            | :is_reference
            | {:literal, term()}
            | {:tuple_len, pos_integer()}
            # Type of a map key (used in structural map checks)
            | {:key, TDD.t()}
            | {:has_struct_key, atom()}
            # A type variable name, e.g., "~a"
            | {:var, String.t()}
            # New
            | {:is_function_sig, param_types :: [TDD.t()], return_type :: TDD.t()}
  end

@doc "Creates a TDD for a specific function signature"
|
|
||||||
def tdd_function_sig(param_types, return_type)
|
|
||||||
when is_list(param_types) and (is_struct(return_type, TDD) or return_type in [:any, :none]) do
|
|
||||||
%TDD{
|
|
||||||
decision: {:is_function_sig, param_types, return_type},
|
|
||||||
# A value matches if it's a function of this signature
|
|
||||||
yes: :any,
|
|
||||||
no: :none,
|
|
||||||
# Maybe it's some other function
|
|
||||||
maybe: %TDD{decision: :is_function, yes: :any, no: :none, maybe: :none}
|
|
||||||
}
|
|
||||||
end
|
|
||||||
|
|
||||||
# ... (existing tdd_or, tdd_and, tdd_not, tdd_diff) ...
|
|
||||||
|
|
||||||
  @doc """
  Performs type variable substitution in a TDD,
  replacing variables found in the given `env` map (var_name -> TDD).
  """
  def tdd_substitute(:any, _env), do: :any
  def tdd_substitute(:none, _env), do: :none

  def tdd_substitute(%TDD{decision: {:var, name}} = tdd, env) when is_map(env) do
    # If var `name` is in env, substitute it. Otherwise, keep the var.
    Map.get(env, name, tdd)
  end

  def tdd_substitute(%TDD{decision: {:is_function_sig, params, ret_type}} = tdd, env) do
    # Substitute within the signature parts
    substituted_params = Enum.map(params, &tdd_substitute(&1, env))
    substituted_ret_type = tdd_substitute(ret_type, env)

    # Reconstruct the TDD node, keeping the yes/no/maybe branches as they are fixed
    # for this predicate. Note: if canonicalization (mk_tdd) were used, this would
    # go through it.
    %TDD{tdd | decision: {:is_function_sig, substituted_params, substituted_ret_type}}
  end

  def tdd_substitute(%TDD{decision: {:key, key_type_tdd}} = tdd, env) do
    # Substitute within the key type TDD
    substituted_key_type = tdd_substitute(key_type_tdd, env)
    %TDD{tdd | decision: {:key, substituted_key_type}}
  end

  # Generic case for other decisions: substitute in the branches
  def tdd_substitute(%TDD{} = tdd, env) do
    %TDD{
      # Assume the decision itself contains no substitutable vars unless handled above
      decision: tdd.decision,
      yes: tdd_substitute(tdd.yes, env),
      no: tdd_substitute(tdd.no, env),
      maybe: tdd_substitute(tdd.maybe, env)
    }
  end
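
  # Illustrative sketch (hypothetical values, using the %TDD{} fields assumed above):
  # substituting a bound variable returns its binding, while unbound variables are
  # left untouched.
  #
  #     int_tdd = %TDD{decision: :is_integer, yes: :any, no: :none, maybe: :none}
  #     sub = %{"~a" => int_tdd}
  #     tdd_substitute(%TDD{decision: {:var, "~a"}, yes: :any, no: :none, maybe: :none}, sub)
  #     #=> int_tdd
  #     tdd_substitute(%TDD{decision: {:var, "~b"}, yes: :any, no: :none, maybe: :none}, sub)
  #     #=> the original {:var, "~b"} node, unchanged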

  @doc """
  Performs substitution in a polymorphic type's body,
  using the provided `env` (var_name -> TDD).
  This substitutes *free* variables in the PolyTDD's body, not its quantified variables.
  To instantiate quantified variables, use `Tilly.Inference.instantiate/2`.
  """
  def poly_substitute_free_vars(%PolyTDD{vars: _quantified_vars, body: body} = poly_tdd, env) do
    # We only substitute variables in the body that are NOT the quantified ones.
    # `env` should ideally not contain keys that are names of this PolyTDD's
    # quantified variables. For simplicity, if `env` has a quantified var name, it is
    # shadowed by the quantified var itself; a more robust approach would filter
    # `env` based on the quantified vars.
    substituted_body = tdd_substitute(body, env)
    %PolyTDD{poly_tdd | body: substituted_body}
  end

  @doc "Finds all free type variable names in a TDD."
  def free_vars(:any), do: MapSet.new()
  def free_vars(:none), do: MapSet.new()

  def free_vars(%TDD{decision: {:var, name}}) do
    MapSet.new([name])
  end

  def free_vars(%TDD{decision: {:is_function_sig, params, ret_type}}) do
    param_fvs = params |> Enum.map(&free_vars/1) |> Enum.reduce(MapSet.new(), &MapSet.union/2)
    ret_fvs = free_vars(ret_type)
    MapSet.union(param_fvs, ret_fvs)
    # Note: the yes/no/maybe branches of this node are typically :any/:none or simple
    # predicates, but if they could contain vars, they would need to be included.
    # The current tdd_function_sig has fixed branches.
  end

  def free_vars(%TDD{decision: {:key, key_type_tdd}}) do
    free_vars(key_type_tdd)
    # Similar note about the yes/no/maybe branches.
  end

  def free_vars(%TDD{yes: yes, no: no, maybe: maybe}) do
    MapSet.union(free_vars(yes), MapSet.union(free_vars(no), free_vars(maybe)))
  end

  # Helper for PolyTDD free vars (vars free in the body that are not quantified)
  def free_vars_in_poly_tdd_body(%PolyTDD{vars: quantified_vars_list, body: body}) do
    quantified_names = quantified_vars_list |> Enum.map(& &1.name) |> MapSet.new()
    body_fvs = free_vars(body)
    MapSet.difference(body_fvs, quantified_names)
  end
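
  # Illustrative sketch (hypothetical values): free variables are collected from the
  # parts of the decision, e.g. from both the parameters and the return type of a
  # function signature.
  #
  #     a = %TDD{decision: {:var, "~a"}, yes: :any, no: :none, maybe: :none}
  #     b = %TDD{decision: {:var, "~b"}, yes: :any, no: :none, maybe: :none}
  #     free_vars(%TDD{decision: {:is_function_sig, [a], b}, yes: :any, no: :none, maybe: :none})
  #     #=> MapSet.new(["~a", "~b"])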
end

defmodule Tilly.Inference do
  alias Tilly.Type
  alias Tilly.Type.{TDD, Var, PolyTDD, Constraint}

  @typedoc "Type environment: maps variable names (atoms) to their types (TDD or PolyTDD)"
  @type type_env :: %{atom() => TDD.t() | PolyTDD.t()}

  @typedoc "Substitution map: maps type variable names (strings) to TDDs"
  @type substitution :: %{String.t() => TDD.t()}

  @typedoc "Constraints collected during inference: {type_var_name, constraint}"
  @type collected_constraints :: [{String.t(), Constraint.t()}]

  @typedoc """
  Result of inference for an expression:
  - inferred_type: the TDD or PolyTDD type of the expression.
  - var_counter: the updated counter for generating fresh type variables.
  - substitution: the accumulated substitution map.
  - constraints: constraints that still need to be satisfied.
  """
  @type infer_result ::
          {inferred_type :: TDD.t() | PolyTDD.t(), var_counter :: non_neg_integer(),
           substitution :: substitution(), constraints :: collected_constraints()}

  # --- Helper for Fresh Type Variables ---
  defmodule FreshVar do
    @doc "Generates a new type variable name and increments the counter."
    @spec next(non_neg_integer()) :: {String.t(), non_neg_integer()}
    def next(counter) do
      new_var_name = "~t" <> Integer.to_string(counter)
      {new_var_name, counter + 1}
    end
  end
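
  # For example:
  #
  #     FreshVar.next(0)   #=> {"~t0", 1}
  #     FreshVar.next(41)  #=> {"~t41", 42}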

  # --- Core Inference Function ---

  @doc "Infers the type of a Tilly expression."
  @spec infer(
          expr :: term(),
          env :: type_env(),
          var_counter :: non_neg_integer(),
          sub :: substitution()
        ) :: infer_result()
  def infer({:lit, val}, _env, var_counter, sub) do
    type =
      cond do
        # These could be more precise with Type.tdd_literal(val)
        is_atom(val) -> Type.tdd_pred(:is_atom)
        is_integer(val) -> Type.tdd_pred(:is_integer)
        is_float(val) -> Type.tdd_pred(:is_float)
        is_binary(val) -> Type.tdd_pred(:is_binary)
        # Fallback for other kinds of literals; add more as needed
        true -> Type.tdd_literal(val)
      end

    {type, var_counter, sub, []}
  end
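
  # Illustrative sketch (assuming Type.tdd_pred/1 exists as referenced above):
  # a literal contributes its predicate type and leaves the counter, substitution,
  # and constraint list untouched.
  #
  #     infer({:lit, 42}, %{}, 0, %{})
  #     #=> {Type.tdd_pred(:is_integer), 0, %{}, []}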

  def infer({:var, name}, env, var_counter, sub) when is_atom(name) do
    case Map.get(env, name) do
      nil ->
        raise "Unbound variable: #{name}"

      %TDD{} = tdd_type ->
        {Type.tdd_substitute(tdd_type, sub), var_counter, sub, []}

      %PolyTDD{} = poly_type ->
        {instantiated_type, new_var_counter, new_constraints} =
          instantiate(poly_type, var_counter)

        # Apply the current substitution to the instantiated type
        # (in case fresh vars from instantiation are already in `sub` from elsewhere)
        final_type = Type.tdd_substitute(instantiated_type, sub)
        {final_type, new_var_counter, sub, new_constraints}
    end
  end

  def infer({:fn, param_atoms, body_expr}, env, var_counter, sub) when is_list(param_atoms) do
    # 1. Create fresh type variables for the parameters
    {param_tdd_vars, extended_env, counter_after_params} =
      Enum.reduce(param_atoms, {[], env, var_counter}, fn param_name, {vars_acc, env_acc, c_acc} ->
        {fresh_var_name, next_c} = FreshVar.next(c_acc)
        param_tdd_var = Type.tdd_var(fresh_var_name)
        {[param_tdd_var | vars_acc], Map.put(env_acc, param_name, param_tdd_var), next_c}
      end)

    param_types = Enum.reverse(param_tdd_vars)

    # 2. Infer the body with the extended environment and current substitution
    {body_type_raw, counter_after_body, sub_after_body, body_constraints} =
      infer(body_expr, extended_env, counter_after_params, sub)

    # 3. Apply the substitution from body inference to the parameter types,
    # because unification within the body may refine what the param types can be.
    final_param_types = Enum.map(param_types, &Type.tdd_substitute(&1, sub_after_body))
    # Usually already applied during inference of the body
    final_body_type = Type.tdd_substitute(body_type_raw, sub_after_body)

    # 4. Construct the function type
    fun_type = Type.tdd_function_sig(final_param_types, final_body_type)
    {fun_type, counter_after_body, sub_after_body, body_constraints}
  end

  def infer({:app, fun_expr, arg_exprs}, env, var_counter, sub) when is_list(arg_exprs) do
    # 1. Infer the function expression
    {fun_type_raw, c1, s1, fun_constraints} = infer(fun_expr, env, var_counter, sub)
    # Apply the substitutions accumulated so far
    fun_type_template = Type.tdd_substitute(fun_type_raw, s1)

    # 2. Infer the argument expressions.
    # Enum.map_reduce/3 returns {mapped_list, final_acc}, so destructure accordingly.
    {arg_types_raw, {c2, s2}} =
      Enum.map_reduce(arg_exprs, {c1, s1}, fn arg_expr, {c_acc, s_acc} ->
        {arg_t, next_c, next_s, arg_c} = infer(arg_expr, env, c_acc, s_acc)
        # Pass along the type together with its constraints
        {{arg_t, arg_c}, {next_c, next_s}}
      end)

    actual_arg_types = Enum.map(arg_types_raw, fn {t, _cs} -> Type.tdd_substitute(t, s2) end)
    all_arg_constraints = Enum.flat_map(arg_types_raw, fn {_t, cs} -> cs end) ++ fun_constraints

    # 3. Unify/match the function type with the arguments.
    # `fun_type_template` is the type of the function (e.g., {:var, "~f"} or an actual
    # fn sig); `s2` is the current global substitution.
    {return_type_final, c3, s3, unification_constraints} =
      unify_apply(fun_type_template, actual_arg_types, c2, s2)

    {return_type_final, c3, s3, all_arg_constraints ++ unification_constraints}
  end

  def infer({:let, [{var_name, val_expr}], body_expr}, env, var_counter, sub) do
    # 1. Infer the type of the value expression
    {val_type_raw, c1, s1, val_constraints} = infer(val_expr, env, var_counter, sub)

    # 2. Apply the current substitution and generalize the value's type.
    # Generalization happens *before* adding to the env, over variables free in
    # val_type but not in the env. The substitution `s1` contains all refinements
    # up to this point.
    val_type_substituted = Type.tdd_substitute(val_type_raw, s1)
    generalized_val_type = generalize(val_type_substituted, env, s1)

    # 3. Extend the environment and infer the body (with s1 as the starting substitution)
    extended_env = Map.put(env, var_name, generalized_val_type)
    {body_type_raw, c2, s2, body_constraints} = infer(body_expr, extended_env, c1, s1)

    # The final substitution s2 incorporates s1 plus any changes from the body;
    # the body type has already been substituted by s2.
    {body_type_raw, c2, s2, val_constraints ++ body_constraints}
  end

  # --- Polymorphism: Instantiation and Generalization ---

  @doc "Instantiates a polymorphic type scheme by replacing quantified variables with fresh ones."
  def instantiate(%PolyTDD{vars: poly_vars_list, body: body_tdd}, var_counter) do
    # Build a substitution map from the quantified vars to fresh vars
    {substitution_to_fresh, new_var_counter, new_constraints} =
      Enum.reduce(poly_vars_list, {%{}, var_counter, []}, fn
        %Var{name: q_name, constraints: q_constraints}, {sub_acc, c_acc, cons_acc} ->
          {fresh_name, next_c} = FreshVar.next(c_acc)
          fresh_tdd_var = Type.tdd_var(fresh_name)
          # Re-tie each constraint of the quantified var to the fresh var's name
          fresh_var_constraints = Enum.map(q_constraints, &%Constraint{&1 | arg: fresh_name})
          {Map.put(sub_acc, q_name, fresh_tdd_var), next_c, cons_acc ++ fresh_var_constraints}
      end)

    instantiated_body = Type.tdd_substitute(body_tdd, substitution_to_fresh)
    {instantiated_body, new_var_counter, new_constraints}
  end
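
  # Illustrative sketch (hypothetical values, assuming Type.tdd_var/1 and
  # Type.tdd_function_sig/2 as used above): instantiating `forall ~a. ~a -> ~a`
  # at counter 0 renames ~a to the fresh variable ~t0 throughout the body.
  #
  #     id_sig = Type.tdd_function_sig([Type.tdd_var("~a")], Type.tdd_var("~a"))
  #     poly = %PolyTDD{vars: [%Var{name: "~a", constraints: []}], body: id_sig}
  #     instantiate(poly, 0)
  #     #=> {Type.tdd_function_sig([Type.tdd_var("~t0")], Type.tdd_var("~t0")), 1, []}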

  @doc "Generalizes a TDD type into a PolyTDD if it has free variables not bound in the environment."
  def generalize(type_tdd, env, current_sub) do
    # Apply the current substitution to resolve any vars in type_tdd that are already determined
    type_to_generalize = Type.tdd_substitute(type_tdd, current_sub)

    env_free_vars =
      env
      |> Map.values()
      |> Enum.map(&apply_sub_and_get_free_vars(&1, current_sub))
      |> Enum.reduce(MapSet.new(), &MapSet.union/2)

    type_free_vars_set = Type.free_vars(type_to_generalize)

    vars_to_quantify_names = MapSet.difference(type_free_vars_set, env_free_vars)

    if MapSet.size(vars_to_quantify_names) == 0 do
      # No variables to quantify; return the type as is
      type_to_generalize
    else
      quantified_vars_structs =
        Enum.map(MapSet.to_list(vars_to_quantify_names), fn var_name ->
          # For now, generalized variables carry no attached constraints here;
          # constraints arise from usage and are checked later.
          %Var{name: var_name, constraints: []}
        end)

      %PolyTDD{vars: quantified_vars_structs, body: type_to_generalize}
    end
  end

  defp apply_sub_and_get_free_vars(%TDD{} = tdd, sub) do
    tdd |> Type.tdd_substitute(sub) |> Type.free_vars()
  end

  defp apply_sub_and_get_free_vars(%PolyTDD{} = poly_tdd, sub) do
    # For a PolyTDD in the env, we care about its free variables *after* substitution,
    # excluding its own quantified variables.
    poly_tdd
    # Substitutes free vars in the body
    |> Type.poly_substitute_free_vars(sub)
    |> Type.free_vars_in_poly_tdd_body()
  end

  # --- Unification (Simplified for now) ---

  @doc """
  Constrains the variables in t1 and t2 to be compatible and updates the substitution.
  If t1 is Var(~a) and t2 is type T, then ~a's bound becomes current_bound(~a) & T.
  If t1 and t2 are concrete, checks that their intersection isn't None.
  Returns the new substitution. Raises on error.
  """
  def constrain_and_update_sub(raw_t1, raw_t2, sub) do
    t1 = Type.tdd_substitute(raw_t1, sub)
    t2 = Type.tdd_substitute(raw_t2, sub)

    cond do
      # Identical: no new information
      t1 == t2 ->
        sub

      # Any & T = T, so this only constrains the other side if it is a var
      # already refined from Any
      t1 == Type.tdd_any() ->
        constrain_var_with_type(t2, t1, sub)

      t2 == Type.tdd_any() ->
        constrain_var_with_type(t1, t2, sub)

      # Case 1: t1 is a variable.
      # `cond` clauses take boolean expressions, not patterns, so we test with
      # match?/2 and destructure inside the branch.
      match?(%TDD{decision: {:var, _}}, t1) ->
        %TDD{decision: {:var, v_name1}} = t1
        update_var_bound(v_name1, t2, sub, raw_t1, raw_t2)

      # Case 2: t2 is a variable (and t1 is not)
      match?(%TDD{decision: {:var, _}}, t2) ->
        %TDD{decision: {:var, v_name2}} = t2
        # Note the argument order for the error message
        update_var_bound(v_name2, t1, sub, raw_t2, raw_t1)

      # Case 3: both are function signatures (concrete)
      match?(%TDD{decision: {:is_function_sig, _, _}}, t1) and
          match?(%TDD{decision: {:is_function_sig, _, _}}, t2) ->
        %TDD{decision: {:is_function_sig, params1, ret1}} = t1
        %TDD{decision: {:is_function_sig, params2, ret2}} = t2

        if length(params1) != length(params2) do
          raise "Type error (constrain): Function arity mismatch between #{inspect(t1)} and #{inspect(t2)}"
        end

        # For two function *types* to be compatible/substitutable, parameters are
        # contravariant and the return type is covariant. However, if we are
        # "unifying" them to be *the same type structure*, the parameters are
        # covariant. For now, `constrain_and_update_sub` means "equal or compatible
        # via intersection": the intersection must be non-None, and vars within
        # them get constrained.
        sub_after_params =
          Enum.zip(params1, params2)
          |> Enum.reduce(sub, fn {p1, p2}, acc_sub ->
            # Parameters are "unified" directly
            constrain_and_update_sub(p1, p2, acc_sub)
          end)

        # Return types are "unified" directly
        constrain_and_update_sub(ret1, ret2, sub_after_params)

      # TODO: Add cases for tuples, lists, and TDDMap.
      # For tuples: lengths must match, then constrain_and_update_sub elements
      # pairwise. This requires TDDs to encode tuple elements more directly if we
      # want to unify structurally; the current TDD for a tuple is just
      # {:tuple_len, N} or the general :is_tuple, so richer TDDs are needed.
      # For now, the fallback below handles simple tuple predicates.

      # Case 4: other concrete types
      true ->
        intersection = Type.tdd_and(t1, t2)

        if intersection == Type.tdd_none() do
          raise "Type error (constrain): Types #{inspect(t1)} (from #{inspect(raw_t1)}) and #{inspect(t2)} (from #{inspect(raw_t2)}) are incompatible (intersection is empty). Current sub: #{inspect(sub)}"
        end

        # If they are concrete and compatible, `sub` is unchanged at this level
        sub
    end
  end

  defp constrain_var_with_type(%TDD{decision: {:var, v_name}} = var_tdd, other_type, sub) do
    # The last two arguments provide context for error messages
    update_var_bound(v_name, other_type, sub, var_tdd, other_type)
  end

  # Not a var: no substitution change at this level
  defp constrain_var_with_type(_concrete_type, _other_type, sub), do: sub

  defp update_var_bound(v_name, constraining_type, sub, raw_var_form, raw_constraining_form) do
    # The bound defaults to Any
    current_bound_v = Map.get(sub, v_name, Type.tdd_any())
    new_bound_v = Type.tdd_and(current_bound_v, constraining_type)

    if new_bound_v == Type.tdd_none() do
      original_var_constraint_str =
        if raw_var_form != constraining_type,
          do: "(from unifying with #{inspect(raw_constraining_form)})",
          else: ""

      raise "Type error: Constraining variable #{v_name} with #{inspect(constraining_type)} #{original_var_constraint_str} results in an empty type. Previous bound: #{inspect(current_bound_v)}. Current sub: #{inspect(sub)}"
    end

    Map.put(sub, v_name, new_bound_v)
  end
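
  # Illustrative sketch (hypothetical Type helpers as referenced above, with t1/t2
  # standing in for the raw forms): each call narrows the variable's bound by
  # intersection, raising once the bound becomes empty.
  #
  #     sub = update_var_bound("~a", Type.tdd_pred(:is_integer), %{}, t1, t2)
  #     # sub now maps "~a" to any & integer = integer
  #     update_var_bound("~a", Type.tdd_pred(:is_atom), sub, t1, t2)
  #     # raises: integer & atom is the empty type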

  @doc """
  Handles the application of a function type to actual argument types.
  `fun_type_template` is the (possibly variable) type of the function.
  `actual_arg_types` are the TDDs of the arguments.
  `var_counter` and `sub` are the current state.
  Returns `{final_return_type, new_counter, new_sub, new_constraints}`.
  """
  def unify_apply(fun_type_template, actual_arg_types, var_counter, sub) do
    # Apply the current substitution to fun_type_template
    current_fun_type = Type.tdd_substitute(fun_type_template, sub)

    case current_fun_type do
      %TDD{decision: {:var, _fun_var_name}} ->
        # The function is a type variable: unify it with a freshly minted function
        # signature. (`unify/3` is assumed to be defined elsewhere in this module.)
        {param_var_tdds, c1} =
          Enum.map_reduce(actual_arg_types, var_counter, fn _arg, c_acc ->
            {fresh_name, next_c} = FreshVar.next(c_acc)
            {Type.tdd_var(fresh_name), next_c}
          end)

        {return_var_name, c2} = FreshVar.next(c1)
        return_var_tdd = Type.tdd_var(return_var_name)

        # The new signature that the function variable must conform to
        synthetic_fun_sig_tdd = Type.tdd_function_sig(param_var_tdds, return_var_tdd)

        # Unify the function variable with this synthetic signature
        {s1, cons1} = unify(current_fun_type, synthetic_fun_sig_tdd, sub)

        # Now unify the actual arguments with the fresh parameter type variables
        {s2, cons2_list} =
          Enum.zip(actual_arg_types, param_var_tdds)
          |> Enum.reduce({s1, []}, fn {actual_arg_t, param_var_t}, {s_acc, c_acc_list} ->
            {next_s, next_cs} = unify(actual_arg_t, param_var_t, s_acc)
            {next_s, [next_cs | c_acc_list]}
          end)

        final_return_type = Type.tdd_substitute(return_var_tdd, s2)
        {final_return_type, c2, s2, cons1 ++ List.flatten(cons2_list)}

      %TDD{decision: {:is_function_sig, expected_param_types, expected_return_type}} ->
        # The function is a known signature
        if length(actual_arg_types) != length(expected_param_types) do
          raise "Arity mismatch: expected #{length(expected_param_types)}, got #{length(actual_arg_types)}"
        end

        # Unify the actual arguments with the expected parameter types
        {s1, constraints_from_params_list} =
          Enum.zip(actual_arg_types, expected_param_types)
          |> Enum.reduce({sub, []}, fn {actual_arg_t, expected_param_t}, {s_acc, c_acc_list} ->
            {next_s, param_cs} = unify(actual_arg_t, expected_param_t, s_acc)
            {next_s, [param_cs | c_acc_list]}
          end)

        final_return_type = Type.tdd_substitute(expected_return_type, s1)
        {final_return_type, var_counter, s1, List.flatten(constraints_from_params_list)}

      other_type ->
        raise "Type error: expected a function, but got #{inspect(other_type)}"
    end
  end

  @doc "Top-level type checking function for a Tilly program (list of expressions)."
  def typecheck_program(exprs, initial_env \\ %{}) do
    # For a program we would infer each top-level expression, and `def`s would add
    # to the environment. For now this is a simplified entry point that infers a
    # single expression; a full program checker would iterate, manage top-level
    # defs, modules, etc.
    initial_var_counter = 0
    initial_substitution = %{}

    if is_list(exprs) and Enum.count(exprs) == 1 do
      [main_expr] = exprs

      {raw_type, _counter, final_sub, constraints} =
        infer(main_expr, initial_env, initial_var_counter, initial_substitution)

      final_type = Type.tdd_substitute(raw_type, final_sub)

      # Solve/check the collected `constraints` against `final_sub`, for example:
      Enum.each(constraints, fn {var_name, constraint_obj} ->
        var_final_type = Map.get(final_sub, var_name, Type.tdd_var(var_name))

        unless Type.satisfies_constraint?(var_final_type, constraint_obj) do
          raise "Constraint #{inspect(constraint_obj)} not satisfied for #{var_name} (type #{inspect(var_final_type)})"
        end
      end)

      {:ok, final_type, final_sub}
    else
      # Placeholder for multi-expression program handling
      {:error, "Program must be a single expression for now"}
    end
  end
end
end
504
lib/til/typer.ex
@ -1,504 +0,0 @@
defmodule Til.Typer do
  @moduledoc """
  Handles type checking and type inference for the Tilly Lisp dialect.
  It processes the AST (node maps) generated by the parser and annotates
  nodes with their inferred or checked types.
  """

  # alias Til.AstUtils # Removed as it's not used yet and causes a warning.
  # alias MapSet, as: Set # No longer used directly here; moved to specialized modules.

  alias Til.Typer.Types
  alias Til.Typer.Interner
  alias Til.Typer.ExpressionTyper
  # alias Til.Typer.SubtypeChecker # Not used directly in this module after the refactor.
  alias Til.Typer.Environment

  @doc """
  Performs type checking and inference on a map of AST nodes.

  It iterates through the nodes, infers their types, and updates the
  `:type_id` field in each node map with a reference to its type.

  Returns a new map of nodes with type information.
  """
  def type_check(nodes_map) when is_map(nodes_map) do
    initial_env = %{}
    pre_populated_nodes_map = Interner.populate_known_types(nodes_map)

    # Find the main file node to start the traversal. The parser is assumed to
    # always generate a :file node as the root of the top-level expressions.
    case Enum.find(Map.values(pre_populated_nodes_map), &(&1.ast_node_type == :file)) do
      nil ->
        # Should not happen with the current parser, but handle defensively
        # (alternatively: {:error, :no_file_node_found}).
        # Return the map with at least the known types populated.
        {:ok, pre_populated_nodes_map}

      file_node ->
        # Start recursive typing from the file node; environment modifications
        # propagate through the traversal. The result is
        # {:ok, final_nodes_map, final_env}, and only final_nodes_map is needed here.
        case type_node_recursively(file_node.id, pre_populated_nodes_map, initial_env) do
          {:ok, final_nodes_map, _final_env} ->
            {:ok, final_nodes_map}

          # Propagate other return values (e.g., errors) if they occur, though the
          # current implementation of type_node_recursively always returns {:ok, _, _}.
          other_result ->
            other_result
        end
    end
  end

  # Main recursive function for typing nodes.
  # Handles node lookup and delegates to do_type_node for the actual processing.
  defp type_node_recursively(node_id, nodes_map, env) do
    case Map.get(nodes_map, node_id) do
      nil ->
        # This case should not be reached if node ids are always valid;
        # consider logging an error here. No change if node_id is invalid.
        {:ok, nodes_map, env}

      node_data ->
        # Delegate to the worker function that processes the node
        do_type_node(node_data, nodes_map, env)
    end
  end

  # Worker function to process a single node.
  # Orchestrates typing children, inferring current node's type, and updating environment.
  defp do_type_node(node_data, nodes_map, env) do
    # Determine the environment and children to type based on node type
    {children_to_process_ids, env_for_children, nodes_map_after_pre_processing} =
      if node_data.ast_node_type == :lambda_expression do
        # For lambdas: (fn params_s_expr body...)
        # The 'fn' symbol (child 0) and 'params_s_expr' (child 1) are typed with the outer env.
        # The body_node_ids are typed with the inner lambda_body_env.

        # Type 'fn' symbol (first child of the original S-expression)
        fn_op_child_id = hd(node_data.children)

        {:ok, nmap_after_fn_op, env_after_fn_op} =
          type_node_recursively(fn_op_child_id, nodes_map, env)

        # Type params_s_expr (second child of the original S-expression)
        # This node (node_data) has `params_s_expr_id` from the parser.
        params_s_expr_node_id = node_data.params_s_expr_id

        {:ok, nmap_after_params_s_expr, env_after_params_s_expr} =
          type_node_recursively(params_s_expr_node_id, nmap_after_fn_op, env_after_fn_op)

        # Create lambda body environment using arg_spec_node_ids.
        # The lambda_expression node has `arg_spec_node_ids` and `return_type_spec_node_id`.
        # Argument types need to be resolved and interned here to populate the env.
        # nodes_map is nmap_after_params_s_expr at this point.
        {lambda_body_env, nmap_after_arg_type_resolution} =
          Enum.reduce(
            node_data.arg_spec_node_ids,
            {env_after_params_s_expr, nmap_after_params_s_expr},
            fn arg_spec_id, {acc_env, acc_nodes_map} ->
              arg_spec_node = Map.get(acc_nodes_map, arg_spec_id)

              case arg_spec_node.ast_node_type do
                # Unannotated param, e.g., x
                :symbol ->
                  param_name = arg_spec_node.name
                  param_type_key = Types.primitive_type_key(:any)
                  {Map.put(acc_env, param_name, param_type_key), acc_nodes_map}

                # Annotated param, e.g., (x integer)
                :s_expression ->
                  param_symbol_node_id = hd(arg_spec_node.children)
                  type_spec_node_id = hd(tl(arg_spec_node.children))

                  param_symbol_node = Map.get(acc_nodes_map, param_symbol_node_id)
                  type_spec_node = Map.get(acc_nodes_map, type_spec_node_id)

                  param_name = param_symbol_node.name

                  # Resolve and intern the type specifier
                  {raw_type_def, nmap_after_resolve} =
                    ExpressionTyper.resolve_type_specifier_node(type_spec_node, acc_nodes_map)

                  {param_type_key, nmap_after_intern} =
                    Interner.get_or_intern_type(raw_type_def, nmap_after_resolve)

                  {Map.put(acc_env, param_name, param_type_key), nmap_after_intern}
              end
            end
          )

        # Children to process with this new env are the body_node_ids
        {node_data.body_node_ids, lambda_body_env, nmap_after_arg_type_resolution}
      else
        # Default: type all children with the current environment
        {Map.get(node_data, :children, []), env, nodes_map}
      end

    # 1. Recursively type the identified children with the determined environment.
    {nodes_map_after_children, env_after_children} =
      Enum.reduce(
        children_to_process_ids,
        {nodes_map_after_pre_processing, env_for_children},
        fn child_id, {acc_nodes_map, acc_env} ->
          {:ok, next_nodes_map, next_env} =
            type_node_recursively(child_id, acc_nodes_map, acc_env)

          {next_nodes_map, next_env}
        end
      )

    # Retrieve the current node's data from the potentially updated nodes_map.
    # More importantly, infer_type_for_node_ast needs the nodes_map_after_children
    # to look up typed children.
    current_node_from_map = Map.get(nodes_map_after_children, node_data.id)

    # 2. Infer type for the current node.
    # infer_type_for_node_ast now returns {type_definition_map, possibly_updated_nodes_map}.
    {type_definition_for_current_node, nodes_map_after_inference_logic} =
      infer_type_for_node_ast(
        current_node_from_map,
        nodes_map_after_children,
        env_after_children
      )

    # Intern this type definition to get a key and update nodes_map.
    {type_key_for_current_node, nodes_map_after_interning} =
      Interner.get_or_intern_type(
        type_definition_for_current_node,
        nodes_map_after_inference_logic
      )

    # Update current node with the type key.
    # Ensure we are updating the version of the node from nodes_map_after_interning
    # (which is based on nodes_map_after_children).
    re_fetched_current_node_data = Map.get(nodes_map_after_interning, current_node_from_map.id)

    updated_current_node =
      Map.put(re_fetched_current_node_data, :type_id, type_key_for_current_node)

    nodes_map_with_typed_node =
      Map.put(nodes_map_after_interning, updated_current_node.id, updated_current_node)

    # 3. Update environment based on the current typed node (e.g., for assignments).
    # update_env_from_node now returns {updated_env, updated_nodes_map}.
    {env_after_current_node, nodes_map_after_env_update} =
      Environment.update_env_from_node(
        updated_current_node,
        nodes_map_with_typed_node,
        env_after_children
      )

    {:ok, nodes_map_after_env_update, env_after_current_node}
  end

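  # Sketch (not part of the original source): step 1 above threads the
  # accumulated {nodes_map, env} through each child, so every child sees the
  # state produced by the previous one. `type_child/3` below is a hypothetical
  # stand-in for `type_node_recursively/3`, kept trivial to show the fold shape.
  defmodule TyperFoldSketch do
    # Pretend-typer: marks the child as typed and binds it to :any in the env.
    def type_child(child_id, nodes_map, env) do
      {:ok, Map.put(nodes_map, child_id, :typed), Map.put(env, child_id, :any)}
    end

    # Same reduce pattern as do_type_node step 1.
    def type_children(child_ids, nodes_map, env) do
      Enum.reduce(child_ids, {nodes_map, env}, fn child_id, {acc_map, acc_env} ->
        {:ok, next_map, next_env} = type_child(child_id, acc_map, acc_env)
        {next_map, next_env}
      end)
    end
  end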
  # Infers the type for a node based on its AST type and current environment.
  # `nodes_map` contains potentially typed children (whose :type_id is a key) and canonical type definitions.
  # `env` is the current typing environment (symbol names to type keys).
  # Returns {type_definition_map, possibly_updated_nodes_map}.
  defp infer_type_for_node_ast(node_data, nodes_map, env) do
    case node_data.ast_node_type do
      :literal_integer ->
        {%{type_kind: :literal, value: node_data.value}, nodes_map}

      :literal_string ->
        {%{type_kind: :literal, value: node_data.value}, nodes_map}

      # Atoms are parsed as :literal_atom with a :value field containing the Elixir atom (as per parser.ex)
      :literal_atom ->
        {%{type_kind: :literal, value: node_data.value}, nodes_map}

      :symbol ->
        case node_data.name do
          "nil" ->
            {Types.get_literal_type(:nil_atom), nodes_map}

          "true" ->
            {Types.get_literal_type(:true_atom), nodes_map}

          "false" ->
            {Types.get_literal_type(:false_atom), nodes_map}

          _ ->
            # Look up symbol in the environment. env stores type keys.
            case Map.get(env, node_data.name) do
              nil ->
                # Symbol not found. Default to :any type definition.
                # TODO: Handle unresolved symbols more robustly (e.g., specific error type).
                {Types.get_primitive_type(:any), nodes_map}

              found_type_key ->
                # Resolve the key to its definition from nodes_map.
                case Map.get(nodes_map, found_type_key) do
                  nil ->
                    # This indicates an inconsistency if a key from env isn't in nodes_map.
                    # Default to :any or a specific error type definition.
                    # IO.warn("Type key #{inspect(found_type_key)} for symbol '#{node_data.name}' not found in nodes_map.")
                    {Types.get_primitive_type(:any), nodes_map}

                  type_definition ->
                    {type_definition, nodes_map}
                end
            end
        end

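      # Sketch (not part of the original source): the :symbol branch above is a
      # two-level lookup -- name -> type key in env, then key -> definition in
      # nodes_map -- falling back to :any at either step. The map shapes here
      # are assumptions for illustration only.
      # defmodule SymbolLookupSketch do
      #   (module defined at top level in practice)
      # end
      #
  defmodule SymbolLookupSketch do
    @any %{type_kind: :primitive, name: :any}

    def type_of_symbol(name, env, nodes_map) do
      case Map.get(env, name) do
        # Unknown symbol: default to :any, as the branch above does.
        nil -> @any
        # Dangling key: same fallback.
        key -> Map.get(nodes_map, key, @any)
      end
    end
  end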
      :s_expression ->
        ExpressionTyper.infer_s_expression_type(node_data, nodes_map, env)

      :list_expression ->
        children_ids = Map.get(node_data, :children, [])
        num_children = length(children_ids)

        element_type_definition =
          cond do
            num_children == 0 ->
              Types.get_primitive_type(:nothing)

            true ->
              # Children are already typed. Get their type definitions.
              child_type_defs =
                Enum.map(children_ids, fn child_id ->
                  # nodes_map is nodes_map_after_children
                  child_node = Map.get(nodes_map, child_id)
                  type_key_for_child = child_node.type_id

                  # Resolve the type key to its definition.
                  type_def_for_child = Map.get(nodes_map, type_key_for_child)

                  if is_nil(type_def_for_child) do
                    # Fallback; should ideally not happen if children are correctly typed.
                    Types.get_primitive_type(:any)
                  else
                    type_def_for_child
                  end
                end)

              # Determine a common element type.
              distinct_child_type_defs = Enum.uniq(child_type_defs)

              cond do
                length(distinct_child_type_defs) == 1 ->
                  # All elements effectively have the same type definition (e.g., [1, 1, 1] -> Literal 1).
                  List.first(distinct_child_type_defs)

                true ->
                  # Form a union of the distinct child types.
                  # E.g., [1, 2, 3] -> (Union (Literal 1) (Literal 2) (Literal 3))
                  # E.g., [1, "a"] -> (Union (Literal 1) (Literal "a"))
                  # The types in distinct_child_type_defs are already resolved definitions.
                  # The interner will handle canonicalizing this union type.
                  %{type_kind: :union, types: MapSet.new(distinct_child_type_defs)}
              end
          end

        list_type_def = %{
          type_kind: :list,
          # This is the full def; the interner will use its key.
          element_type: element_type_definition,
          length: num_children
        }

        {list_type_def, nodes_map}

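      # Sketch (not part of the original source): the element-type computation
      # above, isolated from node bookkeeping. One distinct element type
      # collapses to that type, several form a union, an empty list gets
      # :nothing. The literal/primitive map shapes are assumptions.
  defmodule ListTypeSketch do
    def element_type([]), do: %{type_kind: :primitive, name: :nothing}

    def element_type(child_type_defs) do
      case Enum.uniq(child_type_defs) do
        # e.g. [1, 1, 1] -> Literal 1
        [single] -> single
        # e.g. [1, "a"] -> (Union (Literal 1) (Literal "a"))
        several -> %{type_kind: :union, types: MapSet.new(several)}
      end
    end
  end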
      :file ->
        # The :file node itself doesn't have a typical "type".
        {Types.get_special_type(:file_marker), nodes_map}

      :map_expression ->
        children_ids = Map.get(node_data, :children, [])
        # Children are [key1, value1, key2, value2, ...]

        known_elements_raw =
          children_ids
          # [[k1, v1], [k2, v2]]
          |> Enum.chunk_every(2)
          |> Enum.reduce_while(%{}, fn [key_node_id, value_node_id], acc_known_elements ->
            key_node = Map.get(nodes_map, key_node_id)
            value_node = Map.get(nodes_map, value_node_id)

            # The key's type must be a literal type for it to be used in known_elements.
            # Child nodes (keys and values) are already typed at this stage.
            key_type_def =
              if key_node && key_node.type_id do
                Map.get(nodes_map, key_node.type_id)
              else
                # Key node or its type_id is missing
                nil
              end

            cond do
              key_type_def && key_type_def.type_kind == :literal && value_node ->
                literal_key_value = key_type_def.value

                # The value node should have been typed; its type_id points to its definition.
                value_type_def =
                  Map.get(nodes_map, value_node.type_id, Types.get_primitive_type(:any))

                updated_elements =
                  Map.put(
                    acc_known_elements,
                    literal_key_value,
                    %{value_type: value_type_def, optional: false}
                  )

                {:cont, updated_elements}

              true ->
                # If a key's type is not a literal, or key/value nodes are missing,
                # this map literal cannot be precisely typed with known_elements.
                # Halt and return empty known_elements, leading to a less specific type.
                # IO.warn(
                #   "Map literal key is not a literal type or node data missing. Key node: #{inspect(key_node)}, Key type: #{inspect(key_type_def)}"
                # )
                {:halt, %{}}
            end
          end)

        # Default index signature for map literals: any other key maps to any value.
        default_index_signature = %{
          key_type: Types.get_primitive_type(:any),
          value_type: Types.get_primitive_type(:any)
        }

        map_type_def = %{
          type_kind: :map,
          known_elements: known_elements_raw,
          index_signature: default_index_signature
        }

        {map_type_def, nodes_map}

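      # Sketch (not part of the original source): the known_elements pipeline
      # above, reduced to its core. The flattened [k1, v1, k2, v2, ...] list is
      # paired with Enum.chunk_every(2); reduce_while keeps accumulating literal
      # keys and halts to an empty (imprecise) result the moment a key is not a
      # literal. Operating directly on type-definition maps is an assumption
      # made to keep the sketch self-contained.
  defmodule MapLiteralSketch do
    def known_elements(kv_type_defs) do
      kv_type_defs
      |> Enum.chunk_every(2)
      |> Enum.reduce_while(%{}, fn [key_def, value_def], acc ->
        case key_def do
          # Only literal keys are precise enough for known_elements.
          %{type_kind: :literal, value: v} ->
            {:cont, Map.put(acc, v, %{value_type: value_def, optional: false})}

          # Any non-literal key makes the whole map literal imprecise.
          _ ->
            {:halt, %{}}
        end
      end)
    end
  end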
      :tuple_expression ->
        children_ids = Map.get(node_data, :children, [])

        element_type_defs =
          Enum.map(children_ids, fn child_id ->
            # nodes_map is nodes_map_after_children
            child_node = Map.get(nodes_map, child_id)
            # This should be set from prior typing.
            type_key_for_child = child_node.type_id

            # Resolve the type key to its definition.
            type_def_for_child = Map.get(nodes_map, type_key_for_child)

            if is_nil(type_def_for_child) do
              # This case indicates an internal inconsistency:
              # a child node has a type_id, but that ID doesn't resolve to a type definition.
              # This shouldn't happen in a correctly functioning typer.
              # Fall back to :any for robustness, but log or signal an error if possible.
              # IO.warn("Tuple element #{child_id} (in node #{node_data.id}) has type_id #{type_key_for_child} but no definition in nodes_map.")
              Types.get_primitive_type(:any)
            else
              type_def_for_child
            end
          end)

        tuple_type_def = %{type_kind: :tuple, element_types: element_type_defs}
        # nodes_map is unchanged here; interning of this new tuple_type_def happens later.
        {tuple_type_def, nodes_map}

      :lambda_expression ->
        # node_data is the :lambda_expression node.
        # Its body_node_ids have been typed using the lambda_body_env.
        # nodes_map is nodes_map_after_children.

        # Resolve argument types for the function signature
        {raw_arg_type_defs, nodes_map_after_args} =
          Enum.map_reduce(
            node_data.arg_spec_node_ids,
            # This is nodes_map_after_children from do_type_node
            nodes_map,
            fn arg_spec_id, acc_nodes_map ->
              arg_spec_node = Map.get(acc_nodes_map, arg_spec_id)

              case arg_spec_node.ast_node_type do
                # Unannotated param
                :symbol ->
                  {Types.get_primitive_type(:any), acc_nodes_map}

                # Annotated param (param_symbol type_spec)
                :s_expression ->
                  type_spec_node_id = hd(tl(arg_spec_node.children))
                  type_spec_node = Map.get(acc_nodes_map, type_spec_node_id)
                  ExpressionTyper.resolve_type_specifier_node(type_spec_node, acc_nodes_map)
              end
            end
          )

        # Resolve/infer the return type for the function signature
        {return_type_def_for_signature, nodes_map_after_return} =
          if node_data.return_type_spec_node_id do
            # Explicit return type annotation
            return_type_spec_node =
              Map.get(nodes_map_after_args, node_data.return_type_spec_node_id)

            {expected_return_raw_def, nmap_after_ret_resolve} =
              ExpressionTyper.resolve_type_specifier_node(
                return_type_spec_node,
                nodes_map_after_args
              )

            # Intern the expected return type to get its canonical form for checks
            {expected_return_key, nmap_after_ret_intern} =
              Interner.get_or_intern_type(expected_return_raw_def, nmap_after_ret_resolve)

            expected_return_interned_def = Map.get(nmap_after_ret_intern, expected_return_key)

            # Check whether the actual body return type is a subtype of the annotated return type
            _actual_body_return_interned_def =
              if Enum.empty?(node_data.body_node_ids) do
                # Raw, but the interner handles it
                Types.get_literal_type(:nil_atom)
              else
                last_body_expr_node =
                  Map.get(nmap_after_ret_intern, List.last(node_data.body_node_ids))

                # Already interned
                Map.get(nmap_after_ret_intern, last_body_expr_node.type_id)
              end

            # Perform the subtype check if needed (for error reporting, not changing the signature type yet)
            # if !SubtypeChecker.is_subtype?(_actual_body_return_interned_def, expected_return_interned_def, nmap_after_ret_intern) do
            #   IO.warn("Lambda body return type mismatch with annotation.") # Placeholder for error
            # end

            {expected_return_interned_def, nmap_after_ret_intern}
          else
            # Infer the return type from the body
            inferred_return_def =
              if Enum.empty?(node_data.body_node_ids) do
                Types.get_literal_type(:nil_atom)
              else
                last_body_expr_node =
                  Map.get(nodes_map_after_args, List.last(node_data.body_node_ids))

                # Already interned
                Map.get(nodes_map_after_args, last_body_expr_node.type_id)
              end

            {inferred_return_def, nodes_map_after_args}
          end

        function_type_raw_def = %{
          type_kind: :function,
          arg_types: raw_arg_type_defs,
          # This is an interned def or a raw primitive/literal
          return_type: return_type_def_for_signature,
          type_params: []
        }

        {function_type_raw_def, nodes_map_after_return}

      # Default for other AST node types
      _ ->
        # Placeholder: return the :any type definition.
        {Types.get_primitive_type(:any), nodes_map}
    end
  end
end
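# Sketch (not part of the original source): the :lambda_expression branch
# assembles a %{type_kind: :function, ...} map from resolved arg specs and the
# last body expression's type (nil for an empty body). The tagged-tuple arg
# encoding ({:annotated, t} / :unannotated) is a hypothetical simplification of
# the arg_spec node handling above.
defmodule LambdaSigSketch do
  @any %{type_kind: :primitive, name: :any}

  def function_type(arg_specs, body_types) do
    %{
      type_kind: :function,
      # Unannotated params default to :any, as in the reduce above.
      arg_types:
        Enum.map(arg_specs, fn
          {:annotated, t} -> t
          :unannotated -> @any
        end),
      # An empty body implies a nil return, as in the inference branch above.
      return_type: List.last(body_types) || %{type_kind: :literal, value: nil},
      type_params: []
    }
  end
end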
@ -1,59 +0,0 @@
defmodule Til.Typer.Environment do
  @moduledoc """
  Manages updates to the typing environment (symbol name to type key mappings).
  """

  # Updates the environment based on the current typed node.
  # Returns {updated_env, updated_nodes_map}. nodes_map is usually unchanged by this function
  # unless a construct like (deftype ...) is processed that adds to type definitions.
  def update_env_from_node(typed_node, nodes_map, env) do
    case typed_node.ast_node_type do
      :s_expression ->
        new_env = update_env_for_s_expression(typed_node, nodes_map, env)
        # nodes_map is typically unchanged by a simple assignment.
        {new_env, nodes_map}

      _ ->
        # Most nodes don't modify the environment by default.
        {env, nodes_map}
    end
  end

  # Handles environment updates for S-expressions (e.g., assignments).
  # Returns the updated environment.
  defp update_env_for_s_expression(s_expr_node, nodes_map, env) do
    children_ids = Map.get(s_expr_node, :children, [])

    # Basic check for assignment: (= symbol value), three children.
    if length(children_ids) == 3 do
      operator_node_id = List.first(children_ids)
      operator_node = Map.get(nodes_map, operator_node_id)

      if operator_node && operator_node.ast_node_type == :symbol && operator_node.name == "=" do
        symbol_to_assign_id = Enum.at(children_ids, 1)
        value_expr_id = Enum.at(children_ids, 2)

        symbol_to_assign_node = Map.get(nodes_map, symbol_to_assign_id)
        # This node is already typed.
        value_expr_node = Map.get(nodes_map, value_expr_id)

        if symbol_to_assign_node && symbol_to_assign_node.ast_node_type == :symbol &&
             value_expr_node && value_expr_node.type_id do
          # value_expr_node.type_id is the type key of the value.
          type_key_of_value = value_expr_node.type_id
          # Update the environment with the new symbol binding (symbol name -> type key).
          Map.put(env, symbol_to_assign_node.name, type_key_of_value)
        else
          # Malformed assignment or missing type info, no env change.
          env
        end
      else
        # Not an assignment, env unchanged by this s-expression itself.
        env
      end
    else
      # Not a 3-element s-expression, so it cannot be our simple assignment.
      env
    end
  end
end
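# Sketch (not part of the original source): the nested guards in
# update_env_for_s_expression collapse naturally into a single `with` chain --
# (= sym value) binds the symbol's name to the already-typed value's type key,
# and any failed match leaves the env untouched. Node shapes are assumptions.
defmodule AssignEnvSketch do
  def update_env(children_ids, nodes_map, env) do
    with [op_id, sym_id, val_id] <- children_ids,
         %{ast_node_type: :symbol, name: "="} <- Map.get(nodes_map, op_id),
         %{ast_node_type: :symbol, name: name} <- Map.get(nodes_map, sym_id),
         %{type_id: key} when not is_nil(key) <- Map.get(nodes_map, val_id) do
      Map.put(env, name, key)
    else
      # Not a well-formed assignment: env unchanged.
      _ -> env
    end
  end
end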
@ -1,506 +0,0 @@
defmodule Til.Typer.ExpressionTyper do
  @moduledoc """
  Handles type inference for specific AST expression types (e.g., S-expressions, if, the).
  """
  alias Til.Typer.Types
  alias Til.Typer.SubtypeChecker
  alias Til.Typer.Interner
  alias MapSet, as: Set

  # Infers the type of an S-expression.
  # Returns {type_definition_map, possibly_updated_nodes_map}.
  def infer_s_expression_type(s_expr_node, nodes_map, env) do
    children_ids = Map.get(s_expr_node, :children, [])

    if Enum.empty?(children_ids) do
      # The type of the empty s-expression '()' is nil.
      {Types.get_literal_type(:nil_atom), nodes_map}
    else
      operator_node_id = List.first(children_ids)
      operator_node = Map.get(nodes_map, operator_node_id)
      arg_node_ids = Enum.drop(children_ids, 1)

      if is_nil(operator_node) do
        # This should ideally not be reached if the parser ensures children_ids are valid.
        # Fallback or error type.
        {Types.get_primitive_type(:any), nodes_map}
      else
        # Check if the operator is a symbol for a known special form first.
        if operator_node.ast_node_type == :symbol do
          case operator_node.name do
            "=" ->
              # Assignment: (= sym val) expects 2 arguments (symbol and value).
              # The total children_ids for s_expr_node should be 3.
              if length(arg_node_ids) == 2 do
                # value_expr_node is the second argument to '=', i.e. the third child of s_expr_node
                value_expr_node = Map.get(nodes_map, Enum.at(children_ids, 2))

                if value_expr_node && value_expr_node.type_id do
                  value_type_key = value_expr_node.type_id

                  case Map.get(nodes_map, value_type_key) do
                    nil ->
                      {Types.get_error_type_definition(:missing_value_type_in_assignment),
                       nodes_map}

                    value_type_definition ->
                      {value_type_definition, nodes_map}
                  end
                else
                  {Types.get_error_type_definition(:missing_value_type_in_assignment), nodes_map}
                end
              else
                # Malformed assignment (e.g. (= x) or (= x y z))
                # TODO: Specific error type for malformed assignment arity
                {Types.get_primitive_type(:any), nodes_map}
              end

            "if" ->
              infer_if_expression_type(s_expr_node, nodes_map, env)

            "the" ->
              infer_the_expression_type(s_expr_node, nodes_map, env)

            _ ->
              # Not a special form symbol; attempt to treat it as a regular function call.
              type_function_call(operator_node, arg_node_ids, nodes_map)
          end
        else
          # The operator is not a symbol (e.g., a lambda expression itself, or an S-expression like (if ...)).
          # Attempt to treat it as a regular function call.
          type_function_call(operator_node, arg_node_ids, nodes_map)
        end
      end
    end
  end

|
|
||||||
# Helper function to type a function call.
|
|
||||||
# operator_node is the node representing the function/operator.
|
|
||||||
# arg_node_ids is a list of IDs for the argument nodes.
|
|
||||||
# nodes_map contains all typed nodes and type definitions.
|
|
||||||
# Returns {type_definition_for_call, nodes_map}.
|
|
||||||
defp type_function_call(operator_node, arg_node_ids, nodes_map) do
|
|
||||||
operator_type_id = operator_node.type_id
|
|
||||||
# The operator_node should have been typed by the main Typer loop before this.
|
|
||||||
operator_type_def = if operator_type_id, do: Map.get(nodes_map, operator_type_id), else: nil
|
|
||||||
|
|
||||||
cond do
|
|
||||||
is_nil(operator_type_def) ->
|
|
||||||
# This means operator_type_id was nil (operator node not typed) or
|
|
||||||
# operator_type_id was not a valid key in nodes_map.
|
|
||||||
# This indicates an issue prior to attempting the call.
|
|
||||||
# For (1 2 3), '1' is typed as literal 1. Its type_id is valid.
|
|
||||||
# This path is more for unexpected states.
|
|
||||||
# Defaulting to :not_a_function with nil actual_operator_type_id.
|
|
||||||
{Types.get_error_type_definition(:not_a_function, nil), nodes_map}
|
|
||||||
|
|
||||||
operator_type_def.type_kind == :function ->
|
|
||||||
expected_arity = length(operator_type_def.arg_types)
|
|
||||||
actual_arity = length(arg_node_ids)
|
|
||||||
|
|
||||||
if expected_arity != actual_arity do
|
|
||||||
{Types.get_error_type_definition(
|
|
||||||
:arity_mismatch,
|
|
||||||
expected_arity,
|
|
||||||
actual_arity,
|
|
||||||
operator_type_id # key of the function type itself
|
|
||||||
), nodes_map}
|
|
||||||
else
|
|
||||||
# TODO: Implement argument type checking loop here when needed.
|
|
||||||
# For Phase 3, if lambdas take 'any' args, this check might not be strictly necessary
|
|
||||||
# for current tests, but the structure should be prepared.
|
|
||||||
# If an argument type mismatch is found:
|
|
||||||
# return {Types.get_error_type_definition(:argument_type_mismatch, ...), nodes_map}
|
|
||||||
|
|
||||||
# If arity matches and (for now) args are assumed compatible,
|
|
||||||
# the type of the call is the function's return type.
|
|
||||||
return_type_key = operator_type_def.return_type
|
|
||||||
return_type_def = Map.get(nodes_map, return_type_key)
|
|
||||||
|
|
||||||
if return_type_def do
|
|
||||||
{return_type_def, nodes_map}
|
|
||||||
else
|
|
||||||
# Return type key from function definition was invalid. Internal error.
|
|
||||||
# Fallback to :any or a specific error.
|
|
||||||
{Types.get_primitive_type(:any), nodes_map}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
true ->
|
|
||||||
# Operator is typed, but its type_kind is not :function.
|
|
||||||
{Types.get_error_type_definition(
|
|
||||||
:not_a_function,
|
|
||||||
operator_type_id # key of the operator's actual (non-function) type
|
|
||||||
), nodes_map}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
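  # Sketch (not part of the original source): the cond above reduces to three
  # outcomes -- untyped/non-function operators are :not_a_function, wrong arity
  # is :arity_mismatch, and an applicable call has the function's return type.
  # The {:ok, _} / {:error, _} result shape is a simplification of the error
  # type definitions used in the real code.
  defmodule CallTypeSketch do
    # Operator was never typed.
    def call_type(nil, _args), do: {:error, :not_a_function}

    def call_type(%{type_kind: :function, arg_types: params, return_type: ret}, args) do
      if length(params) == length(args) do
        {:ok, ret}
      else
        {:error, {:arity_mismatch, length(params), length(args)}}
      end
    end

    # Typed, but not a function type.
    def call_type(_other, _args), do: {:error, :not_a_function}
  end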
  # Infers the type of a (the <type-specifier> <expression>) S-expression.
  # Returns {type_definition_map, possibly_updated_nodes_map}.
  def infer_the_expression_type(the_s_expr_node, nodes_map, _env) do
    children_ids = Map.get(the_s_expr_node, :children, [])
    # (the type-specifier actual-expression) -> 3 children
    # children_ids are [the_op_id, type_spec_id, expr_id]

    if length(children_ids) == 3 do
      type_spec_node_id = Enum.at(children_ids, 1)
      expr_node_id = Enum.at(children_ids, 2)

      type_spec_node = Map.get(nodes_map, type_spec_node_id)
      expr_node = Map.get(nodes_map, expr_node_id)

      # Resolve the type specifier node (e.g., the symbol 'integer') to a raw type definition.
      # nodes_map is the input to infer_the_expression_type.
      {raw_annotated_def, nodes_map_after_resolve} =
        resolve_type_specifier_node(type_spec_node, nodes_map)

      # Intern the resolved annotated type definition to get its canonical, interned form.
      {annotated_type_key, current_nodes_map} =
        Interner.get_or_intern_type(raw_annotated_def, nodes_map_after_resolve)

      # This is the interned definition, e.g. %{... element_type_id: ..., id: ...}
      annotated_def = Map.get(current_nodes_map, annotated_type_key)

      # The expr_node should have been typed by the recursive call to type_children.
      # Its type_id points to its actual inferred type definition (which is already interned).
      actual_expr_type_def =
        if expr_node && expr_node.type_id do
          # Fetch using the most up-to-date nodes_map
          Map.get(current_nodes_map, expr_node.type_id)
        else
          nil
        end

      if expr_node && expr_node.type_id && actual_expr_type_def do
        # Both actual_expr_type_def and annotated_def are now interned forms.
        is_compatible =
          SubtypeChecker.is_subtype?(
            actual_expr_type_def,
            # Use the interned form
            annotated_def,
            # Use the most up-to-date nodes_map
            current_nodes_map
          )

        if is_compatible do
          # The type of the 'the' expression is the raw annotated type.
          # The caller (Typer.do_type_node) will intern this.
          {raw_annotated_def, current_nodes_map}
        else
          # Type mismatch: the actual type is not a subtype of the annotated type.
          actual_id = actual_expr_type_def.id
          # This is annotated_def.id
          expected_id = annotated_type_key

          {Types.get_error_type_definition(:type_annotation_mismatch, actual_id, expected_id),
           current_nodes_map}
        end
      else
        # This 'else' covers cases where the inner expression could not be typed:
        # 1. expr_node is nil
        # 2. expr_node.type_id is nil
        # 3. actual_expr_type_def (resolved from expr_node.type_id) is nil
        # In these cases, the annotation cannot be validated. Return an error type.
        # IO.warn("Could not determine actual type of expression in 'the' form: #{inspect(expr_node)}")
        # actual_type_id is nil as it couldn't be determined; expected_type_id is from the annotation.
        # annotated_type_key might not be set if raw_annotated_def was an error.
        expected_id =
          if annotated_type_key, do: annotated_type_key, else: nil

        {Types.get_error_type_definition(:type_annotation_mismatch, nil, expected_id),
         current_nodes_map}
      end
    else
      # Malformed 'the' expression (e.g., wrong number of children).
      # IO.warn("Malformed 'the' expression: #{the_s_expr_node.raw_string}")
      # If a 'the' expression is malformed, both the actual and expected types are indeterminate in this context.
      # nodes_map is the original from the caller here.
      {Types.get_error_type_definition(:type_annotation_mismatch, nil, nil), nodes_map}
    end
  end

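  # Sketch (not part of the original source): the decision at the heart of
  # 'the' -- a compatible annotation wins; a mismatch yields an error type
  # carrying both sides. subtype?/2 is a toy stand-in for
  # SubtypeChecker.is_subtype?/3, and the error map shape is an assumption.
  defmodule TheFormSketch do
    # Everything is a subtype of :any.
    defp subtype?(_actual, %{type_kind: :primitive, name: :any}), do: true

    # An integer literal is a subtype of the integer primitive.
    defp subtype?(%{type_kind: :literal, value: v}, %{type_kind: :primitive, name: :integer})
         when is_integer(v),
         do: true

    # A type is a subtype of itself.
    defp subtype?(t, t), do: true
    defp subtype?(_, _), do: false

    def the_type(annotated, actual) do
      if subtype?(actual, annotated) do
        annotated
      else
        %{type_kind: :error, reason: :type_annotation_mismatch, actual: actual, expected: annotated}
      end
    end
  end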
  # Infers the type of an IF S-expression.
  # Returns {type_definition_map, possibly_updated_nodes_map}.
  def infer_if_expression_type(if_s_expr_node, nodes_map, _env) do
    children_ids = Map.get(if_s_expr_node, :children, [])
    num_children_in_sexpr = length(children_ids)

    # Used as a fallback
    canonical_any_type = Map.get(nodes_map, Types.any_type_key())
    canonical_nil_type = Map.get(nodes_map, Types.nil_literal_type_key())
    _literal_true_type = Map.get(nodes_map, Types.literal_type_key(:true_atom))
    _literal_false_type = Map.get(nodes_map, Types.literal_type_key(:false_atom))

    # primitive_boolean_type = Map.get(nodes_map, Types.primitive_type_key(:boolean)) # If we add :boolean

    # Check for a malformed 'if' (fewer than 2 parts: 'if' and the condition)
    if num_children_in_sexpr < 2 do
      # IO.warn("Malformed 'if' expression (too few parts): #{if_s_expr_node.raw_string}")
      # A malformed if results in :any for now.
      {Types.get_primitive_type(:any), nodes_map}
    else
      condition_id = Enum.at(children_ids, 1)
      condition_node = Map.get(nodes_map, condition_id)

      condition_type_def =
        SubtypeChecker.get_type_definition_from_node(
          condition_node,
          nodes_map,
          canonical_any_type
        )

      # The condition type is now evaluated based on truthiness/falsiness,
      # not strict boolean type. The :invalid_if_condition_type error is removed.
      # Proceed with branch typing based on static truthiness/falsiness.
      cond do
        # (if condition then_branch)
        num_children_in_sexpr == 3 ->
          then_branch_id = Enum.at(children_ids, 2)
          then_branch_node = Map.get(nodes_map, then_branch_id)

          then_type_def =
            SubtypeChecker.get_type_definition_from_node(
              then_branch_node,
              nodes_map,
              canonical_any_type
            )

          cond do
            SubtypeChecker.is_statically_truthy?(condition_type_def) ->
              {then_type_def, nodes_map}

            SubtypeChecker.is_statically_falsy?(condition_type_def) ->
              # The implicit else is nil
              {canonical_nil_type, nodes_map}

            true ->
              # The condition is ambiguous; form the union of then and nil
              union_members = Set.new([then_type_def, canonical_nil_type])

              if Set.size(union_members) == 1 do
                {hd(Set.to_list(union_members)), nodes_map}
              else
                {%{type_kind: :union, types: union_members}, nodes_map}
              end
          end

        # (if condition then_branch else_branch)
        num_children_in_sexpr == 4 ->
          then_branch_id = Enum.at(children_ids, 2)
          else_branch_id = Enum.at(children_ids, 3)

          then_branch_node = Map.get(nodes_map, then_branch_id)
          else_branch_node = Map.get(nodes_map, else_branch_id)

          then_type_def =
            SubtypeChecker.get_type_definition_from_node(
              then_branch_node,
              nodes_map,
              canonical_any_type
            )

          else_type_def =
            SubtypeChecker.get_type_definition_from_node(
              else_branch_node,
              nodes_map,
              canonical_any_type
            )

          cond do
            SubtypeChecker.is_statically_truthy?(condition_type_def) ->
              {then_type_def, nodes_map}

            SubtypeChecker.is_statically_falsy?(condition_type_def) ->
              {else_type_def, nodes_map}

            true ->
              # The condition is ambiguous; form the union of then and else
              union_members = Set.new([then_type_def, else_type_def])

              if Set.size(union_members) == 1 do
                {hd(Set.to_list(union_members)), nodes_map}
|
|
||||||
else
|
|
||||||
{%{type_kind: :union, types: union_members}, nodes_map}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
true ->
|
|
||||||
# Malformed 'if' (e.g. (if c t e extra) or already handled (if), (if c))
|
|
||||||
# IO.warn("Malformed 'if' expression (incorrect number of parts): #{if_s_expr_node.raw_string}")
|
|
||||||
# Malformed if results in :any
|
|
||||||
{Types.get_primitive_type(:any), nodes_map}
|
|
||||||
end
|
|
||||||
# Removed: end of 'if not is_valid_condition_type else ...'
|
|
||||||
end
|
|
||||||
end
|
|
||||||
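The branch-selection rule above can be exercised in isolation. The sketch below is a minimal, self-contained model of the truthiness logic, assuming the same falsy values (nil and false); `TruthinessSketch` and its function names are invented for illustration and are not part of the codebase.

```elixir
defmodule TruthinessSketch do
  # A literal type carries its concrete value.
  def literal(value), do: %{type_kind: :literal, value: value}

  # nil and false are the only falsy values; any other literal is truthy.
  def statically_truthy?(%{type_kind: :literal, value: v}), do: not (v in [nil, false])
  def statically_truthy?(_), do: false

  def statically_falsy?(%{type_kind: :literal, value: v}), do: v in [nil, false]
  def statically_falsy?(_), do: false

  # Pick one branch when the condition is statically known;
  # otherwise the result is the union of both branch types.
  def if_type(cond_t, then_t, else_t) do
    cond do
      statically_truthy?(cond_t) -> then_t
      statically_falsy?(cond_t) -> else_t
      true -> %{type_kind: :union, types: MapSet.new([then_t, else_t])}
    end
  end
end
```

For a `:literal` condition the branch is picked statically, while an `%{type_kind: :any}` condition yields a union, mirroring the ambiguous case in the function above.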

  # Resolves a type specifier AST node (e.g., a symbol like 'integer')
  # to its corresponding type definition map.
  # Returns {type_definition_map, nodes_map}; nodes_map may be updated by element type resolution.
  def resolve_type_specifier_node(type_spec_node, nodes_map) do
    # Fallback for unknown type specifiers
    default_type = Types.get_primitive_type(:any)

    cond do
      type_spec_node && type_spec_node.ast_node_type == :symbol ->
        type_name_str = type_spec_node.name

        # Map common type names to their definitions.
        case type_name_str do
          "integer" ->
            {Types.get_primitive_type(:integer), nodes_map}

          "string" ->
            {Types.get_primitive_type(:string), nodes_map}

          "atom" ->
            {Types.get_primitive_type(:atom), nodes_map}

          "number" ->
            {Types.get_primitive_type(:number), nodes_map}

          "any" ->
            {Types.get_primitive_type(:any), nodes_map}

          "nothing" ->
            {Types.get_primitive_type(:nothing), nodes_map}

          # TODO: Add other built-in types like boolean, map, etc.
          _ ->
            # IO.warn("Unknown type specifier symbol: '#{type_name_str}', defaulting to :any.")
            {default_type, nodes_map}
        end

      type_spec_node && type_spec_node.ast_node_type == :s_expression ->
        # Handle S-expression type specifiers like (list ...), (map ...), (union ...)
        s_expr_children_ids = Map.get(type_spec_node, :children, [])

        if length(s_expr_children_ids) >= 1 do
          op_node_id = List.first(s_expr_children_ids)
          op_node = Map.get(nodes_map, op_node_id)

          if op_node && op_node.ast_node_type == :symbol do
            case op_node.name do
              "list" ->
                if length(s_expr_children_ids) == 2 do
                  element_type_spec_id = Enum.at(s_expr_children_ids, 1)
                  element_type_spec_node = Map.get(nodes_map, element_type_spec_id)

                  if element_type_spec_node do
                    # Recursively resolve the element type specifier
                    {resolved_element_type_def, nodes_map_after_element_resolve} =
                      resolve_type_specifier_node(element_type_spec_node, nodes_map)

                    list_type_def = %{
                      type_kind: :list,
                      element_type: resolved_element_type_def,
                      length: nil
                    }

                    {list_type_def, nodes_map_after_element_resolve}
                  else
                    # Malformed (list ...): missing element type specifier
                    {default_type, nodes_map}
                  end
                else
                  # Malformed (list ...): wrong arity
                  {default_type, nodes_map}
                end

              "map" ->
                if length(s_expr_children_ids) == 3 do
                  key_type_spec_id = Enum.at(s_expr_children_ids, 1)
                  value_type_spec_id = Enum.at(s_expr_children_ids, 2)

                  key_type_spec_node = Map.get(nodes_map, key_type_spec_id)
                  value_type_spec_node = Map.get(nodes_map, value_type_spec_id)

                  if key_type_spec_node && value_type_spec_node do
                    {resolved_key_type_def, nodes_map_after_key_resolve} =
                      resolve_type_specifier_node(key_type_spec_node, nodes_map)

                    {resolved_value_type_def, nodes_map_after_value_resolve} =
                      resolve_type_specifier_node(
                        value_type_spec_node,
                        nodes_map_after_key_resolve
                      )

                    map_type_def = %{
                      type_kind: :map,
                      known_elements: %{},
                      index_signature: %{
                        key_type: resolved_key_type_def,
                        value_type: resolved_value_type_def
                      }
                    }

                    {map_type_def, nodes_map_after_value_resolve}
                  else
                    # Malformed (map ...): missing key/value type specifiers
                    {default_type, nodes_map}
                  end
                else
                  # Malformed (map ...): wrong arity
                  {default_type, nodes_map}
                end

              "union" ->
                # (union typeA typeB ...)
                member_type_spec_ids = Enum.drop(s_expr_children_ids, 1)

                cond do
                  member_type_spec_ids == [] ->
                    # (union) -> nothing
                    {Types.get_primitive_type(:nothing), nodes_map}

                  length(member_type_spec_ids) == 1 ->
                    # (union typeA) -> typeA
                    single_member_spec_node_id = List.first(member_type_spec_ids)
                    single_member_spec_node = Map.get(nodes_map, single_member_spec_node_id)
                    resolve_type_specifier_node(single_member_spec_node, nodes_map)

                  true ->
                    # (union typeA typeB ...) -> resolve each member and form the union
                    {resolved_member_defs, final_nodes_map} =
                      Enum.map_reduce(
                        member_type_spec_ids,
                        nodes_map,
                        fn member_id, acc_nodes_map ->
                          member_node = Map.get(acc_nodes_map, member_id)
                          resolve_type_specifier_node(member_node, acc_nodes_map)
                        end
                      )

                    union_type_def = %{
                      type_kind: :union,
                      types: MapSet.new(resolved_member_defs)
                    }

                    {union_type_def, final_nodes_map}
                end

              _ ->
                # Unknown S-expression operator for a type specifier
                # IO.warn("Unknown S-expression type specifier operator: #{op_node.name}")
                {default_type, nodes_map}
            end
          else
            # The first child of the S-expression type specifier is not a symbol
            # IO.warn("S-expression type specifier does not start with a symbol: #{type_spec_node.raw_string}")
            {default_type, nodes_map}
          end
        else
          # Empty S-expression `()` as a type specifier
          # IO.warn("Empty or malformed S-expression type specifier: #{type_spec_node.raw_string}")
          {default_type, nodes_map}
        end

      true ->
        # The type specifier is not a symbol, not a recognized s-expression, or is nil.
        # IO.warn("Invalid type specifier node: #{inspect(type_spec_node)}, defaulting to :any.")
        {default_type, nodes_map}
    end
  end
end
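Throughout `resolve_type_specifier_node`, every recursive call may extend `nodes_map`, so the map is threaded through `Enum.map_reduce/3` rather than mapped over. That state-threading pattern can be sketched on its own; `ThreadingSketch` below is a hypothetical resolver that assigns fresh ids, invented purely to demonstrate the shape of the pattern.

```elixir
defmodule ThreadingSketch do
  # resolve/2 returns {resolved_value, possibly_updated_state},
  # the same contract as resolve_type_specifier_node/2 above.
  def resolve(name, state) do
    case Map.fetch(state, name) do
      {:ok, id} ->
        {id, state}

      :error ->
        # Mint a fresh id and record it, extending the threaded state.
        id = map_size(state)
        {id, Map.put(state, name, id)}
    end
  end

  # map_reduce threads the accumulator through each call, so later
  # members see ids minted by earlier ones.
  def resolve_all(names, state) do
    Enum.map_reduce(names, state, fn name, acc -> resolve(name, acc) end)
  end
end
```

A repeated name resolves to the id minted on its first occurrence, which is exactly why plain `Enum.map/2` would not suffice here.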
@ -1,499 +0,0 @@
defmodule Til.Typer.Interner do
  @moduledoc """
  Handles the interning of type definitions into a nodes_map.
  Ensures that identical type definitions (especially predefined ones)
  map to canonical keys.
  """
  alias Til.Typer.Types

  def populate_known_types(nodes_map) do
    initial_map_with_primitives =
      Enum.reduce(Types.primitive_types(), nodes_map, fn {name, def}, acc_map ->
        key = Types.primitive_type_key(name)
        Map.put(acc_map, key, Map.put(def, :id, key))
      end)

    map_with_literals =
      Enum.reduce(Types.literal_types(), initial_map_with_primitives, fn {name, def}, acc_map ->
        key = Types.literal_type_key(name)
        Map.put(acc_map, key, Map.put(def, :id, key))
      end)

    initial_map_with_specials =
      Enum.reduce(Types.special_types(), map_with_literals, fn {name, def}, acc_map ->
        key = Types.special_type_key(name)
        Map.put(acc_map, key, Map.put(def, :id, key))
      end)

    # Filter out :type_annotation_mismatch, as it is dynamic and no longer in error_type_keys_map.
    # Pre-populate the other, static error types.
    static_error_defs =
      Types.error_type_definitions()
      |> Enum.filter(fn {name, _def} ->
        # Check whether a key exists for this error name (:type_annotation_mismatch won't have one)
        Types.error_type_keys_map()[name] != nil
      end)

    Enum.reduce(static_error_defs, initial_map_with_specials, fn {name, def}, acc_map ->
      # This only works for names still present in error_type_keys_map
      key = Types.error_type_key(name)
      Map.put(acc_map, key, Map.put(def, :id, key))
    end)
  end
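The seeding pattern in `populate_known_types/1` is three folds of the same shape: each `{name, def}` pair is stored under its canonical key, and the stored definition is stamped with that key as its `:id`. A reduced, self-contained sketch of one such fold, with `key_for/1` as a hypothetical stand-in for `Types.primitive_type_key/1` and friends:

```elixir
defmodule SeedSketch do
  # Hypothetical canonical-key builder.
  def key_for(name), do: :"type_primitive_#{name}"

  # Fold {name, def} pairs into the map, stamping each stored
  # definition with its canonical key as :id.
  def seed(defs, nodes_map) do
    Enum.reduce(defs, nodes_map, fn {name, def}, acc ->
      key = key_for(name)
      Map.put(acc, key, Map.put(def, :id, key))
    end)
  end
end
```

Storing `:id` inside the definition is what later lets lookups return `existing_def.id` directly instead of carrying the key alongside the definition.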

  # Gets a canonical key for a type definition, storing it in nodes_map if new.
  def get_or_intern_type(type_definition_map, nodes_map) do
    # Normalize the incoming type_definition_map by removing :id for matching against base definitions
    type_def_for_matching = Map.delete(type_definition_map, :id)

    primitive_match =
      Enum.find(Types.primitive_types(), fn {_name, def} -> def == type_def_for_matching end)

    literal_match =
      Enum.find(Types.literal_types(), fn {_name, def} -> def == type_def_for_matching end)

    special_match =
      Enum.find(Types.special_types(), fn {_name, def} -> def == type_def_for_matching end)

    error_match =
      Enum.find(Types.error_type_definitions(), fn {_name, def} ->
        def == type_def_for_matching
      end)

    cond do
      primitive_match ->
        {name, _def} = primitive_match
        {Types.primitive_type_key(name), nodes_map}

      literal_match ->
        {name, _def} = literal_match
        {Types.literal_type_key(name), nodes_map}

      special_match ->
        {name, _def} = special_match
        {Types.special_type_key(name), nodes_map}

      # Handle the specific dynamic error types first, then the general error_match for predefined ones
      type_definition_map.type_kind == :error and
          type_definition_map.reason == :type_annotation_mismatch ->
        # Dynamic error type: %{type_kind: :error, reason: :type_annotation_mismatch, actual_type_id: key1, expected_type_id: key2}
        # Search for an existing entry based on all relevant fields (excluding :id itself)
        error_def_for_matching = Map.delete(type_definition_map, :id)

        existing_error_match =
          Enum.find(nodes_map, fn {_key, existing_def} ->
            # Ensure it's a type definition (has :type_kind) before checking its properties
            Map.has_key?(existing_def, :type_kind) &&
              existing_def.type_kind == :error &&
              Map.delete(existing_def, :id) == error_def_for_matching
          end)

        cond do
          existing_error_match ->
            {_key, existing_def} = existing_error_match
            {existing_def.id, nodes_map}

          true ->
            # Create a new key for this specific instance of the type_annotation_mismatch error
            actual_id_str = to_string(type_definition_map.actual_type_id || "nil")
            expected_id_str = to_string(type_definition_map.expected_type_id || "nil")

            new_error_key =
              :"type_error_tam_#{actual_id_str}_exp_#{expected_id_str}_#{System.unique_integer([:monotonic, :positive])}"

            final_error_def = Map.put(type_definition_map, :id, new_error_key)
            {new_error_key, Map.put(nodes_map, new_error_key, final_error_def)}
        end

      type_definition_map.type_kind == :error and
          type_definition_map.reason == :invalid_if_condition_type ->
        # Dynamic error type: %{type_kind: :error, reason: :invalid_if_condition_type, actual_condition_type_id: key}
        error_def_for_matching = Map.delete(type_definition_map, :id)

        existing_error_match =
          Enum.find(nodes_map, fn {_key, existing_def} ->
            Map.has_key?(existing_def, :type_kind) &&
              existing_def.type_kind == :error &&
              Map.delete(existing_def, :id) == error_def_for_matching
          end)

        cond do
          existing_error_match ->
            {_key, existing_def} = existing_error_match
            {existing_def.id, nodes_map}

          true ->
            actual_id_str = to_string(type_definition_map.actual_condition_type_id || "nil")

            new_error_key =
              :"type_error_iict_#{actual_id_str}_#{System.unique_integer([:monotonic, :positive])}"

            final_error_def = Map.put(type_definition_map, :id, new_error_key)
            {new_error_key, Map.put(nodes_map, new_error_key, final_error_def)}
        end

      type_definition_map.type_kind == :error and
          type_definition_map.reason == :not_a_function ->
        # Dynamic error type: %{type_kind: :error, reason: :not_a_function, actual_operator_type_id: key}
        error_def_for_matching = Map.delete(type_definition_map, :id)

        existing_error_match =
          Enum.find(nodes_map, fn {_key, existing_def} ->
            Map.has_key?(existing_def, :type_kind) &&
              existing_def.type_kind == :error &&
              Map.delete(existing_def, :id) == error_def_for_matching
          end)

        cond do
          existing_error_match ->
            {_key, existing_def} = existing_error_match
            {existing_def.id, nodes_map}

          true ->
            actual_op_id_str = to_string(type_definition_map.actual_operator_type_id || "nil")

            new_error_key =
              :"type_error_naf_#{actual_op_id_str}_#{System.unique_integer([:monotonic, :positive])}"

            final_error_def = Map.put(type_definition_map, :id, new_error_key)
            {new_error_key, Map.put(nodes_map, new_error_key, final_error_def)}
        end

      type_definition_map.type_kind == :error and
          type_definition_map.reason == :arity_mismatch ->
        # Dynamic error type: %{type_kind: :error, reason: :arity_mismatch, expected_arity: int, actual_arity: int, function_type_id: key}
        error_def_for_matching = Map.delete(type_definition_map, :id)

        existing_error_match =
          Enum.find(nodes_map, fn {_key, existing_def} ->
            Map.has_key?(existing_def, :type_kind) &&
              existing_def.type_kind == :error &&
              Map.delete(existing_def, :id) == error_def_for_matching
          end)

        cond do
          existing_error_match ->
            {_key, existing_def} = existing_error_match
            {existing_def.id, nodes_map}

          true ->
            exp_arity_str = to_string(type_definition_map.expected_arity)
            act_arity_str = to_string(type_definition_map.actual_arity)
            func_id_str = to_string(type_definition_map.function_type_id || "nil")

            new_error_key =
              :"type_error_am_#{exp_arity_str}_#{act_arity_str}_#{func_id_str}_#{System.unique_integer([:monotonic, :positive])}"

            final_error_def = Map.put(type_definition_map, :id, new_error_key)
            {new_error_key, Map.put(nodes_map, new_error_key, final_error_def)}
        end

      type_definition_map.type_kind == :error and
          type_definition_map.reason == :argument_type_mismatch ->
        # Dynamic error type: %{type_kind: :error, reason: :argument_type_mismatch, arg_position: int, expected_arg_type_id: key, actual_arg_type_id: key, function_type_id: key}
        error_def_for_matching = Map.delete(type_definition_map, :id)

        existing_error_match =
          Enum.find(nodes_map, fn {_key, existing_def} ->
            Map.has_key?(existing_def, :type_kind) &&
              existing_def.type_kind == :error &&
              Map.delete(existing_def, :id) == error_def_for_matching
          end)

        cond do
          existing_error_match ->
            {_key, existing_def} = existing_error_match
            {existing_def.id, nodes_map}

          true ->
            pos_str = to_string(type_definition_map.arg_position)
            exp_id_str = to_string(type_definition_map.expected_arg_type_id || "nil")
            act_id_str = to_string(type_definition_map.actual_arg_type_id || "nil")
            func_id_str = to_string(type_definition_map.function_type_id || "nil")

            new_error_key =
              :"type_error_atm_#{pos_str}_#{exp_id_str}_#{act_id_str}_#{func_id_str}_#{System.unique_integer([:monotonic, :positive])}"

            final_error_def = Map.put(type_definition_map, :id, new_error_key)
            {new_error_key, Map.put(nodes_map, new_error_key, final_error_def)}
        end

      # Handles the other, predefined errors
      error_match ->
        {name, _def} = error_match
        {Types.error_type_key(name), nodes_map}

      type_definition_map.type_kind == :list ->
        # type_definition_map is like %{type_kind: :list, element_type: <full_def_or_key>, length: L}
        %{element_type: element_full_def_or_key, length: len} = type_definition_map

        # Recursively get/intern the element type to obtain its key, unless it already is one
        {element_type_key, nodes_map_after_element_intern} =
          if is_map(element_full_def_or_key) do
            # It's a raw definition; intern it
            get_or_intern_type(element_full_def_or_key, nodes_map)
          else
            # It's already a key
            {element_full_def_or_key, nodes_map}
          end

        # The canonical form for searching/storing uses the element type's key
        canonical_list_struct = %{
          type_kind: :list,
          element_type_id: element_type_key,
          length: len
        }

        # Search for an existing identical list type definition
        existing_list_type_match =
          Enum.find(nodes_map_after_element_intern, fn {_key, existing_def} ->
            # Compare the structure, excluding the :id field of the existing_def
            Map.delete(existing_def, :id) == canonical_list_struct
          end)

        cond do
          existing_list_type_match ->
            # Found an identical list type; reuse its key
            {_key, existing_def} = existing_list_type_match
            {existing_def.id, nodes_map_after_element_intern}

          true ->
            # No existing identical list type; create a new one
            new_list_key = :"type_list_#{System.unique_integer([:monotonic, :positive])}"
            final_list_def = Map.put(canonical_list_struct, :id, new_list_key)
            {new_list_key, Map.put(nodes_map_after_element_intern, new_list_key, final_list_def)}
        end

      type_definition_map.type_kind == :union ->
        # type_definition_map is %{type_kind: :union, types: set_of_raw_member_defs}
        %{types: raw_member_defs_set} = type_definition_map

        # Recursively get/intern each member type to obtain its interned definition,
        # threading nodes_map through these calls.
        {interned_member_defs_list, nodes_map_after_members_intern} =
          Enum.map_reduce(
            MapSet.to_list(raw_member_defs_set),
            nodes_map,
            fn raw_member_def_or_key, acc_nodes_map ->
              if is_map(raw_member_def_or_key) do
                # It's a map (a raw or already interned definition);
                # get_or_intern_type handles it and returns its key.
                {member_key, updated_nodes_map} =
                  get_or_intern_type(raw_member_def_or_key, acc_nodes_map)

                # Fetch the interned definition by key for the canonical set.
                interned_member_def = Map.get(updated_nodes_map, member_key)
                {interned_member_def, updated_nodes_map}
              else
                # It's an atom (a key); fetch its definition.
                interned_member_def = Map.get(acc_nodes_map, raw_member_def_or_key)

                if is_nil(interned_member_def) do
                  # This should not happen if keys are always valid.
                  raise "Interner: Union member key #{inspect(raw_member_def_or_key)} not found in nodes_map."
                end

                {interned_member_def, acc_nodes_map}
              end
            end
          )

        interned_member_defs_set = MapSet.new(interned_member_defs_list)

        # The canonical form for searching/storing uses the set of interned member definitions.
        canonical_union_struct = %{
          type_kind: :union,
          types: interned_member_defs_set
        }

        # Search for an existing identical union type definition.
        existing_union_type_match =
          Enum.find(nodes_map_after_members_intern, fn {_key, existing_def} ->
            Map.delete(existing_def, :id) == canonical_union_struct
          end)

        cond do
          existing_union_type_match ->
            {_key, existing_def} = existing_union_type_match
            {existing_def.id, nodes_map_after_members_intern}

          true ->
            new_union_key = :"type_union_#{System.unique_integer([:monotonic, :positive])}"
            final_union_def = Map.put(canonical_union_struct, :id, new_union_key)

            {new_union_key,
             Map.put(nodes_map_after_members_intern, new_union_key, final_union_def)}
        end

      type_definition_map.type_kind == :map ->
        # type_definition_map is %{type_kind: :map, known_elements: KE_raw, index_signature: IS_raw}
        %{known_elements: ke_raw, index_signature: is_raw} = type_definition_map

        # Intern the value types in known_elements
        {ke_interned_values, nodes_map_after_ke_values} =
          Enum.map_reduce(
            ke_raw,
            nodes_map,
            fn {_literal_key, %{value_type: raw_value_def_or_key, optional: opt}},
               acc_nodes_map ->
              {value_type_key, updated_nodes_map} =
                if is_map(raw_value_def_or_key) do
                  get_or_intern_type(raw_value_def_or_key, acc_nodes_map)
                else
                  {raw_value_def_or_key, acc_nodes_map}
                end

              {%{value_type_id: value_type_key, optional: opt}, updated_nodes_map}
            end
          )

        # Reconstruct known_elements with the interned value_type_ids
        ke_interned =
          Enum.zip(Map.keys(ke_raw), ke_interned_values)
          |> Map.new()

        # Intern the key and value types of the index_signature, unless they are already keys
        {is_key_type_id, nodes_map_after_is_key} =
          if is_map(is_raw.key_type) do
            get_or_intern_type(is_raw.key_type, nodes_map_after_ke_values)
          else
            {is_raw.key_type, nodes_map_after_ke_values}
          end

        {is_value_type_id, nodes_map_after_is_value} =
          if is_map(is_raw.value_type) do
            get_or_intern_type(is_raw.value_type, nodes_map_after_is_key)
          else
            {is_raw.value_type, nodes_map_after_is_key}
          end

        is_interned = %{
          key_type_id: is_key_type_id,
          value_type_id: is_value_type_id
        }

        canonical_map_struct = %{
          type_kind: :map,
          known_elements: ke_interned,
          index_signature: is_interned
        }

        # Search for an existing identical map type definition
        existing_map_type_match =
          Enum.find(nodes_map_after_is_value, fn {_key, existing_def} ->
            Map.delete(existing_def, :id) == canonical_map_struct
          end)

        cond do
          existing_map_type_match ->
            {_key, existing_def} = existing_map_type_match
            {existing_def.id, nodes_map_after_is_value}

          true ->
            new_map_key = :"type_map_#{System.unique_integer([:monotonic, :positive])}"
            final_map_def = Map.put(canonical_map_struct, :id, new_map_key)
            {new_map_key, Map.put(nodes_map_after_is_value, new_map_key, final_map_def)}
        end

      type_definition_map.type_kind == :function ->
        # type_definition_map is %{type_kind: :function, arg_types: [RawDef1, RawDef2], return_type: RawReturnDef, type_params: [RawParamDef1]}
        %{
          arg_types: raw_arg_defs_or_keys,
          return_type: raw_return_def_or_key,
          type_params: raw_type_param_defs_or_keys
        } = type_definition_map

        # Intern the argument types
        {interned_arg_type_keys, nodes_map_after_args} =
          Enum.map_reduce(
            raw_arg_defs_or_keys,
            nodes_map,
            fn def_or_key, acc_nodes_map ->
              if is_map(def_or_key) do
                # It's a raw definition; intern it
                get_or_intern_type(def_or_key, acc_nodes_map)
              else
                # It's already a key
                {def_or_key, acc_nodes_map}
              end
            end
          )

        # Intern the return type
        {interned_return_type_key, nodes_map_after_return} =
          if is_map(raw_return_def_or_key) do
            get_or_intern_type(raw_return_def_or_key, nodes_map_after_args)
          else
            {raw_return_def_or_key, nodes_map_after_args}
          end

        # Intern the type parameters (for polymorphism; initially likely empty)
        {interned_type_param_keys, nodes_map_after_params} =
          Enum.map_reduce(
            # Ensure it's a list
            raw_type_param_defs_or_keys || [],
            nodes_map_after_return,
            fn def_or_key, acc_nodes_map ->
              if is_map(def_or_key) do
                get_or_intern_type(def_or_key, acc_nodes_map)
              else
                {def_or_key, acc_nodes_map}
              end
            end
          )

        canonical_function_struct = %{
          type_kind: :function,
          arg_types: interned_arg_type_keys,
          return_type: interned_return_type_key,
          type_params: interned_type_param_keys
        }

        # Search for an existing identical function type definition
        existing_function_type_match =
          Enum.find(nodes_map_after_params, fn {_key, existing_def} ->
            Map.delete(existing_def, :id) == canonical_function_struct
          end)

        cond do
          existing_function_type_match ->
            {_key, existing_def} = existing_function_type_match
            {existing_def.id, nodes_map_after_params}

          true ->
            new_function_key = :"type_function_#{System.unique_integer([:monotonic, :positive])}"
            final_function_def = Map.put(canonical_function_struct, :id, new_function_key)
            {new_function_key, Map.put(nodes_map_after_params, new_function_key, final_function_def)}
        end

      true ->
        # For types that are neither predefined nor list/union/map/function
        # (e.g., literals like 1 or "a", or other complex types),
        # search for an existing identical type definition.
        # type_def_for_matching was computed at the start of the function.
        existing_match =
          Enum.find(nodes_map, fn {_key, existing_def} ->
            # Ensure we only compare against actual type definitions
            Map.has_key?(existing_def, :type_kind) &&
              Map.delete(existing_def, :id) == type_def_for_matching
          end)

        cond do
          existing_match ->
            # Found an identical type; reuse its key and return the original nodes_map.
            {_key, existing_def_found} = existing_match
            {existing_def_found.id, nodes_map}

          true ->
            # No existing identical type; create a new one.
            # Include the type_kind in the key to make it slightly more descriptive.
            kind_prefix = Atom.to_string(type_definition_map.type_kind)

            new_key =
              String.to_atom(
                "type_#{kind_prefix}_#{System.unique_integer([:monotonic, :positive])}"
              )

            type_definition_with_id = Map.put(type_definition_map, :id, new_key)
            {new_key, Map.put(nodes_map, new_key, type_definition_with_id)}
        end
    end
  end
end
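Every branch of `get_or_intern_type/2` follows the same interning discipline: structurally look up the definition with `:id` stripped, and only mint a fresh unique key on a miss. A toy, self-contained version of that core loop (`InternSketch` is invented for illustration and omits the per-kind canonicalization the real module performs):

```elixir
defmodule InternSketch do
  # Look up a definition structurally (ignoring :id); intern it under a
  # fresh unique key only if no identical definition is already stored.
  def get_or_intern(def_map, nodes_map) do
    wanted = Map.delete(def_map, :id)

    case Enum.find(nodes_map, fn {_k, d} ->
           is_map(d) and Map.has_key?(d, :type_kind) and Map.delete(d, :id) == wanted
         end) do
      {key, _existing} ->
        # Hit: reuse the existing canonical key unchanged.
        {key, nodes_map}

      nil ->
        # Miss: mint a descriptive unique key and store the stamped definition.
        key = :"type_#{def_map.type_kind}_#{System.unique_integer([:monotonic, :positive])}"
        {key, Map.put(nodes_map, key, Map.put(def_map, :id, key))}
    end
  end
end
```

Interning the same definition twice therefore yields the same key and leaves the map's size unchanged, which is the hash-consing property the module relies on.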
@ -1,355 +0,0 @@
defmodule Til.Typer.SubtypeChecker do
  @moduledoc """
  Handles subtyping checks and related type utility functions.
  """
  alias Til.Typer.Types
  # alias MapSet, as: Set # MapSet functions are not called with the `Set.` prefix here

  # Gets a type definition from a node, with a fallback default.
  # `default_type_definition` should be the actual map, not a key.
  def get_type_definition_from_node(node, nodes_map, default_type_definition) do
    if node && node.type_id && Map.has_key?(nodes_map, node.type_id) do
      Map.get(nodes_map, node.type_id)
    else
      # Used if node is nil, node.type_id is nil, or type_id is not in nodes_map.
      default_type_definition
    end
  end

  # Determines whether a type definition guarantees a truthy value.
  # In Tilly Lisp, nil and false are falsy; everything else is truthy.
  def is_statically_truthy?(type_definition) do
    case type_definition do
      %{type_kind: :literal, value: val} ->
        not (val == nil or val == false)

      # Future: other types that are guaranteed non-falsy (e.g., a non-nullable string)
      _ ->
        # Cannot be statically determined for :any, :union, etc. by default
        false
    end
  end

  # Determines whether a type definition guarantees a falsy value.
  def is_statically_falsy?(type_definition) do
    case type_definition do
      %{type_kind: :literal, value: val} ->
        val == nil or val == false

      # Future: other types that are guaranteed falsy (e.g., a union of only nil and false)
      _ ->
        # Cannot be statically determined for :any, :union, etc. by default
        false
    end
  end
|
|
||||||
# Checks if subtype_def is a subtype of supertype_def.
|
|
||||||
# nodes_map is needed for resolving further type information if types are complex.
|
|
||||||
def is_subtype?(subtype_def, supertype_def, nodes_map) do
|
|
||||||
# Fetch primitive types from nodes_map to ensure they include their :id field,
|
|
||||||
# consistent with other type definitions retrieved from nodes_map.
|
|
||||||
# Assumes nodes_map is populated with canonical types.
|
|
||||||
any_type = Map.get(nodes_map, Types.primitive_type_key(:any))
|
|
||||||
nothing_type = Map.get(nodes_map, Types.primitive_type_key(:nothing))
|
|
||||||
|
|
||||||
cond do
|
|
||||||
# Ensure any_type and nothing_type were found, otherwise subtyping involving them is problematic.
|
|
||||||
# This check is more of a safeguard; they should always be in a correctly initialized nodes_map.
|
|
||||||
is_nil(any_type) or is_nil(nothing_type) ->
|
|
||||||
# Consider this an internal error or handle as 'not subtype' for safety.
|
|
||||||
# For now, let it proceed; if they are nil, comparisons will likely fail as expected (false).
|
|
||||||
# However, this could mask initialization issues.
|
|
||||||
# A more robust approach might be to raise if they are not found.
|
|
||||||
# For tests and current flow, they are expected to be present.
|
|
||||||
# Placeholder for potential error logging/raising
|
|
||||||
nil
|
|
||||||
|
|
||||||
is_nil(subtype_def) or is_nil(supertype_def) ->
|
|
||||||
# Consider nil to be a subtype of nil, but not of others unless specified.
|
|
||||||
subtype_def == supertype_def
|
|
||||||
|
|
||||||
# Rule 1: Identity (covers identical complex types if canonicalized)
|
|
||||||
subtype_def == supertype_def ->
|
|
||||||
true
|
|
||||||
|
|
||||||
# Rule 2: Anything is a subtype of :any
|
|
||||||
supertype_def == any_type ->
|
|
||||||
true
|
|
||||||
|
|
||||||
# Rule 3: :nothing is a subtype of everything
|
|
||||||
subtype_def == nothing_type ->
|
|
||||||
true
|
|
||||||
|
|
||||||
# Rule 4: Literal to Primitive
|
|
||||||
# e.g., literal 42 is subtype of primitive integer
|
|
||||||
# e.g., literal 42 is subtype of primitive number
|
|
||||||
match?(
|
|
||||||
{%{type_kind: :literal, value: _val}, %{type_kind: :primitive, name: _prim_name}},
|
|
||||||
{subtype_def, supertype_def}
|
|
||||||
) ->
|
|
||||||
# Deconstruct inside the block for clarity if the pattern matches
|
|
||||||
{%{value: val}, %{name: prim_name}} = {subtype_def, supertype_def}
|
|
||||||
|
|
||||||
cond do
|
|
||||||
prim_name == :integer && is_integer(val) -> true
|
|
||||||
prim_name == :string && is_binary(val) -> true
|
|
||||||
prim_name == :atom && is_atom(val) -> true
|
|
||||||
prim_name == :number && (is_integer(val) || is_float(val)) -> true
|
|
||||||
# No specific literal to primitive match
|
|
||||||
true -> false
|
|
||||||
end
|
|
||||||
|
|
||||||
# Rule 5: Primitive to Primitive Subtyping
|
|
||||||
# e.g., integer is subtype of number
|
|
||||||
match?(
|
|
||||||
{%{type_kind: :primitive, name: :integer}, %{type_kind: :primitive, name: :number}},
|
|
||||||
{subtype_def, supertype_def}
|
|
||||||
) ->
|
|
||||||
true
|
|
||||||
|
|
||||||
# Rule 6: Handling Union Types
|
|
||||||
# Case 6.1: Subtype is a Union Type. (A | B) <: C iff A <: C and B <: C
|
|
||||||
is_map(subtype_def) and subtype_def.type_kind == :union ->
|
|
||||||
# subtype_def is %{type_kind: :union, types: sub_types_set}
|
|
||||||
sub_types_set = subtype_def.types
|
|
||||||
|
|
||||||
Enum.all?(sub_types_set, fn sub_member_type ->
|
|
||||||
is_subtype?(sub_member_type, supertype_def, nodes_map)
|
|
||||||
end)
|
|
||||||
|
|
||||||
# Case 6.2: Supertype is a Union Type. A <: (B | C) iff A <: B or A <: C
|
|
||||||
is_map(supertype_def) and supertype_def.type_kind == :union ->
|
|
||||||
# supertype_def is %{type_kind: :union, types: super_types_set}
|
|
||||||
super_types_set = supertype_def.types
|
|
||||||
|
|
||||||
Enum.any?(super_types_set, fn super_member_type ->
|
|
||||||
is_subtype?(subtype_def, super_member_type, nodes_map)
|
|
||||||
end)
|
|
||||||
|
|
||||||
# Rule 7: Tuple Subtyping
|
|
||||||
# A tuple type T_sub = %{element_types: SubElements} is a subtype of
|
|
||||||
# T_super = %{element_types: SuperElements} iff they have the same arity
|
|
||||||
# and each element type in SubElements is a subtype of the corresponding
|
|
||||||
# element type in SuperElements.
|
|
||||||
match?(
|
|
||||||
{%{type_kind: :tuple, element_types: _sub_elements},
|
|
||||||
%{type_kind: :tuple, element_types: _super_elements}},
|
|
||||||
{subtype_def, supertype_def}
|
|
||||||
) ->
|
|
||||||
sub_elements = subtype_def.element_types
|
|
||||||
super_elements = supertype_def.element_types
|
|
||||||
|
|
||||||
if length(sub_elements) == length(super_elements) do
|
|
||||||
# Check subtyping for each pair of corresponding elements.
|
|
||||||
Enum.zip(sub_elements, super_elements)
|
|
||||||
|> Enum.all?(fn {sub_element_type, super_element_type} ->
|
|
||||||
is_subtype?(sub_element_type, super_element_type, nodes_map)
|
|
||||||
end)
|
|
||||||
else
|
|
||||||
# Tuples have different arities, so not a subtype.
|
|
||||||
false
|
|
||||||
end
|
|
||||||
|
|
||||||
# Rule 8: List Subtyping
|
|
||||||
# L1 = (List E1 Len1) is subtype of L2 = (List E2 Len2) iff
|
|
||||||
# E1 is subtype of E2 (covariance) AND
|
|
||||||
# (Len2 is nil (any length) OR Len1 == Len2)
|
|
||||||
match?(
|
|
||||||
{%{type_kind: :list, element_type_id: _sub_elem_id, length: _sub_len},
|
|
||||||
%{type_kind: :list, element_type_id: _super_elem_id, length: _super_len}},
|
|
||||||
{subtype_def, supertype_def}
|
|
||||||
) ->
|
|
||||||
# Deconstruct for clarity
|
|
||||||
%{element_type_id: sub_elem_id, length: sub_len} = subtype_def
|
|
||||||
%{element_type_id: super_elem_id, length: super_len} = supertype_def
|
|
||||||
|
|
||||||
sub_elem_type_def = Map.get(nodes_map, sub_elem_id)
|
|
||||||
super_elem_type_def = Map.get(nodes_map, super_elem_id)
|
|
||||||
|
|
||||||
# Ensure element type definitions were found (keys were valid)
|
|
||||||
if sub_elem_type_def && super_elem_type_def do
|
|
||||||
elements_are_subtypes =
|
|
||||||
is_subtype?(sub_elem_type_def, super_elem_type_def, nodes_map)
|
|
||||||
|
|
||||||
# Supertype list can be any length
|
|
||||||
# Or lengths must be identical and known
|
|
||||||
lengths_compatible =
|
|
||||||
is_nil(super_len) or
|
|
||||||
(!is_nil(sub_len) and sub_len == super_len)
|
|
||||||
|
|
||||||
elements_are_subtypes && lengths_compatible
|
|
||||||
else
|
|
||||||
# If element type keys don't resolve, implies an issue. Treat as not subtype.
|
|
||||||
false
|
|
||||||
end
|
|
||||||
|
|
||||||
# Rule 9: Map Subtyping
|
|
||||||
match?(
|
|
||||||
{%{type_kind: :map, known_elements: _sub_ke, index_signature: _sub_is},
|
|
||||||
%{type_kind: :map, known_elements: _super_ke, index_signature: _super_is}},
|
|
||||||
{subtype_def, supertype_def}
|
|
||||||
) ->
|
|
||||||
sub_ke = subtype_def.known_elements
|
|
||||||
# %{key_type_id: Ksub, value_type_id: Vsub}
|
|
||||||
sub_is = subtype_def.index_signature
|
|
||||||
super_ke = supertype_def.known_elements
|
|
||||||
# %{key_type_id: Ksuper, value_type_id: Vsuper}
|
|
||||||
super_is = supertype_def.index_signature
|
|
||||||
|
|
||||||
# 1. Known Elements (Required in Supertype)
|
|
||||||
all_required_super_keys_compatible =
|
|
||||||
Enum.all?(super_ke, fn {super_key, super_key_details} ->
|
|
||||||
# super_key_details is %{value_type_id: super_val_id, optional: opt_val}
|
|
||||||
if super_key_details.optional == false do
|
|
||||||
case Map.get(sub_ke, super_key) do
|
|
||||||
nil ->
|
|
||||||
# Required key in supertype not found in subtype
|
|
||||||
false
|
|
||||||
|
|
||||||
sub_key_details when sub_key_details.optional == false ->
|
|
||||||
# Subtype's value for this key must be a subtype of supertype's value type
|
|
||||||
sub_val_type_def = Map.get(nodes_map, sub_key_details.value_type_id)
|
|
||||||
super_val_type_def = Map.get(nodes_map, super_key_details.value_type_id)
|
|
||||||
is_subtype?(sub_val_type_def, super_val_type_def, nodes_map)
|
|
||||||
|
|
||||||
_ ->
|
|
||||||
# Key found in subtype but is optional or missing, while supertype requires it non-optional
|
|
||||||
false
|
|
||||||
end
|
|
||||||
else
|
|
||||||
# This specific super_key is optional, handled by the next block
|
|
||||||
true
|
|
||||||
end
|
|
||||||
end)
|
|
||||||
|
|
||||||
# 2. Known Elements (Optional in Supertype)
|
|
||||||
all_optional_super_keys_compatible =
|
|
||||||
Enum.all?(super_ke, fn {super_key, super_key_details} ->
|
|
||||||
if super_key_details.optional == true do
|
|
||||||
case Map.get(sub_ke, super_key) do
|
|
||||||
nil ->
|
|
||||||
# Optional key in supertype not present in subtype is fine
|
|
||||||
true
|
|
||||||
|
|
||||||
# Optionality in subtype doesn't matter here for compatibility
|
|
||||||
sub_key_details ->
|
|
||||||
sub_val_type_def = Map.get(nodes_map, sub_key_details.value_type_id)
|
|
||||||
super_val_type_def = Map.get(nodes_map, super_key_details.value_type_id)
|
|
||||||
is_subtype?(sub_val_type_def, super_val_type_def, nodes_map)
|
|
||||||
end
|
|
||||||
else
|
|
||||||
# This specific super_key is required, handled by the previous block
|
|
||||||
true
|
|
||||||
end
|
|
||||||
end)
|
|
||||||
|
|
||||||
# 3. Index Signature Compatibility
|
|
||||||
# Ksuper <: Ksub (contravariance for key types)
|
|
||||||
# Vsub <: Vsuper (covariance for value types)
|
|
||||||
super_is_key_type_def = Map.get(nodes_map, super_is.key_type_id)
|
|
||||||
sub_is_key_type_def = Map.get(nodes_map, sub_is.key_type_id)
|
|
||||||
|
|
||||||
index_sig_keys_compatible =
|
|
||||||
is_subtype?(super_is_key_type_def, sub_is_key_type_def, nodes_map)
|
|
||||||
|
|
||||||
sub_is_value_type_def = Map.get(nodes_map, sub_is.value_type_id)
|
|
||||||
super_is_value_type_def = Map.get(nodes_map, super_is.value_type_id)
|
|
||||||
|
|
||||||
index_sig_values_compatible =
|
|
||||||
is_subtype?(sub_is_value_type_def, super_is_value_type_def, nodes_map)
|
|
||||||
|
|
||||||
index_signatures_compatible = index_sig_keys_compatible && index_sig_values_compatible
|
|
||||||
|
|
||||||
# 4. Width Subtyping: Keys in sub.known_elements not in super.known_elements
|
|
||||||
# must conform to super.index_signature.
|
|
||||||
extra_sub_keys_compatible =
|
|
||||||
Enum.all?(sub_ke, fn {sub_k_literal, %{value_type_id: sub_k_val_id}} ->
|
|
||||||
if Map.has_key?(super_ke, sub_k_literal) do
|
|
||||||
# Already checked by rules 1 and 2
|
|
||||||
true
|
|
||||||
else
|
|
||||||
# Key sub_k_literal is in sub_ke but not super_ke.
|
|
||||||
# Its type must conform to super_is.key_type_id
|
|
||||||
# Its value's type must conform to super_is.value_type_id
|
|
||||||
|
|
||||||
# Create a literal type for sub_k_literal to check against super_is.key_type_id
|
|
||||||
# This requires interning the literal type of sub_k_literal on the fly.
|
|
||||||
# For simplicity here, we assume sub_k_literal's type can be directly checked.
|
|
||||||
# A full implementation would intern {type_kind: :literal, value: sub_k_literal}
|
|
||||||
# and then use that definition.
|
|
||||||
# Let's assume for now that if sub_k_literal is e.g. :foo, its type is literal :foo.
|
|
||||||
|
|
||||||
# This part is tricky without a helper to get/intern literal type on the fly.
|
|
||||||
# For now, we'll simplify: if super_is.key_type_id is :any, it's fine.
|
|
||||||
# A proper check needs to construct the literal type for sub_k_literal.
|
|
||||||
# For now, let's assume this check is more complex and might need refinement.
|
|
||||||
# A placeholder for the check:
|
|
||||||
# literal_sub_k_type_def = create_and_intern_literal_type(sub_k_literal, nodes_map)
|
|
||||||
# key_conforms = is_subtype?(literal_sub_k_type_def, super_is_key_type_def, nodes_map)
|
|
||||||
|
|
||||||
# Simplified: if super_is key type is :any, it allows any extra keys.
|
|
||||||
# This is not fully correct but a step.
|
|
||||||
# key_conforms = super_is.key_type_id == Types.any_type_key() # Simplified check - REMOVED as unused
|
|
||||||
|
|
||||||
# Value type check
|
|
||||||
sub_k_val_type_def = Map.get(nodes_map, sub_k_val_id)
|
|
||||||
|
|
||||||
value_conforms =
|
|
||||||
is_subtype?(sub_k_val_type_def, super_is_value_type_def, nodes_map)
|
|
||||||
|
|
||||||
# A more accurate key_conforms:
|
|
||||||
# 1. Create raw literal type for sub_k_literal
|
|
||||||
# raw_lit_type = %{type_kind: :literal, value: sub_k_literal} # REMOVED as unused
|
|
||||||
# 2. Intern it (this might modify nodes_map, but is_subtype? shouldn't modify nodes_map)
|
|
||||||
# This is problematic. is_subtype? should be pure.
|
|
||||||
# The types passed to is_subtype? should already be fully resolved/interned.
|
|
||||||
# This implies that the literal keys themselves should perhaps be thought of as types.
|
|
||||||
# For now, we'll stick to the simplified check or assume this needs a helper
|
|
||||||
# that doesn't modify nodes_map or that nodes_map is pre-populated with all possible literal types.
|
|
||||||
|
|
||||||
# Let's refine the key_conforms check slightly.
|
|
||||||
# We need to check if the type of the literal key `sub_k_literal` is a subtype of `super_is_key_type_def`.
|
|
||||||
# We can construct the raw literal type and check it.
|
|
||||||
# This requires `is_subtype?` to handle raw type definitions for its first arg if not careful.
|
|
||||||
# However, `is_subtype?` expects canonical defs.
|
|
||||||
# This is a known challenge. For now, we'll assume a helper or a specific way to handle this.
|
|
||||||
# Let's assume `super_is_key_type_def` is general enough, e.g. :atom if sub_k_literal is an atom.
|
|
||||||
# This part needs the most careful thought for full correctness.
|
|
||||||
# A pragmatic approach: if super_is.key_type is :any, it's true.
|
|
||||||
# If super_is.key_type is :atom and sub_k_literal is an atom, true. etc.
|
|
||||||
key_type_of_sub_k_literal =
|
|
||||||
cond do
|
|
||||||
is_atom(sub_k_literal) ->
|
|
||||||
Map.get(nodes_map, Types.primitive_type_key(:atom))
|
|
||||||
|
|
||||||
is_binary(sub_k_literal) ->
|
|
||||||
Map.get(nodes_map, Types.primitive_type_key(:string))
|
|
||||||
|
|
||||||
is_integer(sub_k_literal) ->
|
|
||||||
Map.get(nodes_map, Types.primitive_type_key(:integer))
|
|
||||||
|
|
||||||
# Add other literal types if necessary
|
|
||||||
# Fallback
|
|
||||||
true ->
|
|
||||||
Map.get(nodes_map, Types.any_type_key())
|
|
||||||
end
|
|
||||||
|
|
||||||
key_conforms_refined =
|
|
||||||
is_subtype?(key_type_of_sub_k_literal, super_is_key_type_def, nodes_map)
|
|
||||||
|
|
||||||
key_conforms_refined && value_conforms
|
|
||||||
end
|
|
||||||
end)
|
|
||||||
|
|
||||||
all_required_super_keys_compatible &&
|
|
||||||
all_optional_super_keys_compatible &&
|
|
||||||
index_signatures_compatible &&
|
|
||||||
extra_sub_keys_compatible
|
|
||||||
|
|
||||||
# TODO: Add more subtyping rules (e.g., for intersection)
|
|
||||||
true ->
|
|
||||||
# Default: not a subtype
|
|
||||||
false
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
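A trimmed-down sketch of rules 1-6 above (identity, `:any` as top, `:nothing` as bottom, literal-to-primitive, and unions), with type definitions passed directly instead of resolved through `nodes_map`. `SubtypeSketch` is a hypothetical illustration, not the module's real interface:

```elixir
defmodule SubtypeSketch do
  # Rule 1: identity — a repeated variable in the head matches equal terms.
  def subtype?(t, t), do: true
  # Rule 2: everything is a subtype of :any.
  def subtype?(_, %{type_kind: :primitive, name: :any}), do: true
  # Rule 3: :nothing is a subtype of everything.
  def subtype?(%{type_kind: :primitive, name: :nothing}, _), do: true

  # Rule 4 (one case): an integer literal is a subtype of primitive integer.
  def subtype?(%{type_kind: :literal, value: v}, %{type_kind: :primitive, name: :integer})
      when is_integer(v),
      do: true

  # Rule 6.1: (A | B) <: C iff every member is a subtype of C.
  def subtype?(%{type_kind: :union, types: ts}, super_t),
    do: Enum.all?(ts, &subtype?(&1, super_t))

  # Rule 6.2: A <: (B | C) iff A is a subtype of some member.
  def subtype?(sub_t, %{type_kind: :union, types: ts}),
    do: Enum.any?(ts, &subtype?(sub_t, &1))

  def subtype?(_, _), do: false
end

int = %{type_kind: :primitive, name: :integer}
lit = %{type_kind: :literal, value: 42}
SubtypeSketch.subtype?(lit, int)
# => true
SubtypeSketch.subtype?(%{type_kind: :union, types: [lit]}, int)
# => true
```

Clause order matters: the union cases must come after the literal case so that a union subtype is decomposed member-by-member rather than falling through to the catch-all.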
@@ -1,192 +0,0 @@
defmodule Til.Typer.Types do
  @moduledoc """
  Defines and provides access to predefined type structures and their canonical keys
  for the Tilly type system.
  """

  # --- Predefined Type Keys ---
  @integer_type_key :til_type_integer
  @string_type_key :til_type_string
  # Added for the primitive atom type
  @atom_type_key :til_type_atom
  @number_type_key :til_type_number
  @any_type_key :til_type_any
  @nothing_type_key :til_type_nothing
  @file_marker_type_key :til_type_file_marker
  @nil_literal_type_key :til_type_literal_nil_atom
  @true_literal_type_key :til_type_literal_true_atom
  @false_literal_type_key :til_type_literal_false_atom
  @error_missing_value_type_in_assignment_key :til_type_error_missing_value_type_in_assignment
  @error_not_a_function_key :til_type_error_not_a_function
  @error_arity_mismatch_key :til_type_error_arity_mismatch
  @error_argument_type_mismatch_key :til_type_error_argument_type_mismatch
  # @error_invalid_if_condition_type_key :til_type_error_invalid_if_condition_type # Removed as unused
  # @type_annotation_mismatch_error_key is removed as this error is now dynamic

  @primitive_types %{
    integer: %{type_kind: :primitive, name: :integer},
    string: %{type_kind: :primitive, name: :string},
    # Added primitive atom
    atom: %{type_kind: :primitive, name: :atom},
    number: %{type_kind: :primitive, name: :number},
    any: %{type_kind: :primitive, name: :any},
    nothing: %{type_kind: :primitive, name: :nothing}
  }

  @primitive_type_keys %{
    integer: @integer_type_key,
    string: @string_type_key,
    # Added key for primitive atom
    atom: @atom_type_key,
    number: @number_type_key,
    any: @any_type_key,
    nothing: @nothing_type_key
  }

  @literal_types %{
    nil_atom: %{type_kind: :literal, value: nil},
    true_atom: %{type_kind: :literal, value: true},
    false_atom: %{type_kind: :literal, value: false}
  }

  @literal_type_keys %{
    nil_atom: @nil_literal_type_key,
    true_atom: @true_literal_type_key,
    false_atom: @false_literal_type_key
  }

  @special_types %{
    file_marker: %{type_kind: :special, name: :file_marker}
  }

  @special_type_keys %{
    file_marker: @file_marker_type_key
  }

  @error_type_definitions %{
    missing_value_type_in_assignment: %{
      type_kind: :error,
      reason: :missing_value_type_in_assignment
    },
    not_a_function: %{
      type_kind: :error,
      reason: :not_a_function
      # actual_operator_type_id is dynamic
    },
    arity_mismatch: %{
      type_kind: :error,
      reason: :arity_mismatch
      # expected_arity, actual_arity, function_type_id are dynamic
    },
    argument_type_mismatch: %{
      type_kind: :error,
      reason: :argument_type_mismatch
      # arg_position, expected_arg_type_id, actual_arg_type_id, function_type_id are dynamic
    }
    # :invalid_if_condition_type is dynamically generated with actual_condition_type_id.
    # There is no static definition here; the dynamic error interner uses the reason atom.
  }

  @error_type_keys %{
    missing_value_type_in_assignment: @error_missing_value_type_in_assignment_key,
    not_a_function: @error_not_a_function_key,
    arity_mismatch: @error_arity_mismatch_key,
    argument_type_mismatch: @error_argument_type_mismatch_key
    # The :type_annotation_mismatch key is removed.
    # :invalid_if_condition_type has no static key for a full definition,
    # but the reason :invalid_if_condition_type is used.
  }

  def primitive_types, do: @primitive_types
  def primitive_type_key(name), do: @primitive_type_keys[name]
  def primitive_type_keys_map, do: @primitive_type_keys

  def literal_types, do: @literal_types
  def literal_type_key(name), do: @literal_type_keys[name]
  def literal_type_keys_map, do: @literal_type_keys

  def special_types, do: @special_types
  def special_type_key(name), do: @special_type_keys[name]
  def special_type_keys_map, do: @special_type_keys

  def error_type_definitions, do: @error_type_definitions
  def error_type_key(name), do: @error_type_keys[name]
  def error_type_keys_map, do: @error_type_keys

  # Accessors for specific type definitions
  def get_primitive_type(:any), do: @primitive_types[:any]
  def get_primitive_type(:integer), do: @primitive_types[:integer]
  def get_primitive_type(:string), do: @primitive_types[:string]
  # Accessor for primitive atom
  def get_primitive_type(:atom), do: @primitive_types[:atom]
  def get_primitive_type(:number), do: @primitive_types[:number]
  def get_primitive_type(:nothing), do: @primitive_types[:nothing]

  def get_literal_type(:nil_atom), do: @literal_types[:nil_atom]
  def get_literal_type(:true_atom), do: @literal_types[:true_atom]
  def get_literal_type(:false_atom), do: @literal_types[:false_atom]

  def get_special_type(:file_marker), do: @special_types[:file_marker]

  def get_error_type_definition(:missing_value_type_in_assignment),
    do: @error_type_definitions[:missing_value_type_in_assignment]

  # For type_annotation_mismatch, we now expect actual_type_id and expected_type_id.
  def get_error_type_definition(:type_annotation_mismatch, actual_type_id, expected_type_id) do
    %{
      type_kind: :error,
      reason: :type_annotation_mismatch,
      actual_type_id: actual_type_id,
      expected_type_id: expected_type_id
    }
  end

  def get_error_type_definition(:invalid_if_condition_type, actual_condition_type_id) do
    %{
      type_kind: :error,
      reason: :invalid_if_condition_type,
      actual_condition_type_id: actual_condition_type_id
    }
  end

  def get_error_type_definition(:not_a_function, actual_operator_type_id) do
    %{
      type_kind: :error,
      reason: :not_a_function,
      actual_operator_type_id: actual_operator_type_id
    }
  end

  def get_error_type_definition(
        :arity_mismatch,
        expected_arity,
        actual_arity,
        function_type_id
      ) do
    %{
      type_kind: :error,
      reason: :arity_mismatch,
      expected_arity: expected_arity,
      actual_arity: actual_arity,
      function_type_id: function_type_id
    }
  end

  def get_error_type_definition(
        :argument_type_mismatch,
        arg_position,
        expected_arg_type_id,
        actual_arg_type_id,
        function_type_id
      ) do
    %{
      type_kind: :error,
      reason: :argument_type_mismatch,
      arg_position: arg_position,
      expected_arg_type_id: expected_arg_type_id,
      actual_arg_type_id: actual_arg_type_id,
      function_type_id: function_type_id
    }
  end

  # Accessors for specific type keys
  def any_type_key, do: @any_type_key
  def nil_literal_type_key, do: @nil_literal_type_key
end
146 lib/tilly/bdd.ex
@@ -1,146 +0,0 @@
defmodule Tilly.BDD do
  @moduledoc """
  Manages the BDD store, including hash-consing of BDD nodes.
  The BDD store is expected to be part of a `typing_ctx` map under the key `:bdd_store`.
  """

  alias Tilly.BDD.Node

  @false_node_id 0
  @true_node_id 1
  @initial_next_node_id 2
  @universal_ops_module :universal_ops

  @doc """
  Initializes the BDD store within the typing context.
  Pre-interns canonical `false` and `true` BDD nodes.
  """
  def init_bdd_store(typing_ctx) when is_map(typing_ctx) do
    false_structure = Node.mk_false()
    true_structure = Node.mk_true()

    bdd_store = %{
      nodes_by_structure: %{
        {false_structure, @universal_ops_module} => @false_node_id,
        {true_structure, @universal_ops_module} => @true_node_id
      },
      structures_by_id: %{
        @false_node_id => %{structure: false_structure, ops_module: @universal_ops_module},
        @true_node_id => %{structure: true_structure, ops_module: @universal_ops_module}
      },
      next_node_id: @initial_next_node_id,
      # Cache for BDD operations: {op_key, id1, id2} -> result_id
      ops_cache: %{}
    }

    Map.put(typing_ctx, :bdd_store, bdd_store)
  end

  @doc """
  Gets an existing BDD node ID or interns a new one if it's not already in the store.

  Returns a tuple `{new_typing_ctx, node_id}`.
  The `typing_ctx` is updated if a new node is interned.
  """
  def get_or_intern_node(typing_ctx, logical_structure, ops_module_atom) do
    bdd_store = Map.get(typing_ctx, :bdd_store)

    unless bdd_store do
      raise ArgumentError, "BDD store not initialized in typing_ctx. Call init_bdd_store first."
    end

    key = {logical_structure, ops_module_atom}

    case Map.get(bdd_store.nodes_by_structure, key) do
      nil ->
        # Node not found, intern it
        node_id = bdd_store.next_node_id

        new_nodes_by_structure = Map.put(bdd_store.nodes_by_structure, key, node_id)

        node_data = %{structure: logical_structure, ops_module: ops_module_atom}
        new_structures_by_id = Map.put(bdd_store.structures_by_id, node_id, node_data)

        new_next_node_id = node_id + 1

        new_bdd_store =
          %{
            bdd_store
            | nodes_by_structure: new_nodes_by_structure,
              structures_by_id: new_structures_by_id,
              next_node_id: new_next_node_id
          }

        new_typing_ctx = Map.put(typing_ctx, :bdd_store, new_bdd_store)
        {new_typing_ctx, node_id}

      existing_node_id ->
        # Node found
        {typing_ctx, existing_node_id}
    end
  end

  @doc """
  Retrieves the node's structure and ops_module from the BDD store.
  Returns `%{structure: logical_structure_tuple, ops_module: ops_module_atom}` or `nil` if not found.
  """
  def get_node_data(typing_ctx, node_id) do
    with %{bdd_store: %{structures_by_id: structures_by_id}} <- typing_ctx,
         data when not is_nil(data) <- Map.get(structures_by_id, node_id) do
      data
    else
      _ -> nil
    end
  end

  @doc """
  Checks if the given node ID corresponds to the canonical `false` BDD node.
  """
  def is_false_node?(typing_ctx, node_id) do
    # Optimized check for the predefined ID
    if node_id == @false_node_id do
      true
    else
      # Fallback for a node that is structurally false but lacks the canonical ID.
      # This should not happen when Node.mk_false() is interned via get_or_intern_node.
      case get_node_data(typing_ctx, node_id) do
        %{structure: structure, ops_module: @universal_ops_module} ->
          structure == Node.mk_false()

        _ ->
          false
      end
    end
  end

  @doc """
  Checks if the given node ID corresponds to the canonical `true` BDD node.
  """
  def is_true_node?(typing_ctx, node_id) do
    # Optimized check for the predefined ID
    if node_id == @true_node_id do
      true
    else
      # Fallback for a node that is structurally true but lacks the canonical ID.
      case get_node_data(typing_ctx, node_id) do
        %{structure: structure, ops_module: @universal_ops_module} ->
          structure == Node.mk_true()

        _ ->
          false
      end
    end
  end

  @doc """
  Returns the canonical ID for the `false` BDD node.
  """
  def false_node_id(), do: @false_node_id

  @doc """
  Returns the canonical ID for the `true` BDD node.
  """
  def true_node_id(), do: @true_node_id

  @doc """
  Returns the atom used as the `ops_module` for universal nodes like `true` and `false`.
  """
  def universal_ops_module(), do: @universal_ops_module
end
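The hash-consing idea behind the store — two mirrored maps plus a counter, so structurally identical nodes always share one ID — can be sketched standalone. `StoreSketch` is a hypothetical illustration that elides the `ops_module` dimension of the real key:

```elixir
defmodule StoreSketch do
  # Two mirrored maps: structure -> id and id -> structure, plus a counter.
  # IDs 0 and 1 are imagined reserved for the terminal nodes, so counting
  # starts at 2, mirroring @initial_next_node_id above.
  def new, do: %{by_structure: %{}, by_id: %{}, next: 2}

  def intern(store, structure) do
    case store.by_structure[structure] do
      nil ->
        # Unseen structure: record it in both maps under a fresh ID.
        id = store.next

        new_store = %{
          store
          | by_structure: Map.put(store.by_structure, structure, id),
            by_id: Map.put(store.by_id, id, structure),
            next: id + 1
        }

        {new_store, id}

      id ->
        # Already interned: return the same ID, store unchanged.
        {store, id}
    end
  end
end

{s, a} = StoreSketch.intern(StoreSketch.new(), {:node, :x, 1, 0})
{_s, b} = StoreSketch.intern(s, {:node, :x, 1, 0})
a == b
# => true, and a == 2 (the first ID after the two reserved terminals)
```

Because interning is canonical, equality of BDDs reduces to integer comparison of their IDs, which is what makes the `ops_cache` keyed on `{op_key, id1, id2}` viable.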
@@ -1,89 +0,0 @@
defmodule Tilly.BDD.AtomBoolOps do
|
|
||||||
@moduledoc """
|
|
||||||
BDD operations module for sets of atoms.
|
|
||||||
Elements are atoms, and leaf values are booleans.
|
|
||||||
"""
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Compares two atoms.
|
|
||||||
Returns `:lt`, `:eq`, or `:gt`.
|
|
||||||
"""
|
|
||||||
def compare_elements(elem1, elem2) when is_atom(elem1) and is_atom(elem2) do
|
|
||||||
cond do
|
|
||||||
elem1 < elem2 -> :lt
|
|
||||||
elem1 > elem2 -> :gt
|
|
||||||
true -> :eq
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Checks if two atoms are equal.
|
|
||||||
"""
|
|
||||||
def equal_element?(elem1, elem2) when is_atom(elem1) and is_atom(elem2) do
|
|
||||||
elem1 == elem2
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Hashes an atom.
|
|
||||||
"""
|
|
||||||
def hash_element(elem) when is_atom(elem) do
|
|
||||||
# erlang.phash2 is suitable for term hashing
|
|
||||||
:erlang.phash2(elem)
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
The leaf value representing an empty set of atoms (false).
|
|
||||||
"""
|
|
||||||
def empty_leaf(), do: false
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
The leaf value representing the universal set of atoms (true).
|
|
||||||
This is used if a BDD simplifies to a state where all atoms of this kind are included.
|
|
||||||
"""
|
|
||||||
def any_leaf(), do: true
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Checks if a leaf value represents an empty set.
|
|
||||||
"""
|
|
||||||
def is_empty_leaf?(leaf_val) when is_boolean(leaf_val) do
|
|
||||||
leaf_val == false
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Computes the union of two leaf values.
|
|
||||||
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
|
|
||||||
"""
|
|
||||||
def union_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
|
|
||||||
leaf1 or leaf2
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Computes the intersection of two leaf values.
|
|
||||||
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
|
|
||||||
"""
|
|
||||||
def intersection_leaves(_typing_ctx, leaf1, leaf2)
|
|
||||||
when is_boolean(leaf1) and is_boolean(leaf2) do
|
|
||||||
leaf1 and leaf2
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Computes the negation of a leaf value.
|
|
||||||
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
|
|
||||||
"""
|
|
||||||
def negation_leaf(_typing_ctx, leaf) when is_boolean(leaf) do
|
|
||||||
not leaf
|
|
||||||
end
|
|
||||||
|
|
||||||
# def difference_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
|
|
||||||
# leaf1 and (not leaf2)
|
|
||||||
# end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Tests a leaf value to determine if it represents an empty, full, or other set.
|
|
||||||
Returns `:empty`, `:full`, or `:other`.
|
|
||||||
"""
|
|
||||||
def test_leaf_value(true), do: :full
|
|
||||||
def test_leaf_value(false), do: :empty
|
|
||||||
# Add a clause for other types if atoms could have non-boolean leaf values
|
|
||||||
# def test_leaf_value(_other), do: :other
|
|
||||||
end
|
|
||||||
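The leaf operations above form a plain boolean algebra. A hypothetical `iex` session illustrating them (assuming the project compiles as shown; the `typing_ctx` argument is ignored by these functions, so `nil` suffices):

```elixir
iex> alias Tilly.BDD.AtomBoolOps
iex> AtomBoolOps.union_leaves(nil, true, false)
true
iex> AtomBoolOps.intersection_leaves(nil, true, false)
false
iex> AtomBoolOps.negation_leaf(nil, AtomBoolOps.empty_leaf())
true
iex> AtomBoolOps.test_leaf_value(AtomBoolOps.any_leaf())
:full
iex> AtomBoolOps.compare_elements(:a, :b)
:lt
```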
@ -1,87 +0,0 @@
defmodule Tilly.BDD.IntegerBoolOps do
  @moduledoc """
  BDD operations module for BDDs where elements are integers and leaves are booleans.
  """

  @doc """
  Compares two integer elements.
  Returns `:lt`, `:eq`, or `:gt`.
  """
  def compare_elements(elem1, elem2) when is_integer(elem1) and is_integer(elem2) do
    cond do
      elem1 < elem2 -> :lt
      elem1 > elem2 -> :gt
      true -> :eq
    end
  end

  @doc """
  Checks if two integer elements are equal.
  """
  def equal_element?(elem1, elem2) when is_integer(elem1) and is_integer(elem2) do
    elem1 == elem2
  end

  @doc """
  Hashes an integer element.
  """
  def hash_element(elem) when is_integer(elem) do
    elem
  end

  @doc """
  Returns the leaf value representing emptiness (`false`).
  """
  def empty_leaf(), do: false

  @doc """
  Returns the leaf value representing universality (`true`).
  """
  def any_leaf(), do: true

  @doc """
  Checks if the leaf value represents emptiness.
  """
  def is_empty_leaf?(leaf_val) when is_boolean(leaf_val) do
    leaf_val == false
  end

  @doc """
  Computes the union of two boolean leaf values.
  The `_typing_ctx` is ignored for this simple ops module.
  """
  def union_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
    leaf1 or leaf2
  end

  @doc """
  Computes the intersection of two boolean leaf values.
  The `_typing_ctx` is ignored for this simple ops module.
  """
  def intersection_leaves(_typing_ctx, leaf1, leaf2)
      when is_boolean(leaf1) and is_boolean(leaf2) do
    leaf1 and leaf2
  end

  @doc """
  Computes the negation of a boolean leaf value.
  The `_typing_ctx` is ignored for this simple ops module.
  """
  def negation_leaf(_typing_ctx, leaf) when is_boolean(leaf) do
    not leaf
  end

  # def difference_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
  #   leaf1 and (not leaf2)
  # end

  @doc """
  Tests a leaf value to determine if it represents an empty, full, or other set.
  For boolean leaves with integers, this mirrors `AtomBoolOps` and `StringBoolOps`.
  Returns `:empty`, `:full`, or `:other`.
  """
  def test_leaf_value(true), do: :full
  def test_leaf_value(false), do: :empty
  # If integer BDDs could have non-boolean leaves that are not empty/full:
  # def test_leaf_value(_other_leaf_value), do: :other
end
@ -1,124 +0,0 @@
defmodule Tilly.BDD.Node do
  @moduledoc """
  Defines the structure of BDD nodes and provides basic helper functions.

  BDD nodes can be one of the following Elixir terms:
  - `true`: Represents the universal set BDD.
  - `false`: Represents the empty set BDD.
  - `{:leaf, leaf_value_id}`: Represents a leaf node.
    `leaf_value_id`'s interpretation depends on the specific BDD's `ops_module`.
  - `{:split, element_id, positive_child_id, ignore_child_id, negative_child_id}`:
    Represents an internal decision node.
    `element_id` is the value being split upon.
    `positive_child_id`, `ignore_child_id`, and `negative_child_id` are IDs of other BDD nodes.
  """

  @typedoc "A BDD node representing the universal set."
  @type true_node :: true

  @typedoc "A BDD node representing the empty set."
  @type false_node :: false

  @typedoc "A BDD leaf node."
  @type leaf_node(leaf_value) :: {:leaf, leaf_value}

  @typedoc "A BDD split node."
  @type split_node(element, node_id) ::
          {:split, element, node_id, node_id, node_id}

  @typedoc "Any valid BDD node structure."
  @type t(element, leaf_value, node_id) ::
          true_node()
          | false_node()
          | leaf_node(leaf_value)
          | split_node(element, node_id)

  # --- Smart Constructors (Low-Level) ---

  @doc "Creates a true BDD node."
  @spec mk_true() :: true_node()
  def mk_true, do: true

  @doc "Creates a false BDD node."
  @spec mk_false() :: false_node()
  def mk_false, do: false

  @doc "Creates a leaf BDD node."
  @spec mk_leaf(leaf_value :: any()) :: leaf_node(any())
  def mk_leaf(leaf_value_id), do: {:leaf, leaf_value_id}

  @doc "Creates a split BDD node."
  @spec mk_split(
          element_id :: any(),
          positive_child_id :: any(),
          ignore_child_id :: any(),
          negative_child_id :: any()
        ) :: split_node(any(), any())
  def mk_split(element_id, positive_child_id, ignore_child_id, negative_child_id) do
    {:split, element_id, positive_child_id, ignore_child_id, negative_child_id}
  end

  # --- Predicates ---

  @doc "Checks if the node is a true node."
  @spec is_true?(node :: t(any(), any(), any())) :: boolean()
  def is_true?(true), do: true
  def is_true?(_other), do: false

  @doc "Checks if the node is a false node."
  @spec is_false?(node :: t(any(), any(), any())) :: boolean()
  def is_false?(false), do: true
  def is_false?(_other), do: false

  @doc "Checks if the node is a leaf node."
  @spec is_leaf?(node :: t(any(), any(), any())) :: boolean()
  def is_leaf?({:leaf, _value}), do: true
  def is_leaf?(_other), do: false

  @doc "Checks if the node is a split node."
  @spec is_split?(node :: t(any(), any(), any())) :: boolean()
  def is_split?({:split, _el, _p, _i, _n}), do: true
  def is_split?(_other), do: false

  # --- Accessors ---

  @doc """
  Returns the value of a leaf node.
  Raises an error if the node is not a leaf node.
  """
  @spec value(leaf_node :: leaf_node(any())) :: any()
  def value({:leaf, value_id}), do: value_id
  def value(other), do: raise(ArgumentError, "Not a leaf node: #{inspect(other)}")

  @doc """
  Returns the element of a split node.
  Raises an error if the node is not a split node.
  """
  @spec element(split_node :: split_node(any(), any())) :: any()
  def element({:split, element_id, _, _, _}), do: element_id
  def element(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")

  @doc """
  Returns the positive child ID of a split node.
  Raises an error if the node is not a split node.
  """
  @spec positive_child(split_node :: split_node(any(), any())) :: any()
  def positive_child({:split, _, p_child_id, _, _}), do: p_child_id
  def positive_child(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")

  @doc """
  Returns the ignore child ID of a split node.
  Raises an error if the node is not a split node.
  """
  @spec ignore_child(split_node :: split_node(any(), any())) :: any()
  def ignore_child({:split, _, _, i_child_id, _}), do: i_child_id
  def ignore_child(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")

  @doc """
  Returns the negative child ID of a split node.
  Raises an error if the node is not a split node.
  """
  @spec negative_child(split_node :: split_node(any(), any())) :: any()
  def negative_child({:split, _, _, _, n_child_id}), do: n_child_id
  def negative_child(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")
end
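A quick sketch of the constructors and accessors above, as they would behave in `iex` (illustrative only; the node IDs `0` and `1` are arbitrary placeholders here, not real interned IDs):

```elixir
iex> alias Tilly.BDD.Node
iex> node = Node.mk_split(:a, 1, 0, 0)
{:split, :a, 1, 0, 0}
iex> Node.is_split?(node)
true
iex> {Node.element(node), Node.positive_child(node), Node.ignore_child(node), Node.negative_child(node)}
{:a, 1, 0, 0}
iex> Node.value(Node.mk_leaf(42))
42
```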
@ -1,347 +0,0 @@
defmodule Tilly.BDD.Ops do
  @moduledoc """
  Generic BDD algorithms and smart constructors.
  These functions operate on BDD node IDs and use an `ops_module`
  to dispatch to specific element/leaf operations.
  """

  alias Tilly.BDD
  alias Tilly.BDD.Node

  @doc """
  Smart constructor for leaf nodes.
  Uses the `ops_module` to test if the `leaf_value` corresponds to
  an empty or universal set for that module.
  Returns `{new_typing_ctx, node_id}`.
  """
  def leaf(typing_ctx, leaf_value, ops_module) do
    case apply(ops_module, :test_leaf_value, [leaf_value]) do
      :empty ->
        {typing_ctx, BDD.false_node_id()}

      :full ->
        {typing_ctx, BDD.true_node_id()}

      :other ->
        logical_structure = Node.mk_leaf(leaf_value)
        BDD.get_or_intern_node(typing_ctx, logical_structure, ops_module)
    end
  end

  @doc """
  Smart constructor for split nodes. Applies simplification rules.
  Returns `{new_typing_ctx, node_id}`.
  """
  def split(typing_ctx, element, p_id, i_id, n_id, ops_module) do
    # Apply simplification rules. Order can be important.
    cond do
      # If the ignore and negative children are False, the result is the positive child.
      BDD.is_false_node?(typing_ctx, i_id) and
          BDD.is_false_node?(typing_ctx, n_id) ->
        {typing_ctx, p_id}

      # If the ignore child is True, the whole BDD is True.
      BDD.is_true_node?(typing_ctx, i_id) ->
        {typing_ctx, BDD.true_node_id()}

      # If the positive and negative children are the same.
      p_id == n_id ->
        if p_id == i_id do
          # All three children are identical.
          {typing_ctx, p_id}
        else
          # Result is p_id (or n_id) unioned with i_id.
          # This creates a potential mutual recursion with union_bdds,
          # which needs to be handled by the apply_op cache.
          union_bdds(typing_ctx, p_id, i_id)
        end

      # TODO: Add more simplification rules from CDuce bdd.ml `split` as needed.
      # e.g. if p=T, i=F, n=T -> True
      # e.g. if p=F, i=F, n=T -> not(x) relative to this BDD's element universe (complex)

      true ->
        # No further simplification rule applied; intern the node.
        logical_structure = Node.mk_split(element, p_id, i_id, n_id)
        BDD.get_or_intern_node(typing_ctx, logical_structure, ops_module)
    end
  end

  @doc """
  Computes the union of two BDDs.
  Returns `{new_typing_ctx, result_node_id}`.
  """
  def union_bdds(typing_ctx, bdd1_id, bdd2_id) do
    apply_op(typing_ctx, :union, bdd1_id, bdd2_id)
  end

  @doc """
  Computes the intersection of two BDDs.
  Returns `{new_typing_ctx, result_node_id}`.
  """
  def intersection_bdds(typing_ctx, bdd1_id, bdd2_id) do
    apply_op(typing_ctx, :intersection, bdd1_id, bdd2_id)
  end

  @doc """
  Computes the negation of a BDD.
  Returns `{new_typing_ctx, result_node_id}`.
  """
  def negation_bdd(typing_ctx, bdd_id) do
    # The second argument to apply_op is nil for unary operations like negation.
    apply_op(typing_ctx, :negation, bdd_id, nil)
  end

  @doc """
  Computes the difference of two BDDs (`bdd1 - bdd2`).
  Returns `{new_typing_ctx, result_node_id}`.
  Implemented as `bdd1 INTERSECTION (NEGATION bdd2)`.
  """
  def difference_bdd(typing_ctx, bdd1_id, bdd2_id) do
    {ctx, neg_bdd2_id} = negation_bdd(typing_ctx, bdd2_id)
    intersection_bdds(ctx, bdd1_id, neg_bdd2_id)
  end

  # Internal function to handle actual BDD operations, bypassing the cache for direct calls.
  defp do_union_bdds(typing_ctx, bdd1_id, bdd2_id) do
    # Canonical order for commutative operations is handled by apply_op key generation.

    # 1. Handle terminal cases
    cond do
      bdd1_id == bdd2_id -> {typing_ctx, bdd1_id}
      BDD.is_true_node?(typing_ctx, bdd1_id) -> {typing_ctx, BDD.true_node_id()}
      BDD.is_true_node?(typing_ctx, bdd2_id) -> {typing_ctx, BDD.true_node_id()}
      BDD.is_false_node?(typing_ctx, bdd1_id) -> {typing_ctx, bdd2_id}
      BDD.is_false_node?(typing_ctx, bdd2_id) -> {typing_ctx, bdd1_id}
      true -> perform_union(typing_ctx, bdd1_id, bdd2_id)
    end
  end
  defp perform_union(typing_ctx, bdd1_id, bdd2_id) do
    %{structure: s1, ops_module: ops_m1} = BDD.get_node_data(typing_ctx, bdd1_id)
    %{structure: s2, ops_module: ops_m2} = BDD.get_node_data(typing_ctx, bdd2_id)

    # For now, assume ops_modules must match for simplicity.
    # Production systems might need more complex logic or type errors here.
    if ops_m1 != ops_m2 do
      raise ArgumentError,
            "Cannot union BDDs with different ops_modules: #{inspect(ops_m1)} and #{inspect(ops_m2)}"
    end

    ops_m = ops_m1

    case {s1, s2} do
      # Both are leaves
      {{:leaf, v1}, {:leaf, v2}} ->
        new_leaf_val = apply(ops_m, :union_leaves, [typing_ctx, v1, v2])
        leaf(typing_ctx, new_leaf_val, ops_m)

      # s1 is a split, s2 is a leaf
      {{:split, x1, p1_id, i1_id, n1_id}, {:leaf, _v2}} ->
        # CDuce: split x1 p1 (i1 ++ b) n1
        {ctx, new_i1_id} = union_bdds(typing_ctx, i1_id, bdd2_id)
        split(ctx, x1, p1_id, new_i1_id, n1_id, ops_m)

      # s1 is a leaf, s2 is a split
      {{:leaf, _v1}, {:split, x2, p2_id, i2_id, n2_id}} ->
        # CDuce: split x2 p2 (i2 ++ a) n2 (symmetric to above)
        {ctx, new_i2_id} = union_bdds(typing_ctx, i2_id, bdd1_id)
        split(ctx, x2, p2_id, new_i2_id, n2_id, ops_m)

      # Both are splits
      {{:split, x1, p1_id, i1_id, n1_id}, {:split, x2, p2_id, i2_id, n2_id}} ->
        # Compare elements using the ops_module
        comp_result = apply(ops_m, :compare_elements, [x1, x2])

        cond do
          comp_result == :eq ->
            # Elements are equal, merge children
            {ctx0, new_p_id} = union_bdds(typing_ctx, p1_id, p2_id)
            {ctx1, new_i_id} = union_bdds(ctx0, i1_id, i2_id)
            {ctx2, new_n_id} = union_bdds(ctx1, n1_id, n2_id)
            split(ctx2, x1, new_p_id, new_i_id, new_n_id, ops_m)

          comp_result == :lt ->
            # x1 < x2
            # CDuce: split x1 p1 (i1 ++ b) n1
            {ctx, new_i1_id} = union_bdds(typing_ctx, i1_id, bdd2_id)
            split(ctx, x1, p1_id, new_i1_id, n1_id, ops_m)

          comp_result == :gt ->
            # x1 > x2
            # CDuce: split x2 p2 (i2 ++ a) n2
            {ctx, new_i2_id} = union_bdds(typing_ctx, i2_id, bdd1_id)
            split(ctx, x2, p2_id, new_i2_id, n2_id, ops_m)
        end
    end
  end

  defp do_intersection_bdds(typing_ctx, bdd1_id, bdd2_id) do
    # Canonical order handled by apply_op key generation.

    # Fast path for disjoint singleton BDDs
    case {BDD.get_node_data(typing_ctx, bdd1_id), BDD.get_node_data(typing_ctx, bdd2_id)} do
      {%{structure: {:split, x1, t, f, f}, ops_module: m},
       %{structure: {:split, x2, t, f, f}, ops_module: m}}
      when x1 != x2 ->
        {typing_ctx, BDD.false_node_id()}

      _ ->
        # 1. Handle terminal cases
        cond do
          bdd1_id == bdd2_id -> {typing_ctx, bdd1_id}
          BDD.is_false_node?(typing_ctx, bdd1_id) -> {typing_ctx, BDD.false_node_id()}
          BDD.is_false_node?(typing_ctx, bdd2_id) -> {typing_ctx, BDD.false_node_id()}
          BDD.is_true_node?(typing_ctx, bdd1_id) -> {typing_ctx, bdd2_id}
          BDD.is_true_node?(typing_ctx, bdd2_id) -> {typing_ctx, bdd1_id}
          true -> perform_intersection(typing_ctx, bdd1_id, bdd2_id)
        end
    end
  end

  defp perform_intersection(typing_ctx, bdd1_id, bdd2_id) do
    %{structure: s1, ops_module: ops_m1} = BDD.get_node_data(typing_ctx, bdd1_id)
    %{structure: s2, ops_module: ops_m2} = BDD.get_node_data(typing_ctx, bdd2_id)

    if ops_m1 != ops_m2 do
      raise ArgumentError,
            "Cannot intersect BDDs with different ops_modules: #{inspect(ops_m1)} and #{inspect(ops_m2)}"
    end

    ops_m = ops_m1

    case {s1, s2} do
      # Both are leaves
      {{:leaf, v1}, {:leaf, v2}} ->
        new_leaf_val = apply(ops_m, :intersection_leaves, [typing_ctx, v1, v2])
        leaf(typing_ctx, new_leaf_val, ops_m)

      # s1 is a split, s2 is a leaf
      {{:split, x1, p1_id, i1_id, n1_id}, {:leaf, _v2}} ->
        {ctx0, new_p1_id} = intersection_bdds(typing_ctx, p1_id, bdd2_id)
        {ctx1, new_i1_id} = intersection_bdds(ctx0, i1_id, bdd2_id)
        {ctx2, new_n1_id} = intersection_bdds(ctx1, n1_id, bdd2_id)
        split(ctx2, x1, new_p1_id, new_i1_id, new_n1_id, ops_m)

      # s1 is a leaf, s2 is a split
      {{:leaf, _v1}, {:split, x2, p2_id, i2_id, n2_id}} ->
        {ctx0, new_p2_id} = intersection_bdds(typing_ctx, bdd1_id, p2_id)
        {ctx1, new_i2_id} = intersection_bdds(ctx0, bdd1_id, i2_id)
        {ctx2, new_n2_id} = intersection_bdds(ctx1, bdd1_id, n2_id)
        split(ctx2, x2, new_p2_id, new_i2_id, new_n2_id, ops_m)

      # Both are splits
      {{:split, x1, p1_id, i1_id, n1_id}, {:split, x2, p2_id, i2_id, n2_id}} ->
        comp_result = apply(ops_m, :compare_elements, [x1, x2])

        cond do
          comp_result == :eq ->
            # CDuce: split x1 ((p1**(p2++i2))++(p2**i1)) (i1**i2) ((n1**(n2++i2))++(n2**i1))
            {ctx0, p2_u_i2} = union_bdds(typing_ctx, p2_id, i2_id)
            {ctx1, n2_u_i2} = union_bdds(ctx0, n2_id, i2_id)

            {ctx2, p1_i_p2ui2} = intersection_bdds(ctx1, p1_id, p2_u_i2)
            {ctx3, p2_i_i1} = intersection_bdds(ctx2, p2_id, i1_id)
            {ctx4, new_p_id} = union_bdds(ctx3, p1_i_p2ui2, p2_i_i1)

            {ctx5, new_i_id} = intersection_bdds(ctx4, i1_id, i2_id)

            {ctx6, n1_i_n2ui2} = intersection_bdds(ctx5, n1_id, n2_u_i2)
            {ctx7, n2_i_i1} = intersection_bdds(ctx6, n2_id, i1_id)
            {ctx8, new_n_id} = union_bdds(ctx7, n1_i_n2ui2, n2_i_i1)

            split(ctx8, x1, new_p_id, new_i_id, new_n_id, ops_m)

          # x1 < x2
          comp_result == :lt ->
            # CDuce: split x1 (p1 ** b) (i1 ** b) (n1 ** b) where b is bdd2
            {ctx0, new_p1_id} = intersection_bdds(typing_ctx, p1_id, bdd2_id)
            {ctx1, new_i1_id} = intersection_bdds(ctx0, i1_id, bdd2_id)
            {ctx2, new_n1_id} = intersection_bdds(ctx1, n1_id, bdd2_id)
            split(ctx2, x1, new_p1_id, new_i1_id, new_n1_id, ops_m)

          # x1 > x2
          comp_result == :gt ->
            # CDuce: split x2 (a ** p2) (a ** i2) (a ** n2) where a is bdd1
            {ctx0, new_p2_id} = intersection_bdds(typing_ctx, bdd1_id, p2_id)
            {ctx1, new_i2_id} = intersection_bdds(ctx0, bdd1_id, i2_id)
            {ctx2, new_n2_id} = intersection_bdds(ctx1, bdd1_id, n2_id)
            split(ctx2, x2, new_p2_id, new_i2_id, new_n2_id, ops_m)
        end
    end
  end
  defp do_negation_bdd(typing_ctx, bdd_id) do
    # 1. Handle terminal cases
    cond do
      BDD.is_true_node?(typing_ctx, bdd_id) -> {typing_ctx, BDD.false_node_id()}
      BDD.is_false_node?(typing_ctx, bdd_id) -> {typing_ctx, BDD.true_node_id()}
      true -> perform_negation(typing_ctx, bdd_id)
    end
  end

  defp perform_negation(typing_ctx, bdd_id) do
    %{structure: s, ops_module: ops_m} = BDD.get_node_data(typing_ctx, bdd_id)

    case s do
      # Leaf
      {:leaf, v} ->
        neg_leaf_val = apply(ops_m, :negation_leaf, [typing_ctx, v])
        leaf(typing_ctx, neg_leaf_val, ops_m)

      # Split
      {:split, x, p_id, i_id, n_id} ->
        # CDuce: ~~i ** split x (~~p) (~~(p++n)) (~~n)
        {ctx0, neg_i_id} = negation_bdd(typing_ctx, i_id)
        {ctx1, neg_p_id} = negation_bdd(ctx0, p_id)
        {ctx2, p_u_n_id} = union_bdds(ctx1, p_id, n_id)
        {ctx3, neg_p_u_n_id} = negation_bdd(ctx2, p_u_n_id)
        {ctx4, neg_n_id} = negation_bdd(ctx3, n_id)
        {ctx5, split_part_id} = split(ctx4, x, neg_p_id, neg_p_u_n_id, neg_n_id, ops_m)
        intersection_bdds(ctx5, neg_i_id, split_part_id)
    end
  end

  # --- Caching Wrapper for BDD Operations ---
  defp apply_op(typing_ctx, op_key, bdd1_id, bdd2_id) do
    cache_key = make_cache_key(op_key, bdd1_id, bdd2_id)
    bdd_store = Map.get(typing_ctx, :bdd_store)

    case Map.get(bdd_store.ops_cache, cache_key) do
      nil ->
        # Not in cache, compute it
        {new_typing_ctx, result_id} =
          case op_key do
            :union -> do_union_bdds(typing_ctx, bdd1_id, bdd2_id)
            :intersection -> do_intersection_bdds(typing_ctx, bdd1_id, bdd2_id)
            # bdd2_id is nil here
            :negation -> do_negation_bdd(typing_ctx, bdd1_id)
            _ -> raise "Unsupported op_key: #{op_key}"
          end

        # Store in cache.
        # IMPORTANT: Use new_typing_ctx (from the operation) to get the potentially updated bdd_store.
        current_bdd_store_after_op = Map.get(new_typing_ctx, :bdd_store)
        new_ops_cache = Map.put(current_bdd_store_after_op.ops_cache, cache_key, result_id)
        final_bdd_store_with_cache = %{current_bdd_store_after_op | ops_cache: new_ops_cache}
        # And put this updated bdd_store back into new_typing_ctx
        final_typing_ctx_with_cache =
          Map.put(new_typing_ctx, :bdd_store, final_bdd_store_with_cache)

        {final_typing_ctx_with_cache, result_id}

      cached_result_id ->
        {typing_ctx, cached_result_id}
    end
  end

  defp make_cache_key(:negation, bdd_id, nil), do: {:negation, bdd_id}

  defp make_cache_key(op_key, id1, id2) when op_key in [:union, :intersection] do
    # Canonical order for commutative binary operations
    if id1 <= id2, do: {op_key, id1, id2}, else: {op_key, id2, id1}
  end

  defp make_cache_key(op_key, id1, id2), do: {op_key, id1, id2}
end
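The cache-key canonicalization is what lets `apply_op` memoize commutative operations once per unordered pair of node IDs. Tracing the private `make_cache_key/3` clauses by hand (shown as bare calls for illustration only; the function is `defp` and not callable from outside the module):

```elixir
# union and intersection are commutative, so both argument orders
# normalize to the same cache key:
make_cache_key(:union, 7, 3)       # => {:union, 3, 7}
make_cache_key(:union, 3, 7)       # => {:union, 3, 7}
# negation is unary; its second argument is always nil:
make_cache_key(:negation, 5, nil)  # => {:negation, 5}
```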
@ -1,87 +0,0 @@
defmodule Tilly.BDD.StringBoolOps do
  @moduledoc """
  BDD operations module for sets of strings.
  Elements are strings, and leaf values are booleans.
  """

  @doc """
  Compares two strings.
  Returns `:lt`, `:eq`, or `:gt`.
  """
  def compare_elements(elem1, elem2) when is_binary(elem1) and is_binary(elem2) do
    cond do
      elem1 < elem2 -> :lt
      elem1 > elem2 -> :gt
      true -> :eq
    end
  end

  @doc """
  Checks if two strings are equal.
  """
  def equal_element?(elem1, elem2) when is_binary(elem1) and is_binary(elem2) do
    elem1 == elem2
  end

  @doc """
  Hashes a string.
  """
  def hash_element(elem) when is_binary(elem) do
    # :erlang.phash2 is suitable for term hashing
    :erlang.phash2(elem)
  end

  @doc """
  The leaf value representing an empty set of strings (`false`).
  """
  def empty_leaf(), do: false

  @doc """
  The leaf value representing the universal set of strings (`true`).
  """
  def any_leaf(), do: true

  @doc """
  Checks if a leaf value represents an empty set.
  """
  def is_empty_leaf?(leaf_val) when is_boolean(leaf_val) do
    leaf_val == false
  end

  @doc """
  Computes the union of two leaf values.
  `typing_ctx` is included for interface consistency, but not used for boolean leaves.
  """
  def union_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
    leaf1 or leaf2
  end

  @doc """
  Computes the intersection of two leaf values.
  `typing_ctx` is included for interface consistency, but not used for boolean leaves.
  """
  def intersection_leaves(_typing_ctx, leaf1, leaf2)
      when is_boolean(leaf1) and is_boolean(leaf2) do
    leaf1 and leaf2
  end

  @doc """
  Computes the negation of a leaf value.
  `typing_ctx` is included for interface consistency, but not used for boolean leaves.
  """
  def negation_leaf(_typing_ctx, leaf) when is_boolean(leaf) do
    not leaf
  end

  # def difference_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
  #   leaf1 and (not leaf2)
  # end

  @doc """
  Tests a leaf value to determine if it represents an empty, full, or other set.
  Returns `:empty`, `:full`, or `:other`.
  """
  def test_leaf_value(true), do: :full
  def test_leaf_value(false), do: :empty
  # def test_leaf_value(_other), do: :other
end
@ -1,57 +0,0 @@
defmodule Tilly.Type do
  @moduledoc """
  Defines the structure of a Type Descriptor (`Descr`) and provides
  helper functions for creating fundamental type descriptors.

  A Type Descriptor is a map representing a type. Each field in the map
  corresponds to a basic kind of type component (e.g., atoms, integers, pairs)
  and holds a BDD node ID. These BDDs represent the set of values
  allowed for that particular component of the type.
  """

  alias Tilly.BDD

  @doc """
  Returns a `Descr` map representing the empty type (Nothing).
  All BDD IDs in this `Descr` point to the canonical `false` BDD node.
  The `typing_ctx` is passed for consistency but not modified by this function.
  """
  def empty_descr(_typing_ctx) do
    false_id = BDD.false_node_id()

    %{
      atoms_bdd_id: false_id,
      integers_bdd_id: false_id,
      strings_bdd_id: false_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
      # Add other kinds as needed, e.g., for abstract types
    }
  end

  @doc """
  Returns a `Descr` map representing the universal type (Any).
  All value-kind BDD IDs in this `Descr` point to the canonical `true` BDD node;
  the absence marker stays `false`, since `Any` covers every value but not absence.
  The `typing_ctx` is passed for consistency but not modified by this function.
  """
  def any_descr(_typing_ctx) do
    true_id = BDD.true_node_id()

    %{
      atoms_bdd_id: true_id,
      integers_bdd_id: true_id,
      strings_bdd_id: true_id,
      pairs_bdd_id: true_id,
      records_bdd_id: true_id,
      functions_bdd_id: true_id,
      # For 'Any', absence is typically not included unless explicitly modeled.
      # If 'Any' should include the possibility of absence, this would be true_id.
      # This can be refined based on the desired semantics of 'Any';
      # CDuce's 'Any' does not include 'Absent'.
      absent_marker_bdd_id: BDD.false_node_id()
    }
  end
end
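The two constructors differ only in which canonical BDD node the value fields point at; a hypothetical `iex` check (assuming `Tilly.BDD` exposes the node-ID constants as shown earlier):

```elixir
iex> ctx = %{}  # the typing context is ignored by these constructors
iex> Tilly.Type.empty_descr(ctx).atoms_bdd_id == Tilly.BDD.false_node_id()
true
iex> Tilly.Type.any_descr(ctx).integers_bdd_id == Tilly.BDD.true_node_id()
true
iex> # 'Any' still excludes absence:
iex> Tilly.Type.any_descr(ctx).absent_marker_bdd_id == Tilly.BDD.false_node_id()
true
```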
@ -1,305 +0,0 @@
defmodule Tilly.Type.Ops do
  @moduledoc """
  Implements set-theoretic operations on Type Descriptors (`Descr` maps)
  and provides helper functions for constructing specific types.
  Operations work with interned `Descr` IDs.
  """

  alias Tilly.BDD
  alias Tilly.Type
  alias Tilly.Type.Store

  # Defines the fields in a Descr map that hold BDD IDs.
  # Order could matter if a specific iteration order were ever needed,
  # but for field-wise ops it does not.
  defp descr_fields do
    [
      :atoms_bdd_id,
      :integers_bdd_id,
      :strings_bdd_id,
      :pairs_bdd_id,
      :records_bdd_id,
      :functions_bdd_id,
      :absent_marker_bdd_id
    ]
  end

  # --- Core Set Operations ---

  @doc """
  Computes the union of two types represented by their `Descr` IDs.
  Returns `{new_typing_ctx, result_descr_id}`.
  """
  def union_types(typing_ctx, descr1_id, descr2_id) do
    apply_type_op(typing_ctx, :union, descr1_id, descr2_id)
  end

  @doc """
  Computes the intersection of two types represented by their `Descr` IDs.
  Returns `{new_typing_ctx, result_descr_id}`.
  """
  def intersection_types(typing_ctx, descr1_id, descr2_id) do
    apply_type_op(typing_ctx, :intersection, descr1_id, descr2_id)
  end

  @doc """
  Computes the negation of a type represented by its `Descr` ID.
  Returns `{new_typing_ctx, result_descr_id}`.
  """
  def negation_type(typing_ctx, descr_id) do
    apply_type_op(typing_ctx, :negation, descr_id, nil)
  end
  defp do_union_types(typing_ctx, descr1_id, descr2_id) do
    descr1 = Store.get_descr_by_id(typing_ctx, descr1_id)
    descr2 = Store.get_descr_by_id(typing_ctx, descr2_id)

    {final_ctx, result_fields_map} =
      Enum.reduce(descr_fields(), {typing_ctx, %{}}, fn field, {current_ctx, acc_fields} ->
        bdd1_id = Map.get(descr1, field)
        bdd2_id = Map.get(descr2, field)
        {new_ctx, result_bdd_id} = BDD.Ops.union_bdds(current_ctx, bdd1_id, bdd2_id)
        {new_ctx, Map.put(acc_fields, field, result_bdd_id)}
      end)

    Store.get_or_intern_descr(final_ctx, result_fields_map)
  end

  defp do_intersection_types(typing_ctx, descr1_id, descr2_id) do
    descr1 = Store.get_descr_by_id(typing_ctx, descr1_id)
    descr2 = Store.get_descr_by_id(typing_ctx, descr2_id)

    {final_ctx, result_fields_map} =
      Enum.reduce(descr_fields(), {typing_ctx, %{}}, fn field, {current_ctx, acc_fields} ->
        bdd1_id = Map.get(descr1, field)
        bdd2_id = Map.get(descr2, field)
        {new_ctx, result_bdd_id} = BDD.Ops.intersection_bdds(current_ctx, bdd1_id, bdd2_id)
        {new_ctx, Map.put(acc_fields, field, result_bdd_id)}
      end)

    Store.get_or_intern_descr(final_ctx, result_fields_map)
  end

  defp do_negation_type(typing_ctx, descr_id) do
    descr = Store.get_descr_by_id(typing_ctx, descr_id)

    {final_ctx, result_fields_map} =
      Enum.reduce(descr_fields(), {typing_ctx, %{}}, fn field, {current_ctx, acc_fields} ->
        bdd_id = Map.get(descr, field)

        {ctx_after_neg, result_bdd_id} =
          if field == :absent_marker_bdd_id do
            {current_ctx, BDD.false_node_id()}
          else
            BDD.Ops.negation_bdd(current_ctx, bdd_id)
          end

        {ctx_after_neg, Map.put(acc_fields, field, result_bdd_id)}
      end)

    # Re-evaluate context threading if BDD ops significantly alter it beyond caching
    # during the reduce. The primary context update happens with
    # Store.get_or_intern_descr; the reduce passes current_ctx, which accumulates
    # cache updates from BDD ops.
    Store.get_or_intern_descr(final_ctx, result_fields_map)
  end
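The field-wise reduction above can be pictured with plain sets standing in for interned BDD ids. The following is a minimal standalone sketch, not Tilly's real API: `DescrSketch` and its field names are illustrative only.

```elixir
# Sketch: a Descr as a map from kind to a set of allowed values, with
# union/intersection computed independently per field. MapSet stands in
# for the per-kind BDDs used by the real code.
defmodule DescrSketch do
  @fields [:atoms, :integers, :strings]

  def empty, do: Map.new(@fields, &{&1, MapSet.new()})

  def union(d1, d2),
    do: Map.new(@fields, fn f -> {f, MapSet.union(d1[f], d2[f])} end)

  def intersection(d1, d2),
    do: Map.new(@fields, fn f -> {f, MapSet.intersection(d1[f], d2[f])} end)

  # A type is empty iff every field is empty -- mirrors is_empty_type?/2 below.
  def empty?(d), do: Enum.all?(@fields, fn f -> MapSet.size(d[f]) == 0 end)
end

a = %{DescrSketch.empty() | atoms: MapSet.new([:ok])}
b = %{DescrSketch.empty() | integers: MapSet.new([1, 2])}
u = DescrSketch.union(a, b)
```

Because the kinds partition the value space, every set operation distributes over the fields, which is exactly why the real code never has to combine BDDs across fields.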
  # --- Caching Wrapper for Type Operations ---
  defp apply_type_op(typing_ctx, op_key, descr1_id, descr2_id) do
    cache_key = make_type_op_cache_key(op_key, descr1_id, descr2_id)
    type_store = Map.get(typing_ctx, :type_store)

    case Map.get(type_store.ops_cache, cache_key) do
      nil ->
        # Not in cache; compute it.
        {new_typing_ctx, result_id} =
          case op_key do
            :union -> do_union_types(typing_ctx, descr1_id, descr2_id)
            :intersection -> do_intersection_types(typing_ctx, descr1_id, descr2_id)
            # descr2_id is nil here
            :negation -> do_negation_type(typing_ctx, descr1_id)
            _ -> raise "Unsupported type op_key: #{op_key}"
          end

        # Store in cache (important: use new_typing_ctx to get the potentially
        # updated type_store), then put the updated type_store back into
        # new_typing_ctx.
        current_type_store_after_op = Map.get(new_typing_ctx, :type_store)
        new_ops_cache = Map.put(current_type_store_after_op.ops_cache, cache_key, result_id)
        final_type_store_with_cache = %{current_type_store_after_op | ops_cache: new_ops_cache}
        final_typing_ctx_with_cache = Map.put(new_typing_ctx, :type_store, final_type_store_with_cache)
        {final_typing_ctx_with_cache, result_id}

      cached_result_id ->
        {typing_ctx, cached_result_id}
    end
  end

  defp make_type_op_cache_key(:negation, descr_id, nil), do: {:negation, descr_id}

  defp make_type_op_cache_key(op_key, id1, id2) when op_key in [:union, :intersection] do
    if id1 <= id2, do: {op_key, id1, id2}, else: {op_key, id2, id1}
  end

  defp make_type_op_cache_key(op_key, id1, id2), do: {op_key, id1, id2}
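The operand ordering in `make_type_op_cache_key/3` matters for hit rate: union and intersection are commutative, so sorting the ids lets `op(a, b)` and `op(b, a)` share one cache entry. The normalization in isolation:

```elixir
# Normalized cache keys: commutative ops sort their operand ids, so both
# argument orders produce the same key (mirrors make_type_op_cache_key/3).
make_key = fn
  :negation, id, nil ->
    {:negation, id}

  op, id1, id2 when op in [:union, :intersection] ->
    if id1 <= id2, do: {op, id1, id2}, else: {op, id2, id1}
end

key_ab = make_key.(:union, 7, 3)
key_ba = make_key.(:union, 3, 7)
```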
  # --- Utility Functions ---

  @doc """
  Checks if a type represented by its `Descr` ID is the empty type (Nothing).
  Does not modify `typing_ctx`.
  """
  def is_empty_type?(typing_ctx, descr_id) do
    descr_map = Store.get_descr_by_id(typing_ctx, descr_id)

    Enum.all?(descr_fields(), fn field ->
      bdd_id = Map.get(descr_map, field)
      BDD.is_false_node?(typing_ctx, bdd_id)
    end)
  end
  # --- Construction Helper Functions ---

  @doc """
  Gets the `Descr` ID for the canonical 'Nothing' type.
  """
  def get_type_nothing(typing_ctx) do
    empty_descr_map = Type.empty_descr(typing_ctx)
    Store.get_or_intern_descr(typing_ctx, empty_descr_map)
  end

  @doc """
  Gets the `Descr` ID for the canonical 'Any' type.
  """
  def get_type_any(typing_ctx) do
    any_descr_map = Type.any_descr(typing_ctx)
    Store.get_or_intern_descr(typing_ctx, any_descr_map)
  end

  @doc """
  Creates a type `Descr` ID representing a single atom literal.
  """
  def create_atom_literal_type(typing_ctx, atom_value) when is_atom(atom_value) do
    false_id = BDD.false_node_id()
    true_id = BDD.true_node_id()

    # Create a BDD for the single atom: Split(atom_value, True, False, False).
    # The ops_module Tilly.BDD.AtomBoolOps is crucial here.
    {ctx1, atom_bdd_id} =
      BDD.Ops.split(typing_ctx, atom_value, true_id, false_id, false_id, Tilly.BDD.AtomBoolOps)

    descr_map = %{
      atoms_bdd_id: atom_bdd_id,
      integers_bdd_id: false_id,
      strings_bdd_id: false_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
    }

    Store.get_or_intern_descr(ctx1, descr_map)
  end

  @doc """
  Creates a type `Descr` ID representing a single integer literal.
  """
  def create_integer_literal_type(typing_ctx, integer_value) when is_integer(integer_value) do
    false_id = BDD.false_node_id()
    true_id = BDD.true_node_id()

    {ctx1, integer_bdd_id} =
      BDD.Ops.split(typing_ctx, integer_value, true_id, false_id, false_id, Tilly.BDD.IntegerBoolOps)

    descr_map = %{
      atoms_bdd_id: false_id,
      integers_bdd_id: integer_bdd_id,
      strings_bdd_id: false_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
    }

    Store.get_or_intern_descr(ctx1, descr_map)
  end

  @doc """
  Creates a type `Descr` ID representing a single string literal.
  """
  def create_string_literal_type(typing_ctx, string_value) when is_binary(string_value) do
    false_id = BDD.false_node_id()
    true_id = BDD.true_node_id()

    {ctx1, string_bdd_id} =
      BDD.Ops.split(typing_ctx, string_value, true_id, false_id, false_id, Tilly.BDD.StringBoolOps)

    descr_map = %{
      atoms_bdd_id: false_id,
      integers_bdd_id: false_id,
      strings_bdd_id: string_bdd_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
    }

    Store.get_or_intern_descr(ctx1, descr_map)
  end

  @doc """
  Gets the `Descr` ID for the type representing all atoms.
  """
  def get_primitive_type_any_atom(typing_ctx) do
    false_id = BDD.false_node_id()
    # This BDD must be interned with :atom_bool_ops if it is not universal.
    true_id = BDD.true_node_id()

    # A BDD representing "all atoms" is structurally just True, but it must be
    # associated with :atom_bool_ops. BDD.true_node_id() is universal; a
    # specific "true for atoms" would be interned via
    # BDD.get_or_intern_node(ctx, Node.mk_true(), :atom_bool_ops).
    # However, BDD.Ops functions fetch the ops_module from their operands,
    # so the universal true/false nodes work correctly here.

    descr_map = %{
      atoms_bdd_id: true_id,
      integers_bdd_id: false_id,
      strings_bdd_id: false_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
    }

    Store.get_or_intern_descr(typing_ctx, descr_map)
  end

  @doc """
  Gets the `Descr` ID for the type representing all integers.
  """
  def get_primitive_type_any_integer(typing_ctx) do
    false_id = BDD.false_node_id()
    true_id = BDD.true_node_id()

    descr_map = %{
      atoms_bdd_id: false_id,
      integers_bdd_id: true_id,
      strings_bdd_id: false_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
    }

    Store.get_or_intern_descr(typing_ctx, descr_map)
  end

  @doc """
  Gets the `Descr` ID for the type representing all strings.
  """
  def get_primitive_type_any_string(typing_ctx) do
    false_id = BDD.false_node_id()
    true_id = BDD.true_node_id()

    descr_map = %{
      atoms_bdd_id: false_id,
      integers_bdd_id: false_id,
      strings_bdd_id: true_id,
      pairs_bdd_id: false_id,
      records_bdd_id: false_id,
      functions_bdd_id: false_id,
      absent_marker_bdd_id: false_id
    }

    Store.get_or_intern_descr(typing_ctx, descr_map)
  end
end
@ -1,79 +0,0 @@
defmodule Tilly.Type.Store do
  @moduledoc """
  Manages the interning (hash-consing) of Type Descriptor maps (`Descr` maps).
  Ensures that for any unique `Descr` map there is one canonical integer ID.
  The type store is expected to be part of a `typing_ctx` map under the key `:type_store`.
  """

  @initial_next_descr_id 0

  @doc """
  Initializes the type store within the typing context.
  """
  def init_type_store(typing_ctx) when is_map(typing_ctx) do
    type_store = %{
      descrs_by_structure: %{},
      structures_by_id: %{},
      next_descr_id: @initial_next_descr_id,
      # Cache for type operations: {op_key, descr_id1, descr_id2} -> result_descr_id
      ops_cache: %{}
    }

    Map.put(typing_ctx, :type_store, type_store)
  end

  @doc """
  Gets an existing Type Descriptor ID or interns a new one if it is not already in the store.

  All BDD IDs within the `descr_map` must already be canonical integer IDs.

  Returns a tuple `{new_typing_ctx, descr_id}`.
  The `typing_ctx` is updated if a new `Descr` is interned.
  """
  def get_or_intern_descr(typing_ctx, descr_map) do
    type_store = Map.get(typing_ctx, :type_store)

    unless type_store do
      raise ArgumentError, "Type store not initialized in typing_ctx. Call init_type_store/1 first."
    end

    # The descr_map itself is the interning key.
    # Assumes the BDD IDs within descr_map are already canonical.
    case Map.get(type_store.descrs_by_structure, descr_map) do
      nil ->
        # Descr not found; intern it.
        descr_id = type_store.next_descr_id

        new_descrs_by_structure = Map.put(type_store.descrs_by_structure, descr_map, descr_id)
        new_structures_by_id = Map.put(type_store.structures_by_id, descr_id, descr_map)
        new_next_descr_id = descr_id + 1

        new_type_store =
          %{
            type_store
            | descrs_by_structure: new_descrs_by_structure,
              structures_by_id: new_structures_by_id,
              next_descr_id: new_next_descr_id
          }

        new_typing_ctx = Map.put(typing_ctx, :type_store, new_type_store)
        {new_typing_ctx, descr_id}

      existing_descr_id ->
        # Descr found.
        {typing_ctx, existing_descr_id}
    end
  end

  @doc """
  Retrieves the `Descr` map from the type store given its ID.
  Returns the `Descr` map or `nil` if not found.
  """
  def get_descr_by_id(typing_ctx, descr_id) do
    with %{type_store: %{structures_by_id: structures_by_id}} <- typing_ctx,
         descr when not is_nil(descr) <- Map.get(structures_by_id, descr_id) do
      descr
    else
      _ -> nil
    end
  end
end
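The interning scheme above reduces structural equality to integer comparison: the structure is the lookup key and a counter hands out fresh ids. A minimal self-contained sketch of the same mechanism (field names here are illustrative, not Tilly's):

```elixir
# Minimal hash-consing sketch mirroring get_or_intern_descr/2: look the
# structure up by value; on a miss, assign the next id and record both
# directions of the mapping.
intern = fn store, structure ->
  case Map.get(store.by_structure, structure) do
    nil ->
      id = store.next_id

      store = %{
        store
        | by_structure: Map.put(store.by_structure, structure, id),
          by_id: Map.put(store.by_id, id, structure),
          next_id: id + 1
      }

      {store, id}

    id ->
      {store, id}
  end
end

store = %{by_structure: %{}, by_id: %{}, next_id: 0}
{store, id_a} = intern.(store, %{atoms: :any})
{store, id_b} = intern.(store, %{atoms: :none})
{store, id_a2} = intern.(store, %{atoms: :any})
```

Interning the same structure twice yields the same id, so downstream code (like the ops cache) can key on small integers instead of whole maps.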
31
mix.exs
@ -1,31 +0,0 @@
defmodule Til.MixProject do
  use Mix.Project

  def project do
    [
      app: :pl,
      version: "0.1.0",
      elixir: "~> 1.15",
      start_permanent: Mix.env() == :prod,
      elixirc_paths: elixirc_paths(Mix.env()),
      deps: deps()
    ]
  end

  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_), do: ["lib"]

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger]
    ]
  end

  # Run "mix help deps" to learn about dependencies.
  defp deps do
    [
      # {:dep_from_hexpm, "~> 0.3.0"},
      # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"}
    ]
  end
end
365
out
Normal file
@ -0,0 +1,365 @@

######################## TDD FOR SET-THEORETIC TYPES (v2 Path Edition) ########################
#
# 0. High-level summary –––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
#
# • Each decision-node variable is a **path** – a list of **steps** that describe how to
#   reach a primitive predicate inside a value.
#
#       [
#         {:tag, :is_tuple},                     # primary discriminator
#         {:field, 0},                           # tuple element 0
#         {:primitive, {:value_eq, lit(:foo)}}   # primitive test on that element
#       ]
#
# • **Global order** is Erlang term ordering on the path list (lexicographic).
#   Canonicality proofs need no extra machinery.
#
# • **Step alphabet (finite):**
#       {:tag, primary_atom}
#       {:field, index}                       – tuple element
#       {:head} | {:tail}                     – list
#       {:key, lit_id} | {:has_key, lit_id}   – map predicates
#       {:primitive, primitive_test}
#       {:typevar, var_atom}
#
# • Literals are interned to small integers via `Tdd.Path.lit/1`. No runtime node-IDs ever
#   appear in a variable key.
#
# • Everything else (hash-consing, operations, tests) works unchanged.
#
# ----------------------------------------------------------------------------------------------
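The claim that Erlang term ordering suffices can be checked directly: a path that is a prefix of another sorts before all of its extensions, and sibling steps compare element-wise.

```elixir
# Erlang term ordering on step lists already yields the lexicographic
# variable order: prefixes sort first, then siblings by their step tuples.
p_atom  = [{:tag, :is_atom}]
p_tuple = [{:tag, :is_tuple}]
p_el0   = [{:tag, :is_tuple}, {:field, 0}]
p_el1   = [{:tag, :is_tuple}, {:field, 1}]

sorted = Enum.sort([p_el1, p_atom, p_el0, p_tuple])
```

So `Enum.sort/1` with the default comparator is all the "variable ordering" machinery a canonical TDD needs here.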
defmodule Tdd.Path do
  @moduledoc """
  Helpers for building and analysing **path-based predicate keys**.

  A *path* is a list of *steps* (tuples). Erlang term ordering on the list is
  the global order required for canonical TDDs.
  """

  # -- Literal interning -------------------------------------------------------

  @lit_table :tdd_literal_table

  def start_link, do: :ets.new(@lit_table, [:named_table, :set, :public])

  @doc "Return a stable small integer for any literal (atom/int/binary/etc.)"
  def lit(x) do
    case :ets.lookup(@lit_table, x) do
      [{^x, id}] ->
        id

      [] ->
        id = :erlang.unique_integer([:positive, :monotonic])
        :ets.insert(@lit_table, {x, id})
        id
    end
  end

  # -- Step constructors -------------------------------------------------------

  def tag(t), do: [{:tag, t}]
  def is_atom, do: tag(:is_atom)
  def is_tuple, do: tag(:is_tuple)
  def is_integer, do: tag(:is_integer)
  def is_list, do: tag(:is_list)

  def value_eq(val), do: [{:primitive, {:value_eq, lit(val)}}]
  def int_interval({lo, hi}), do: [{:primitive, {:interval, :int, {lo, hi}}}]

  def tuple_size_eq(n),
    do: [{:tag, :is_tuple}, {:primitive, {:length, :tuple, n}}]

  def tuple_field(i, inner_path),
    do: [{:tag, :is_tuple}, {:field, i} | inner_path]

  def list_is_empty, do: [{:tag, :is_list}, {:primitive, {:length, :list, 0}}]
  def list_head(inner), do: [{:tag, :is_list}, {:head} | inner]
  def list_tail(inner), do: [{:tag, :is_list}, {:tail} | inner]
  def list_all(inner_tid), do: [{:tag, :is_list}, {:primitive, {:all_elements, inner_tid}}]

  # Type variables
  def typevar(v), do: [{:typevar, v}]

  # -- Path inspection utilities ----------------------------------------------

  def primary_tag([{:tag, t} | _]), do: t
  def primary_tag(_), do: :unknown

  def primitive([{:primitive, p}]), do: p
  def primitive([_ | rest]), do: primitive(rest)
  def primitive(_), do: nil

  def starts_with?(path, prefix), do: Enum.take(path, length(prefix)) == prefix
end
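`lit/1`'s ETS table gives each literal a stable id for the lifetime of the VM. The mechanism in isolation, using an anonymous table instead of the named one above:

```elixir
# Standalone sketch of ETS-backed literal interning as in Tdd.Path.lit/1:
# look the literal up; on a miss, mint a unique integer and store it.
table = :ets.new(:lit_sketch, [:set, :public])

lit = fn x ->
  case :ets.lookup(table, x) do
    [{^x, id}] ->
      id

    [] ->
      id = :erlang.unique_integer([:positive, :monotonic])
      :ets.insert(table, {x, id})
      id
  end
end

id1 = lit.(:foo)
id2 = lit.(:foo)
id3 = lit.("bar")
```

Repeated calls for the same literal return the same small integer, so paths built at different times for the same value compare equal.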
# ----------------------------------------------------------------------------------------------
defmodule Tdd.Core do
  @moduledoc """
  Core hash-consed DAG engine (unchanged except that it now stores *paths* as variables).
  """

  # … (IDENTICAL to the previous Core; unchanged code elided for brevity.)
  # Copy your previous `Tdd.Core` implementation here verbatim – it already
  # treats the variable term as opaque, so no edits are required.
end
# ----------------------------------------------------------------------------------------------
defmodule Tdd.Variables do
  @moduledoc false
  alias Tdd.Path

  # Primary tags
  def v_is_atom, do: Path.is_atom()
  def v_is_tuple, do: Path.is_tuple()
  def v_is_integer, do: Path.is_integer()
  def v_is_list, do: Path.is_list()

  # Atom predicates
  def v_atom_eq(a), do: Path.is_atom() ++ Path.value_eq(a)

  # Integer predicates (encode eq/lt/gt via intervals)
  def v_int_eq(n), do: Path.is_integer() ++ Path.int_interval({n, n})
  def v_int_lt(n), do: Path.is_integer() ++ Path.int_interval({:neg_inf, n - 1})
  def v_int_gt(n), do: Path.is_integer() ++ Path.int_interval({n + 1, :pos_inf})

  # Tuple predicates
  def v_tuple_size_eq(n), do: Path.tuple_size_eq(n)

  def v_tuple_elem_pred(i, inner_var_path),
    do: Path.tuple_field(i, inner_var_path)

  # List predicates
  def v_list_is_empty, do: Path.list_is_empty()
  def v_list_head_pred(inner), do: Path.list_head(inner)
  def v_list_tail_pred(inner), do: Path.list_tail(inner)
  def v_list_all_elements_are(tid), do: Path.list_all(tid)
end
# ----------------------------------------------------------------------------------------------
defmodule Tdd.PredicateLogic do
  @moduledoc false
  alias Tdd.Path
  alias Tdd.Variables, as: V

  # 1. Mutual exclusivity of primary tags
  @primary [:is_atom, :is_tuple, :is_integer, :is_list]
  @primary_pairs for a <- @primary, b <- @primary, a < b, do: {a, b}

  # ------------ public API ----------------------------------------------------

  def saturate(facts) do
    with {:ok, s} <- static_exclusions(facts),
         :ok <- further_checks(s) do
      {:ok, s}
    else
      _ -> :contradiction
    end
  end

  def check_implication(var, constraints) do
    case {Path.primary_tag(var), Path.primitive(var)} do
      {:is_atom, {:value_eq, id}} ->
        atom_implication(id, constraints)

      {:is_tuple, {:length, :tuple, n}} ->
        tuple_size_implication(n, constraints)

      {:is_integer, {:interval, :int, intv}} ->
        int_implication(intv, constraints)

      _ ->
        :unknown
    end
  end

  # ------------ static exclusions --------------------------------------------

  defp static_exclusions(facts) do
    Enum.reduce_while(@primary_pairs, {:ok, facts}, fn {a, b}, {:ok, acc} ->
      # Two distinct primary tags can never both hold for the same value.
      case {Map.get(acc, Path.tag(a)), Map.get(acc, Path.tag(b))} do
        {true, true} -> {:halt, :contradiction}
        _ -> {:cont, {:ok, acc}}
      end
    end)
  end

  # ------------ fine-grained checks ------------------------------------------

  defp further_checks(facts) do
    cond do
      atom_val_conflict?(facts) -> :contradiction
      tuple_size_conflict?(facts) -> :contradiction
      int_interval_conflict?(facts) -> :contradiction
      list_structure_conflict?(facts) -> :contradiction
      true -> :ok
    end
  end

  # Atom value clash: two different atom literals asserted true.
  # (Singleton-list generators filter non-matching primitives instead of
  # raising a MatchError, which a bare `=` filter in a comprehension would.)
  defp atom_val_conflict?(facts) do
    vals =
      for {path, true} <- facts,
          Path.primary_tag(path) == :is_atom,
          {:value_eq, id} <- [Path.primitive(path)],
          do: id

    length(Enum.uniq(vals)) > 1
  end

  # Tuple size clash: two different sizes asserted true.
  defp tuple_size_conflict?(facts) do
    sizes = for {p, true} <- facts, {:length, :tuple, n} <- [Path.primitive(p)], do: n
    length(Enum.uniq(sizes)) > 1
  end

  # Integer interval contradiction: the running intersection becomes empty.
  defp int_interval_conflict?(facts) do
    intervals =
      for {p, true} <- facts, {:interval, :int, intv} <- [Path.primitive(p)], do: intv

    case intervals do
      [] ->
        false

      _ ->
        result =
          Enum.reduce_while(intervals, {:neg_inf, :pos_inf}, fn {lo, hi}, {cur_lo, cur_hi} ->
            lo = max_lo(lo, cur_lo)
            hi = min_hi(hi, cur_hi)
            if compare_bounds(lo, hi) <= 0, do: {:cont, {lo, hi}}, else: {:halt, :conflict}
          end)

        result == :conflict
    end
  end

  # Bound-aware min/max: plain min/max would mis-order integers against the
  # :neg_inf / :pos_inf atoms, since numbers sort before atoms in term order.
  defp max_lo(:neg_inf, b), do: b
  defp max_lo(a, :neg_inf), do: a
  defp max_lo(a, b), do: max(a, b)

  defp min_hi(:pos_inf, b), do: b
  defp min_hi(a, :pos_inf), do: a
  defp min_hi(a, b), do: min(a, b)

  defp compare_bounds(:neg_inf, _), do: -1
  defp compare_bounds(_, :pos_inf), do: -1
  defp compare_bounds(:pos_inf, _), do: 1
  defp compare_bounds(_, :neg_inf), do: 1
  defp compare_bounds(a, b), do: a - b

  # List head/tail facts contradict an asserted empty list.
  defp list_structure_conflict?(facts) do
    empty? = Map.get(facts, V.v_list_is_empty()) == true

    if empty? do
      Enum.any?(facts, fn {p, _} ->
        Path.starts_with?(p, Path.list_head([])) or Path.starts_with?(p, Path.list_tail([]))
      end)
    else
      false
    end
  end

  # ------------ implication helpers ------------------------------------------

  defp atom_implication(id, constr) do
    case {Map.get(constr, V.v_atom_eq(id)), find_any_atom_true(constr)} do
      {true, _} -> true
      {false, _} -> false
      {nil, true_id} when not is_nil(true_id) and true_id != id -> false
      _ -> :unknown
    end
  end

  defp find_any_atom_true(constr) do
    Enum.find_value(constr, fn
      {p, true} ->
        # Remote calls are not allowed in guards, so check the tag in the body.
        with :is_atom <- Path.primary_tag(p),
             {:value_eq, id} <- Path.primitive(p) do
          id
        else
          _ -> nil
        end

      _ ->
        nil
    end)
  end

  defp tuple_size_implication(n, constr) do
    case {Map.get(constr, V.v_tuple_size_eq(n)), any_tuple_size_true(constr)} do
      {true, _} -> true
      {false, _} -> false
      {nil, true_n} when not is_nil(true_n) and true_n != n -> false
      _ -> :unknown
    end
  end

  defp any_tuple_size_true(constr) do
    Enum.find_value(constr, fn
      {p, true} ->
        case Path.primitive(p) do
          {:length, :tuple, sz} -> sz
          _ -> nil
        end

      _ ->
        nil
    end)
  end

  defp int_implication(intv, constr) do
    # Intersect the current interval with the candidate; if the current
    # interval is already a subset of the candidate, it is implied.
    case current_int_interval(constr) do
      {:ok, {lo, hi}} -> if subset?(lo, hi, intv), do: true, else: :unknown
      :none -> :unknown
      :contradiction -> true
    end
  end

  defp current_int_interval(constr) do
    intvs = for {p, true} <- constr, {:interval, :int, iv} <- [Path.primitive(p)], do: iv

    case intvs do
      [] ->
        :none

      list ->
        Enum.reduce_while(list, {:ok, {:neg_inf, :pos_inf}}, fn {lo, hi}, {:ok, {cur_lo, cur_hi}} ->
          lo = max_lo(lo, cur_lo)
          hi = min_hi(hi, cur_hi)

          if compare_bounds(lo, hi) <= 0,
            do: {:cont, {:ok, {lo, hi}}},
            else: {:halt, :contradiction}
        end)
    end
  end

  defp subset?(lo, hi, {lo2, hi2}) do
    compare_bounds(lo2, lo) <= 0 and compare_bounds(hi, hi2) <= 0
  end
end
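The interval bookkeeping can be exercised on its own. This sketch keeps only finite integer bounds; the `:neg_inf`/`:pos_inf` handling from the module above is omitted for brevity.

```elixir
# Intersect {lo, hi} intervals left to right; an empty intersection is a
# contradiction. Finite integer bounds only, unlike the full version.
intersect = fn intervals ->
  Enum.reduce_while(intervals, :none, fn
    {lo, hi}, :none ->
      {:cont, {:ok, {lo, hi}}}

    {lo, hi}, {:ok, {cur_lo, cur_hi}} ->
      lo = max(lo, cur_lo)
      hi = min(hi, cur_hi)
      if lo <= hi, do: {:cont, {:ok, {lo, hi}}}, else: {:halt, :contradiction}
  end)
end

ok = intersect.([{1, 10}, {5, 20}])
conflict = intersect.([{1, 3}, {7, 9}])
```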
# ----------------------------------------------------------------------------------------------
# >>> PUBLIC API – same as before but calling the new variable builders <<<

defmodule Tdd do
  @moduledoc """
  Public API (constructors, ops, tests) – rewritten to call **path-based variables**.
  """

  alias Tdd.Core
  alias Tdd.Variables, as: V
  alias Tdd.PredicateLogic

  # … UNCHANGED operational code (sum/intersect/negate/simplify) – copy from the previous
  # file; the only change is that every old `{cat, …}` literal is replaced with `V.*` helpers.

  # -- System init ------------------------------------------------------------
  def init_tdd_system do
    Tdd.Path.start_link()
    Core.init()
  end

  # -- Constructors (use the new variable paths) ------------------------------
  def type_any, do: Core.true_id()
  def type_none, do: Core.false_id()

  def type_atom, do: Core.make_node(V.v_is_atom(), type_any(), type_none(), type_none())
  def type_tuple, do: Core.make_node(V.v_is_tuple(), type_any(), type_none(), type_none())
  def type_integer, do: Core.make_node(V.v_is_integer(), type_any(), type_none(), type_none())
  def type_list, do: Core.make_node(V.v_is_list(), type_any(), type_none(), type_none())

  def type_atom_literal(a),
    do: intersect(type_atom(), Core.make_node(V.v_atom_eq(a), type_any(), type_none(), type_none()))

  def type_int_eq(n),
    do: intersect(type_integer(), Core.make_node(V.v_int_eq(n), type_any(), type_none(), type_none()))

  def type_int_lt(n),
    do: intersect(type_integer(), Core.make_node(V.v_int_lt(n), type_any(), type_none(), type_none()))

  def type_int_gt(n),
    do: intersect(type_integer(), Core.make_node(V.v_int_gt(n), type_any(), type_none(), type_none()))

  def type_empty_tuple,
    do: intersect(type_tuple(), Core.make_node(V.v_tuple_size_eq(0), type_any(), type_none(), type_none()))

  # … the rest of the constructors are identical, with all variable references changed to V.* …

  # NOTE: The big bodies of intersect/sum/negate/simplify etc. are copied from
  # the previous file *unchanged* – they already treat variables opaquely.
end

# ----------------------------------------------------------------------------------------------
# >>> TEST SUITE – unchanged behaviour <<<

# Copy all existing tests exactly – they rely on the public API, not internals.
# (Omitted here to save space; paste from the original file.)

###############################################################################################
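`Tdd.Core`'s body is elided above, but its two essential invariants from the working notes — the elimination rule (a node whose branches coincide reduces to that branch) and hash-consing — can be sketched independently. The process dictionary stands in for the real Core state; `CoreSketch` and its names are illustrative only.

```elixir
# Sketch of a reduced, hash-consed make_node/4: identical children collapse
# to the child itself, and structurally equal nodes share one id.
defmodule CoreSketch do
  def init do
    Process.put(:node_ids, %{})
    Process.put(:next_id, 2)
  end

  def true_id, do: 1
  def false_id, do: 0

  # Elimination rule: yes == no == dc means the test is irrelevant.
  def make_node(_var, child, child, child), do: child

  def make_node(var, yes, no, dc) do
    key = {var, yes, no, dc}
    ids = Process.get(:node_ids)

    case Map.get(ids, key) do
      nil ->
        id = Process.get(:next_id)
        Process.put(:next_id, id + 1)
        Process.put(:node_ids, Map.put(ids, key, id))
        id

      id ->
        id
    end
  end
end

CoreSketch.init()
n1 = CoreSketch.make_node([{:tag, :is_atom}], 1, 0, 0)
n2 = CoreSketch.make_node([{:tag, :is_atom}], 1, 0, 0)
collapsed = CoreSketch.make_node([{:tag, :is_tuple}], 1, 1, 1)
```

Together the two rules give the "reduced" half of an ROBDD: equal subgraphs get one id, and irrelevant tests never produce a node at all.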
9
session
@ -1,9 +0,0 @@
/drop
/add lib/til/typer.ex
/add lib/til/typer/environment.ex
/add lib/til/typer/expression_typer.ex
/add lib/til/typer/interner.ex
/add lib/til/typer/subtype_checker.ex
/add lib/til/typer/types.ex
/add project.md
/read-only Conventions.md
99
set_types
99
set_types
@ -1,99 +0,0 @@
The core idea is to replace the current map-based type representations with a system centered around **canonicalized type descriptors (`Tilly.Type.Descr`)**, where each descriptor internally uses **Binary Decision Diagrams (BDDs)** to represent sets of values for different kinds of types (integers, atoms, pairs, etc.).

Here's a plan for adaptation:

**Overarching concerns:**

- Let's use bare maps instead of structs. Structs are closed, and I want to keep the option of the user of the compiler adding new fields later.

**I. Foundational BDD and Type Descriptor Infrastructure (New Modules)**

This phase focuses on building the core components described in `set_types.md`. These will mostly be new modules.

1. **`Tilly.BDD.Node` (New):**
   * Define the Elixir tagged tuples for BDD nodes: `false`, `true`, `{:leaf, leaf_value_id}`, `{:split, element_id, p_child_id, i_child_id, n_child_id}`.
   * Include smart constructors.
2. **`Tilly.BDD.Store` (New):**
   * Implement the hash-consing mechanism for BDD nodes.
   * This store will be part of the compiler `ctx` map.
   * Key function: `get_or_intern_node(typing_ctx, logical_structure_tuple) :: {new_typing_ctx, node_id}`.
3. **`Tilly.BDD.ElementProtocol` and `Tilly.BDD.LeafProtocol` (New):**
   * Define these protocols. Implementations will be provided for basic elements (e.g., atoms, integers for splitting) and basic leaves (e.g., booleans for simple set membership).
4. **`Tilly.BDD.Ops` (New):**
   * Implement core BDD operations: `union_bdds`, `intersection_bdds`, `negation_bdd`, `difference_bdd`.
   * These functions will take BDD node IDs and the `typing_ctx`, returning a new BDD node ID.
   * Implement smart constructors for `Split` and `Leaf` nodes that use the `Tilly.BDD.Store` and apply simplification rules.
5. **`Tilly.Type.Descr` (New):**
   * Define the Elixir struct. It will hold IDs of canonical BDDs for each basic kind (e.g., `:atoms_bdd_id`, `:integers_bdd_id`, `:pairs_bdd_id`, etc., mirroring CDuce's `descr` fields).
6. **`Tilly.Type.Store` (New, or replaces `Til.Typer.Interner` logic for types):**
   * Manages the interning of `Tilly.Type.Descr` structs. A `Descr` is canonical if its constituent BDD IDs are canonical and the combination is unique.
   * This store will also be part of the `typing_ctx`.
7. **`Tilly.Type.Ops` (New):**
   * Implement `union_types(descr1_id, descr2_id, typing_ctx)`, `intersection_types`, `negation_type`. These operate on `Descr` IDs by performing field-wise BDD operations using `Tilly.BDD.Ops`, then interning the resulting `Descr`.
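The hash-consing store at the heart of this phase can be sketched in a few lines. This is only an illustration of the interning idea, not the real `Tilly.BDD.Store` API: the module name and the `{store, id}` return shape are assumptions.

```elixir
# Hypothetical sketch of a hash-consing store for BDD nodes.
# Structurally equal node tuples always receive the same id, so
# isomorphic subgraphs are shared automatically.
defmodule BddStore do
  # The store is a bare map, matching the "bare maps instead of structs" concern.
  def new, do: %{next_id: 0, by_structure: %{}, by_id: %{}}

  # Returns the existing id for a structurally equal node, or interns a new one.
  def get_or_intern_node(store, structure) do
    case Map.fetch(store.by_structure, structure) do
      {:ok, id} ->
        {store, id}

      :error ->
        id = store.next_id

        store = %{
          store
          | next_id: id + 1,
            by_structure: Map.put(store.by_structure, structure, id),
            by_id: Map.put(store.by_id, id, structure)
        }

        {store, id}
    end
  end
end
```

Interning the same `{:split, …}` tuple twice yields the same id, which is what makes "isomorphic subgraph" checks a cheap integer comparison.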
**II. Adapting Existing Typer Modules**

This phase involves refactoring existing modules to use the new infrastructure. The `typing_ctx` will need to be threaded through all relevant type-checking functions.

1. **`Til.Typer.Types` (Major Refactor):**
   * Functions like `get_primitive_type(:integer)` will now use the new infrastructure to construct/retrieve an interned `Tilly.Type.Descr` ID for the integer type (e.g., a `Descr` where `integers_bdd_id` points to a "true" BDD for integers, and the others are "false").
   * The concept of predefined type *maps* will be replaced by predefined canonical `Descr` IDs.
2. **`Til.Typer.Interner` (Major Refactor/Integration with `Tilly.Type.Store` and `Tilly.BDD.Store`):**
   * Its role in interning type *maps* will be replaced. The new stores will handle BDD node and `Descr` interning.
   * `populate_known_types` will pre-populate the `typing_ctx` with canonical `Descr` IDs for base types like `any`, `integer`, `atom`, `nothing`.
3. **`Til.Typer` and `Til.Typer.ExpressionTyper` (Significant Refactor):**
   * `infer_type_for_node_ast` (in `ExpressionTyper` or similar):
     * For literals (e.g., `123`, `:foo`), it will construct the corresponding `Descr` ID (e.g., an integer `Descr` with a BDD representing only `123`).
     * For operations that combine types (e.g., if an `if` expression's branches have types A and B, the result is `A | B`), it will use `Tilly.Type.Ops.union_types`.
   * `resolve_type_specifier_node`:
     * For basic type names (`Integer`, `Atom`), it will fetch their canonical `Descr` IDs.
     * For type expressions like `(or TypeA TypeB)`, it will recursively resolve `TypeA` and `TypeB` to `Descr` IDs, then use `Tilly.Type.Ops.union_types`. Similarly for `(and ...)` and `(not ...)`.
   * The `type_id` field in AST nodes will now store the ID of an interned `Tilly.Type.Descr`.
4. **`Til.Typer.SubtypeChecker` (Complete Rewrite):**
   * `is_subtype?(descr_a_id, descr_b_id, typing_ctx)` will be implemented as:
     1. `descr_not_b_id = Tilly.Type.Ops.negation_type(descr_b_id, typing_ctx)`.
     2. `intersection_id = Tilly.Type.Ops.intersection_types(descr_a_id, descr_not_b_id, typing_ctx)`.
     3. `is_empty_type(intersection_id, typing_ctx)`.
   * `is_empty_type(descr_id, typing_ctx)` checks if all BDDs within the `Descr` are the canonical `False` BDD.
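The three-step subtype check above is the classic set-theoretic reduction: `A <: B` iff `A ∩ ¬B = ∅`. A minimal sketch, with MapSets over a tiny finite universe standing in for the per-kind BDDs (all names here are assumptions, not the real Tilly API):

```elixir
# Illustration of is_subtype? as negation + intersection + emptiness.
defmodule SubtypeSketch do
  @universe MapSet.new([:a, :b, :c])

  # Negation is complement against the universe of values.
  def negation(descr), do: MapSet.difference(@universe, descr)
  def intersection(d1, d2), do: MapSet.intersection(d1, d2)
  def empty?(descr), do: MapSet.size(descr) == 0

  # A <: B reduces to emptiness of A ∩ ¬B.
  def is_subtype?(a, b), do: a |> intersection(negation(b)) |> empty?()
end
```

For example `{:a}` is a subtype of `{:a, :b}` because `{:a} ∩ {:c}` is empty, while `{:a, :c}` is not, because `:c` survives the intersection.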
**III. Handling Specific Type Kinds from `project.md`**

The existing type kinds in `project.md` need to be mapped to the new `Descr` model:

* **Primitive Types:** `%{type_kind: :primitive, name: :integer}` becomes a canonical `Descr` ID where `integers_bdd_id` is "all integers" and the other BDDs are empty.
* **Literal Types:** `%{type_kind: :literal, value: 42}` becomes a `Descr` ID where `integers_bdd_id` represents only `42`.
* **Union/Intersection/Negation Types:** These are no longer explicit `type_kind`s. They are results of operations in `Tilly.Type.Ops` combining other `Descr` objects.
* **Function, List, Tuple, Map Types:**
  * These will be represented by `Descr` objects where the corresponding BDD (e.g., `functions_bdd_id`, `pairs_bdd_id` for lists/tuples, `records_bdd_id` for maps) is non-empty.
  * The structure of these BDDs will be more complex, as outlined in CDuce (e.g., BDDs whose leaves are other BDDs, or BDDs splitting on type variables/labels). This is a more advanced step.
  * Initially, `list_type(element_descr_id)` might create a `Descr` where `pairs_bdd_id` points to a BDD representing pairs whose first element matches `element_descr_id` and whose second is recursively a list or nil.
  * The detailed map type representation from `project.md` (`known_elements`, `index_signature`) will need to be encoded into the structure of the `records_bdd_id`. This will be a complex part, drawing heavily from how CDuce handles record types with BDDs.
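The "field-wise" combination that makes unions and intersections disappear as explicit type kinds can be sketched as follows. MapSets stand in for the per-kind BDD ids, and the module and kind names are assumptions mirroring the plan, not a real API:

```elixir
# A descriptor holds one set per basic kind; a type-level union is just
# the per-kind union of each field, computed independently.
defmodule DescrOpsSketch do
  @kinds [:atoms, :integers, :pairs]

  # The bottom type: every kind is empty.
  def empty_descr, do: Map.new(@kinds, fn k -> {k, MapSet.new()} end)

  # union_types: combine the two descriptors kind by kind.
  def union_types(d1, d2) do
    Map.new(@kinds, fn k -> {k, MapSet.union(d1[k], d2[k])} end)
  end

  # A descriptor is empty iff every kind's set is empty.
  def empty?(d), do: Enum.all?(@kinds, fn k -> MapSet.size(d[k]) == 0 end)
end
```

So the union of "the integer 1" and "the atom :foo" is a single descriptor whose `:integers` and `:atoms` fields are both non-empty, with no explicit union node anywhere.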
**IV. Parser (`lib/til/parser.ex`)**

* No immediate changes are required for the AST structure itself.
* If new syntax for type operations (e.g., `(or Int Atom)`) is desired directly in source code (as opposed to only being used by `resolve_type_specifier_node`), the parser will need to be updated to produce AST nodes that `resolve_type_specifier_node` can interpret.

**V. Documentation (`project.md`)**

* The "Type Representation" sections will need a complete overhaul to describe the new `Tilly.Type.Descr` and BDD-based system.

**Suggested Phased Approach:**

1. **Phase 0: Core BDD Infrastructure:** Implement `Tilly.BDD.{Node, Store, Ops, Protocols}`. Focus on BDDs for simple sets first (e.g., sets of atoms, integers).
2. **Phase 1: `Tilly.Type.Descr` and Basic Types:** Implement `Tilly.Type.Descr`, `Tilly.Type.Store`, and `Tilly.Type.Ops`. Implement construction of `Descr` IDs for primitives (`any`, `nothing`, `integer`, `atom`) and literals.
3. **Phase 2: Typer Integration for Primitives:** Refactor `Til.Typer.Types`, `Til.Typer.Interner`, and parts of `Til.Typer.ExpressionTyper` to use the new system for primitive and literal types. Update `resolve_type_specifier_node` for basic type names and simple `(or)`, `(and)`, `(not)` forms.
4. **Phase 3: Subtyping:** Implement the new `Til.Typer.SubtypeChecker`.
5. **Phase 4: Constructed Types (Iterative):**
   * Start with pairs/tuples, then lists. Design their BDD representations within `Tilly.Type.Descr`.
   * Tackle records/maps. This will be challenging and will require careful mapping of `project.md`'s map features to CDuce-style record BDDs.
   * Address function types.
6. **Phase 5: Advanced Features:** Polymorphism (type variables in BDDs), refinement types.

This is a substantial refactoring. The key is to build the BDD foundation correctly, as everything else will depend on it. The `typing_ctx` will become a critical piece of data passed throughout the type system.

> Tokens: 77k sent, 2.3k received. Cost: $0.12 message, $0.15 session.

#### /edit
@ -1,195 +0,0 @@
defmodule Til.TestHelpers do
  import ExUnit.Assertions

  alias Til.Parser
  # For type-checking related helpers
  alias Til.Typer
  alias Til.AstUtils

  # --- Node Access Helpers ---

  def get_file_node_from_map(nodes_map) do
    file_node =
      Enum.find(Map.values(nodes_map), fn node ->
        # Ensure the map is an AST node before checking its type
        Map.has_key?(node, :ast_node_type) && node.ast_node_type == :file
      end)

    refute is_nil(file_node), "File node not found in #{inspect(nodes_map)}"
    file_node
  end

  def get_node_by_id(nodes_map, node_id) do
    node = Map.get(nodes_map, node_id)
    refute is_nil(node), "Node with ID #{inspect(node_id)} not found in nodes_map."
    node
  end

  def get_first_child_node(nodes_map, parent_node_id \\ nil) do
    parent_node =
      if is_nil(parent_node_id) do
        get_file_node_from_map(nodes_map)
      else
        get_node_by_id(nodes_map, parent_node_id)
      end

    children_ids = Map.get(parent_node, :children, [])

    if Enum.empty?(children_ids) do
      flunk("Parent node #{parent_node.id} has no children, cannot get first child node.")
    end

    get_node_by_id(nodes_map, hd(children_ids))
  end

  def get_nth_child_node(nodes_map, index, parent_node_id \\ nil) do
    parent_node =
      if is_nil(parent_node_id) do
        get_file_node_from_map(nodes_map)
      else
        get_node_by_id(nodes_map, parent_node_id)
      end

    children_ids = Map.get(parent_node, :children, [])

    unless index >= 0 && index < length(children_ids) do
      flunk(
        "Child node at index #{index} not found for parent #{parent_node.id}. Parent has #{length(children_ids)} children. Children IDs: #{inspect(children_ids)}"
      )
    end

    get_node_by_id(nodes_map, Enum.at(children_ids, index))
  end

  # --- Combined Parse/TypeCheck and Get Node Helpers ---
  # These return {node, nodes_map}

  def parse_and_get_first_node(source_string, file_name \\ "test_source") do
    {:ok, parsed_nodes_map} = Parser.parse(source_string, file_name)
    {get_first_child_node(parsed_nodes_map), parsed_nodes_map}
  end

  def parse_and_get_nth_node(source_string, index, file_name \\ "test_source") do
    {:ok, parsed_nodes_map} = Parser.parse(source_string, file_name)
    {get_nth_child_node(parsed_nodes_map, index), parsed_nodes_map}
  end

  def typecheck_and_get_first_node(source_string, file_name \\ "test_source") do
    {:ok, parsed_nodes_map} = Parser.parse(source_string, file_name)
    {:ok, typed_nodes_map} = Typer.type_check(parsed_nodes_map)
    {get_first_child_node(typed_nodes_map), typed_nodes_map}
  end

  def typecheck_and_get_nth_node(source_string, index, file_name \\ "test_source") do
    {:ok, parsed_nodes_map} = Parser.parse(source_string, file_name)
    {:ok, typed_nodes_map} = Typer.type_check(parsed_nodes_map)
    {get_nth_child_node(typed_nodes_map, index), typed_nodes_map}
  end

  # --- Type Assertion Helpers ---

  # Strips :id fields recursively and resolves _id suffixed keys
  # from a type definition structure, using nodes_map for lookups.
  def deep_strip_id(type_definition, nodes_map) do
    cond do
      is_struct(type_definition, MapSet) ->
        type_definition
        |> MapSet.to_list()
        # Recursively call on elements
        |> Enum.map(&deep_strip_id(&1, nodes_map))
        |> MapSet.new()

      is_map(type_definition) ->
        # Remove its own :id.
        map_without_id = Map.delete(type_definition, :id)

        # Recursively process its values, resolving _id keys.
        # Special handling for type_annotation_mismatch error details.
        # Check if this map is a type definition itself (has :type_kind) before specific error checks.
        if Map.has_key?(type_definition, :type_kind) &&
             type_definition.type_kind == :error &&
             type_definition.reason == :type_annotation_mismatch do
          actual_type_clean =
            case Map.get(type_definition, :actual_type_id) do
              nil -> nil
              id -> deep_strip_id(Map.get(nodes_map, id), nodes_map)
            end

          expected_type_clean =
            case Map.get(type_definition, :expected_type_id) do
              nil -> nil
              id -> deep_strip_id(Map.get(nodes_map, id), nodes_map)
            end

          %{
            type_kind: :error,
            reason: :type_annotation_mismatch,
            actual_type: actual_type_clean,
            expected_type: expected_type_clean
          }
        else
          # Standard processing for other maps
          Enum.reduce(map_without_id, %{}, fn {original_key, original_value}, acc_map ->
            {final_key, final_value} =
              cond do
                # Handle :element_type_id -> :element_type
                original_key == :element_type_id && is_atom(original_value) ->
                  resolved_def = Map.get(nodes_map, original_value)
                  {:element_type, deep_strip_id(resolved_def, nodes_map)}

                # Handle :key_type_id -> :key_type
                original_key == :key_type_id && is_atom(original_value) ->
                  resolved_def = Map.get(nodes_map, original_value)
                  {:key_type, deep_strip_id(resolved_def, nodes_map)}

                # Handle :value_type_id -> :value_type (e.g., in index_signature or %{value_type_id: ..., optional: ...})
                original_key == :value_type_id && is_atom(original_value) ->
                  resolved_def = Map.get(nodes_map, original_value)
                  {:value_type, deep_strip_id(resolved_def, nodes_map)}

                is_map(original_value) ->
                  {original_key, deep_strip_id(original_value, nodes_map)}

                # Handles lists of type defs
                is_list(original_value) ->
                  {original_key, deep_strip_id(original_value, nodes_map)}

                true ->
                  {original_key, original_value}
              end

            Map.put(acc_map, final_key, final_value)
          end)
        end

      is_list(type_definition) ->
        # Recursively call on elements for lists of type definitions
        Enum.map(type_definition, &deep_strip_id(&1, nodes_map))

      # Literals, atoms, numbers, nil, etc. (leaf nodes in the type structure)
      true ->
        type_definition
    end
  end

  def assert_node_typed_as(node, typed_nodes_map, expected_type_definition_clean) do
    type_key = node.type_id

    assert is_atom(type_key),
           "Type ID for node #{inspect(node.id)} (#{node.raw_string}) should be an atom key, got: #{inspect(type_key)}"

    actual_type_definition_from_map = Map.get(typed_nodes_map, type_key)

    refute is_nil(actual_type_definition_from_map),
           "Type definition for key #{inspect(type_key)} (from node #{node.id}, raw: '#{node.raw_string}') not found in nodes_map."

    actual_type_definition_clean =
      deep_strip_id(actual_type_definition_from_map, typed_nodes_map)

    assert actual_type_definition_clean == expected_type_definition_clean,
           "Type mismatch for node #{node.id} ('#{node.raw_string}').\nExpected (clean):\n#{inspect(expected_type_definition_clean, pretty: true, limit: :infinity)}\nGot (clean):\n#{inspect(actual_type_definition_clean, pretty: true, limit: :infinity)}"
  end

  def inspect_nodes(node_map) do
    AstUtils.build_debug_ast_data(node_map)
    |> IO.inspect(pretty: true, limit: :infinity)
  end
end
@ -1 +0,0 @@
ExUnit.start()
@ -1,38 +0,0 @@
defmodule Til.AdhocTest do
  @moduledoc """
  Adhoc tests for quick syntax checking and compiler features.
  These tests are not part of the main test suite and are used for
  quick manual checks.
  """
  use ExUnit.Case, async: true
  alias Til.TestHelpers

  describe "Adhoc tests for quick syntax checking and compiler features" do
    # test "pretty_print_ast with nested structures and errors" do
    #   source = """
    #   (defun my_func [a b]
    #     (add a m{key 'val})
    #     (some_list 1 2
    #   """
    #
    #   {:ok, nodes_map} =
    #     Parser.parse(source, "adhoc_error_test.til")
    #
    #   AstUtils.pretty_print_ast(nodes_map)
    #   |> IO.puts()
    #
    #   AstUtils.build_debug_ast_data(nodes_map)
    #   |> IO.inspect(label: "AST Nodes", pretty: true, limit: :infinity)
    # end
    # end

    test "asdasd" do
      source = """
      (defun :asd)
      """

      TestHelpers.typecheck_and_get_first_node(source)
      # |> IO.inspect(label: "First Node", pretty: true, limit: :infinity)
    end
  end
end
@ -1,21 +0,0 @@
defmodule Til.FileParseTest do
  use ExUnit.Case, async: true

  alias Til.Parser
  alias Til.AstUtils
  alias Til.TestHelpers

  describe "test File parsing features" do
    test "parse multiple top-level s-exps into 1 `file` ast node" do
      source = """
      (defun)
      (defun)
      """

      {:ok, nodes_map} =
        Parser.parse(source, "top-level-sexps.til")

      assert TestHelpers.get_file_node_from_map(nodes_map).ast_node_type == :file
    end
  end
end
@ -1,167 +0,0 @@
defmodule Til.ListParserTest do
  use ExUnit.Case, async: true
  alias Til.Parser
  import Til.TestHelpers

  describe "List Parsing" do
    test "parses an empty list []" do
      source = "[]"
      {:ok, nodes_map} = Parser.parse(source)
      file_node = get_file_node_from_map(nodes_map)
      list_node_id = hd(file_node.children)
      list_node = Map.get(nodes_map, list_node_id)

      assert list_node.ast_node_type == :list_expression
      assert list_node.raw_string == "[]"
      assert list_node.children == []
      assert list_node.parsing_error == nil

      # Location check
      # "[]"
      # ^ offset 0, line 1, col 1
      #  ^ offset 1, line 1, col 2
      #   ^ offset 2, line 1, col 3 (end position, exclusive for offset, inclusive for col)
      assert list_node.location == [0, 1, 1, 2, 1, 3]

      # file_node is already fetched and used to get list_node
      assert file_node.children == [list_node.id]
    end

    test "parses a list of integers [1 2 3]" do
      source = "[1 2 3]"
      {:ok, nodes_map} = Parser.parse(source)
      file_node = get_file_node_from_map(nodes_map)
      list_node_id = hd(file_node.children)
      list_node = Map.get(nodes_map, list_node_id)

      assert list_node.ast_node_type == :list_expression
      assert list_node.raw_string == "[1 2 3]"
      assert list_node.parsing_error == nil
      assert length(list_node.children) == 3

      # Location check
      # "[1 2 3]"
      # ^ offset 0, line 1, col 1
      #        ^ offset 7, line 1, col 8
      assert list_node.location == [0, 1, 1, 7, 1, 8]

      # Check children
      child1_id = Enum.at(list_node.children, 0)
      child2_id = Enum.at(list_node.children, 1)
      child3_id = Enum.at(list_node.children, 2)

      child1 = Map.get(nodes_map, child1_id)
      child2 = Map.get(nodes_map, child2_id)
      child3 = Map.get(nodes_map, child3_id)

      assert child1.ast_node_type == :literal_integer
      assert child1.value == 1
      assert child1.raw_string == "1"
      assert child1.parent_id == list_node.id
      # "[1 2 3]"
      #  ^ offset 1, line 1, col 2
      #   ^ offset 2, line 1, col 3
      assert child1.location == [1, 1, 2, 2, 1, 3]

      assert child2.ast_node_type == :literal_integer
      assert child2.value == 2
      assert child2.raw_string == "2"
      assert child2.parent_id == list_node.id
      # "[1 2 3]"
      #    ^ offset 3, line 1, col 4
      #     ^ offset 4, line 1, col 5
      assert child2.location == [3, 1, 4, 4, 1, 5]

      assert child3.ast_node_type == :literal_integer
      assert child3.value == 3
      assert child3.raw_string == "3"
      assert child3.parent_id == list_node.id
      # "[1 2 3]"
      #      ^ offset 5, line 1, col 6
      #       ^ offset 6, line 1, col 7
      assert child3.location == [5, 1, 6, 6, 1, 7]

      # file_node is already fetched and used to get list_node
      assert file_node.children == [list_node.id]
    end

    test "parses an unclosed list [1 2" do
      source = "[1 2"
      {:ok, nodes_map} = Parser.parse(source)
      file_node = get_file_node_from_map(nodes_map)
      list_node_id = hd(file_node.children)
      list_node = Map.get(nodes_map, list_node_id)

      assert list_node.ast_node_type == :list_expression
      # Raw string is what was consumed for the list
      assert list_node.raw_string == "[1 2"
      assert list_node.parsing_error == "Unclosed list"
      # Children that were successfully parsed
      assert length(list_node.children) == 2

      # Location check for the unclosed list node
      # "[1 2"
      # ^ offset 0, line 1, col 1
      #     ^ offset 4, line 1, col 5 (end of consumed input for this node)
      assert list_node.location == [0, 1, 1, 4, 1, 5]

      child1 = get_node_by_id(nodes_map, Enum.at(list_node.children, 0))
      child2 = get_node_by_id(nodes_map, Enum.at(list_node.children, 1))

      assert child1.value == 1
      assert child2.value == 2

      # file_node is already fetched and used to get list_node
      assert file_node.children == [list_node.id]
    end

    test "parses an unexpected closing bracket ] at top level" do
      source = "]"
      {:ok, nodes_map} = Parser.parse(source)
      file_node = get_file_node_from_map(nodes_map)
      error_node_id = hd(file_node.children)
      error_node = Map.get(nodes_map, error_node_id)

      # Or a more specific error type if desired
      assert error_node.ast_node_type == :unknown
      assert error_node.raw_string == "]"
      assert error_node.parsing_error == "Unexpected ']'"

      # Location check
      # "]"
      # ^ offset 0, line 1, col 1
      #  ^ offset 1, line 1, col 2
      assert error_node.location == [0, 1, 1, 1, 1, 2]

      # file_node is already fetched and used to get error_node
      assert file_node.children == [error_node.id]
    end

    test "parses an unexpected closing bracket ] inside S-expression (foo ])" do
      source = "(foo ])"
      {:ok, nodes_map} = Parser.parse(source)
      file_node = get_file_node_from_map(nodes_map)
      # S-expression is the top-level node
      sexpr_node_id = hd(file_node.children)
      sexpr_node = Map.get(nodes_map, sexpr_node_id)

      assert sexpr_node.ast_node_type == :s_expression
      assert sexpr_node.raw_string == "(foo ])"
      # The S-expression itself is not unclosed
      assert sexpr_node.parsing_error == nil
      # 'foo' and the error node for ']'
      assert length(sexpr_node.children) == 2

      # First child 'foo'
      foo_node = get_node_by_id(nodes_map, Enum.at(sexpr_node.children, 0))
      assert foo_node.ast_node_type == :symbol
      assert foo_node.name == "foo"

      # Second child is the error node for ']'
      error_node = get_node_by_id(nodes_map, Enum.at(sexpr_node.children, 1))
      assert error_node.ast_node_type == :unknown
      assert error_node.raw_string == "]"
      assert error_node.parsing_error == "Unexpected ']'"
      assert error_node.parent_id == sexpr_node.id
      # Location check for ']'
      # "(foo ])"
      #      ^ offset 5, line 1, col 6
      #       ^ offset 6, line 1, col 7
      assert error_node.location == [5, 1, 6, 6, 1, 7]
    end
  end
end
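The location checks in the tests above all follow one six-element convention: `[start_offset, start_line, start_col, end_offset, end_line, end_col]`, with 0-based offsets and 1-based line/column, and the end position exclusive for the offset. A small helper sketch (the parser's real internals are not shown here, so this is an assumption-laden illustration, not `Til.Parser` code):

```elixir
# Compute the six-element location a node spanning source[start, end) would get.
defmodule LocationSketch do
  # {offset, line, col} triple for a 0-based offset into source.
  def position(source, offset) do
    prefix = String.slice(source, 0, offset)
    lines = String.split(prefix, "\n")
    # line is 1-based: number of newlines before the offset, plus one.
    line = length(lines)
    # col is 1-based within the current line.
    col = String.length(List.last(lines)) + 1
    [offset, line, col]
  end

  def location(source, start_offset, end_offset) do
    position(source, start_offset) ++ position(source, end_offset)
  end
end
```

Under this reading, the empty list `"[]"` spanning offsets 0..2 yields `[0, 1, 1, 2, 1, 3]`, matching the assertion in the first test.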
@ -1,241 +0,0 @@
defmodule Til.MapParserTest do
  use ExUnit.Case, async: true
  alias Til.Parser
  alias Til.TestHelpers

  describe "parse/2 - Map Expressions" do
    test "parses an empty map" do
      source = "m{}"
      {:ok, nodes_map} = Parser.parse(source)
      # file_node + map_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      map_node = TestHelpers.get_first_child_node(nodes_map)

      assert map_node.ast_node_type == :map_expression
      assert map_node.parent_id == file_node.id
      assert map_node.children == []
      # "m{}"
      assert map_node.location == [0, 1, 1, 3, 1, 4]
      assert map_node.raw_string == "m{}"
    end

    test "parses a simple map with symbol keys and values" do
      source = "m{key1 val1 key2 val2}"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node + 1 map_expression node + 4 symbol nodes = 6 nodes
      assert map_size(nodes_map) == 6
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      map_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(map_node)
      assert map_node.ast_node_type == :map_expression
      assert map_node.parent_id == file_node.id
      assert length(map_node.children) == 4

      children_nodes = Enum.map(map_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
      assert Enum.map(children_nodes, & &1.name) == ["key1", "val1", "key2", "val2"]

      Enum.each(children_nodes, fn child ->
        assert child.parent_id == map_node.id
        assert child.ast_node_type == :symbol
      end)

      # "m{key1 val1 key2 val2}"
      assert map_node.location == [0, 1, 1, 22, 1, 23]
      assert map_node.raw_string == "m{key1 val1 key2 val2}"
    end

    test "parses a map with mixed type values" do
      source = "m{s 'a string' i 123 sym value}"
      {map_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)

      # 1 file_node + 1 map, 3 keys (symbols), 1 string val, 1 int val, 1 symbol val = 1 + 1 + 3 + 3 = 8 nodes
      assert map_size(nodes_map) == 8
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(map_node)
      assert map_node.ast_node_type == :map_expression
      assert map_node.parent_id == file_node.id
      assert length(map_node.children) == 6

      children_nodes = Enum.map(map_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      # s
      assert Enum.at(children_nodes, 0).ast_node_type == :symbol
      assert Enum.at(children_nodes, 0).name == "s"
      # 'a string'
      assert Enum.at(children_nodes, 1).ast_node_type == :literal_string
      assert Enum.at(children_nodes, 1).value == "a string"
      # i
      assert Enum.at(children_nodes, 2).ast_node_type == :symbol
      assert Enum.at(children_nodes, 2).name == "i"
      # 123
      assert Enum.at(children_nodes, 3).ast_node_type == :literal_integer
      assert Enum.at(children_nodes, 3).value == 123
      # sym
      assert Enum.at(children_nodes, 4).ast_node_type == :symbol
      assert Enum.at(children_nodes, 4).name == "sym"
      # value
      assert Enum.at(children_nodes, 5).ast_node_type == :symbol
      assert Enum.at(children_nodes, 5).name == "value"
    end

    test "parses nested maps" do
      source = "m{outer_key m{inner_key inner_val}}"
      {:ok, nodes_map} = Parser.parse(source)
      # Nodes: 1 file_node, outer_map, outer_key, inner_map, inner_key, inner_val => 6 nodes
      assert map_size(nodes_map) == 6

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      outer_map = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(outer_map)
      assert outer_map.ast_node_type == :map_expression
      assert outer_map.parent_id == file_node.id
      # outer_key, inner_map
      assert length(outer_map.children) == 2

      outer_key_node = TestHelpers.get_nth_child_node(nodes_map, 0, outer_map.id)
      inner_map_node = TestHelpers.get_nth_child_node(nodes_map, 1, outer_map.id)

      assert outer_key_node.ast_node_type == :symbol
      assert outer_key_node.name == "outer_key"
      assert inner_map_node.ast_node_type == :map_expression
      assert inner_map_node.parent_id == outer_map.id
      # inner_key, inner_val
      assert length(inner_map_node.children) == 2

      inner_key_node = TestHelpers.get_nth_child_node(nodes_map, 0, inner_map_node.id)
      inner_val_node = TestHelpers.get_nth_child_node(nodes_map, 1, inner_map_node.id)

      assert inner_key_node.ast_node_type == :symbol
      assert inner_key_node.name == "inner_key"
      assert inner_val_node.ast_node_type == :symbol
      assert inner_val_node.name == "inner_val"
    end

    test "parses map with varied spacing" do
      source = "m{ key1 val1\n key2 val2 }"
      {map_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(map_node)
      assert map_node.ast_node_type == :map_expression
      assert map_node.parent_id == file_node.id
      assert length(map_node.children) == 4

      children_names_values =
        Enum.map(map_node.children, fn id ->
          node = TestHelpers.get_node_by_id(nodes_map, id)
          if node.ast_node_type == :symbol, do: node.name, else: node.value
        end)

      assert children_names_values == ["key1", "val1", "key2", "val2"]
    end

    test "handles unclosed map" do
      source = "m{key1 val1"
      {map_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # Expect 1 file_node, 1 map_expression node (error), 2 symbol nodes = 4 nodes
      assert map_size(nodes_map) == 4
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
|
|
||||||
refute is_nil(map_node)
|
|
||||||
assert map_node.ast_node_type == :map_expression
|
|
||||||
assert map_node.parent_id == file_node.id
|
|
||||||
assert map_node.parsing_error == "Unclosed map"
|
|
||||||
# key1, val1
|
|
||||||
assert length(map_node.children) == 2
|
|
||||||
# "m{key1 val1"
|
|
||||||
assert map_node.location == [0, 1, 1, 11, 1, 12]
|
|
||||||
assert map_node.raw_string == "m{key1 val1"
|
|
||||||
end
|
|
||||||
|
|
||||||
test "handles unexpected closing curly brace at top level (not map specific, but related)" do
|
|
||||||
# This } is not part of m{}
|
|
||||||
source = "foo } bar"
|
|
||||||
{:ok, nodes_map} = Parser.parse(source)
|
|
||||||
# 1 file_node + 3 items
|
|
||||||
assert map_size(nodes_map) == 4
|
|
||||||
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
top_level_children = Enum.map(file_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
|
|
||||||
|
|
||||||
error_node =
|
|
||||||
Enum.find(top_level_children, &(&1.ast_node_type == :unknown && &1.raw_string == "}"))
|
|
||||||
|
|
||||||
refute is_nil(error_node)
|
|
||||||
assert error_node.parent_id == file_node.id
|
|
||||||
assert error_node.parsing_error == "Unexpected '}'"
|
|
||||||
# location of "}"
|
|
||||||
assert error_node.location == [4, 1, 5, 5, 1, 6]
|
|
||||||
end
|
|
||||||
|
|
||||||
test "parses map with odd number of elements (parser accepts, semantic check later)" do
|
|
||||||
source = "m{key1 val1 key2}"
|
|
||||||
{map_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
|
|
||||||
refute is_nil(map_node)
|
|
||||||
assert map_node.ast_node_type == :map_expression
|
|
||||||
assert map_node.parent_id == file_node.id
|
|
||||||
# key1, val1, key2
|
|
||||||
assert length(map_node.children) == 3
|
|
||||||
children_nodes = Enum.map(map_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
|
|
||||||
assert Enum.map(children_nodes, & &1.name) == ["key1", "val1", "key2"]
|
|
||||||
end
|
|
||||||
|
|
||||||
test "map within an S-expression" do
|
|
||||||
source = "(do-something m{a 1 b 2})"
|
|
||||||
{s_expr_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
|
|
||||||
# 1 file_node, s-expr, do-something, map, a, 1, b, 2 => 8 nodes
|
|
||||||
assert map_size(nodes_map) == 8
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
|
|
||||||
refute is_nil(s_expr_node)
|
|
||||||
assert s_expr_node.ast_node_type == :s_expression
|
|
||||||
assert s_expr_node.parent_id == file_node.id
|
|
||||||
# do-something, map_node
|
|
||||||
assert length(s_expr_node.children) == 2
|
|
||||||
|
|
||||||
map_node = TestHelpers.get_nth_child_node(nodes_map, 1, s_expr_node.id)
|
|
||||||
assert map_node.ast_node_type == :map_expression
|
|
||||||
assert map_node.parent_id == s_expr_node.id
|
|
||||||
# a, 1, b, 2
|
|
||||||
assert length(map_node.children) == 4
|
|
||||||
end
|
|
||||||
|
|
||||||
test "map as a value in another map" do
|
|
||||||
source = "m{data m{x 10 y 20}}"
|
|
||||||
{outer_map_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
|
|
||||||
|
|
||||||
# 1 file_node, outer_map, data_symbol, inner_map, x_symbol, 10_int, y_symbol, 20_int => 8 nodes
|
|
||||||
assert map_size(nodes_map) == 8
|
|
||||||
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
|
|
||||||
refute is_nil(outer_map_node)
|
|
||||||
assert outer_map_node.ast_node_type == :map_expression
|
|
||||||
assert outer_map_node.parent_id == file_node.id
|
|
||||||
# data_symbol, inner_map_node
|
|
||||||
assert length(outer_map_node.children) == 2
|
|
||||||
|
|
||||||
inner_map_node = TestHelpers.get_nth_child_node(nodes_map, 1, outer_map_node.id)
|
|
||||||
assert inner_map_node.ast_node_type == :map_expression
|
|
||||||
assert inner_map_node.parent_id == outer_map_node.id
|
|
||||||
# x, 10, y, 20
|
|
||||||
assert length(inner_map_node.children) == 4
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
@ -1,149 +0,0 @@
defmodule Til.ParseAtomTest do
  use ExUnit.Case, async: true

  alias Til.Parser
  import Til.TestHelpers

  describe "Atom parsing" do
    test "parses a simple atom" do
      source = ":hello"
      {:ok, _nodes_map} = Parser.parse(source)
      {atom_node, _map} = parse_and_get_first_node(source)

      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :hello
      assert atom_node.raw_string == ":hello"
      assert atom_node.location == [0, 1, 1, 6, 1, 7]
    end

    test "parses an atom with numbers and underscores" do
      source = ":foo_123_bar"
      {:ok, _nodes_map} = Parser.parse(source)
      {atom_node, _map} = parse_and_get_first_node(source)

      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :foo_123_bar
      assert atom_node.raw_string == ":foo_123_bar"
    end

    test "parses an atom within an s-expression" do
      source = "(:an_atom)"
      {:ok, nodes_map} = Parser.parse(source)
      s_expr_node = get_first_child_node(nodes_map)
      atom_node_id = hd(s_expr_node.children)
      atom_node = get_node_by_id(nodes_map, atom_node_id)

      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :an_atom
      assert atom_node.raw_string == ":an_atom"
      # Location of :an_atom within ()
      assert atom_node.location == [1, 1, 2, 9, 1, 10]
    end

    test "parses multiple atoms in an s-expression" do
      source = "(:first :second)"
      {:ok, nodes_map} = Parser.parse(source)
      s_expr_node = get_first_child_node(nodes_map)

      first_atom_node = get_node_by_id(nodes_map, Enum.at(s_expr_node.children, 0))
      assert first_atom_node.ast_node_type == :literal_atom
      assert first_atom_node.value == :first
      assert first_atom_node.raw_string == ":first"

      second_atom_node = get_node_by_id(nodes_map, Enum.at(s_expr_node.children, 1))
      assert second_atom_node.ast_node_type == :literal_atom
      assert second_atom_node.value == :second
      assert second_atom_node.raw_string == ":second"
    end

    test "parses an atom followed immediately by an opening parenthesis (delimiter)" do
      source = ":atom_name(foo)"
      {:ok, nodes_map} = Parser.parse(source)

      # First child of the file node should be the atom
      atom_node = get_nth_child_node(nodes_map, 0)
      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :atom_name
      assert atom_node.raw_string == ":atom_name"
      assert atom_node.location == [0, 1, 1, 10, 1, 11]

      # Second child should be the s-expression
      s_expr_node = get_nth_child_node(nodes_map, 1)
      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.raw_string == "(foo)"
    end

    test "parses an atom at the end of input" do
      source = "  :last_atom  "
      {:ok, nodes_map} = Parser.parse(source)
      # Use trimmed for helper
      {atom_node, _map} = parse_and_get_first_node(String.trim(source))

      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :last_atom
      assert atom_node.raw_string == ":last_atom"
      # Location needs to be checked against the original source with whitespace
      file_node = get_file_node_from_map(nodes_map)
      actual_atom_node_id = hd(file_node.children)
      actual_atom_node = get_node_by_id(nodes_map, actual_atom_node_id)
      # "  :last_atom  "
      assert actual_atom_node.location == [2, 1, 3, 12, 1, 13]
    end

    test "parses atom within a list expression" do
      source = "[:my_list_atom]"
      {:ok, nodes_map} = Parser.parse(source)
      list_expr_node = get_first_child_node(nodes_map)
      atom_node_id = hd(list_expr_node.children)
      atom_node = get_node_by_id(nodes_map, atom_node_id)

      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :my_list_atom
      assert atom_node.raw_string == ":my_list_atom"
    end

    test "parses atom within a tuple expression" do
      source = "{:my_tuple_atom}"
      {:ok, nodes_map} = Parser.parse(source)
      tuple_expr_node = get_first_child_node(nodes_map)
      atom_node_id = hd(tuple_expr_node.children)
      atom_node = get_node_by_id(nodes_map, atom_node_id)

      assert atom_node.ast_node_type == :literal_atom
      assert atom_node.value == :my_tuple_atom
      assert atom_node.raw_string == ":my_tuple_atom"
    end

    test "parses atom as a key in a map expression" do
      source = "m{:key 1}"
      {:ok, nodes_map} = Parser.parse(source)
      map_expr_node = get_first_child_node(nodes_map)

      key_node_id = Enum.at(map_expr_node.children, 0)
      key_node = get_node_by_id(nodes_map, key_node_id)

      assert key_node.ast_node_type == :literal_atom
      assert key_node.value == :key
      assert key_node.raw_string == ":key"

      value_node_id = Enum.at(map_expr_node.children, 1)
      value_node = get_node_by_id(nodes_map, value_node_id)
      assert value_node.ast_node_type == :literal_integer
      assert value_node.value == 1
    end

    test "parses atom as a value in a map expression" do
      source = "m{'string_key' :atom_value}"
      {:ok, nodes_map} = Parser.parse(source)
      map_expr_node = get_first_child_node(nodes_map)

      # string_key_node is child 0
      value_node_id = Enum.at(map_expr_node.children, 1)
      value_node = get_node_by_id(nodes_map, value_node_id)

      assert value_node.ast_node_type == :literal_atom
      assert value_node.value == :atom_value
      assert value_node.raw_string == ":atom_value"
    end
  end
end
@ -1,868 +0,0 @@
defmodule Til.ParserTest do
  use ExUnit.Case, async: true
  alias Til.Parser
  alias Til.TestHelpers

  describe "parse/2 - Basic Atoms" do
    test "parses a simple integer literal" do
      source = "42"
      file_name = "test.tly"

      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source, file_name)
      # file_node + integer_node
      assert map_size(nodes_map) == 2
      # Still need file_node for parent_id check
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert is_integer(node.id)
      assert node.type_id == nil
      assert node.parent_id == file_node.id
      assert node.file == file_name
      # "42"
      assert node.location == [0, 1, 1, 2, 1, 3]
      assert node.raw_string == source
      assert node.ast_node_type == :literal_integer
      assert node.value == 42
    end

    test "parses a negative integer literal" do
      source = "-123"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # file_node + integer_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.parent_id == file_node.id
      # "-123"
      assert node.location == [0, 1, 1, 4, 1, 5]
      assert node.raw_string == source
      assert node.ast_node_type == :literal_integer
      assert node.value == -123
    end

    test "parses a simple symbol" do
      source = "foo"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # file_node + symbol_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.parent_id == file_node.id
      # "foo"
      assert node.location == [0, 1, 1, 3, 1, 4]
      assert node.raw_string == source
      assert node.ast_node_type == :symbol
      assert node.name == "foo"
    end

    test "parses a symbol with hyphens and numbers" do
      source = "my-var-123"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.parent_id == file_node.id
      assert node.raw_string == source
      assert node.ast_node_type == :symbol
      assert node.name == "my-var-123"
    end

    test "parses operator-like symbols" do
      source = "+"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.parent_id == file_node.id
      assert node.raw_string == source
      assert node.ast_node_type == :symbol
      assert node.name == "+"
    end

    test "parses a sequence of integers and symbols" do
      source = "10 foo -20 bar+"
      {:ok, nodes_map} = Parser.parse(source)
      # 4 items + 1 file_node
      assert map_size(nodes_map) == 5

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      # Children are already sorted by the parser
      [n1_id, n2_id, n3_id, n4_id] = file_node.children
      n1 = TestHelpers.get_node_by_id(nodes_map, n1_id)
      n2 = TestHelpers.get_node_by_id(nodes_map, n2_id)
      n3 = TestHelpers.get_node_by_id(nodes_map, n3_id)
      n4 = TestHelpers.get_node_by_id(nodes_map, n4_id)

      assert n1.ast_node_type == :literal_integer
      assert n1.value == 10
      assert n1.raw_string == "10"
      assert n1.location == [0, 1, 1, 2, 1, 3]
      assert n1.parent_id == file_node.id

      assert n2.ast_node_type == :symbol
      assert n2.name == "foo"
      assert n2.raw_string == "foo"
      # after "10 "
      assert n2.location == [3, 1, 4, 6, 1, 7]
      assert n2.parent_id == file_node.id

      assert n3.ast_node_type == :literal_integer
      assert n3.value == -20
      assert n3.raw_string == "-20"
      # after "foo "
      assert n3.location == [7, 1, 8, 10, 1, 11]
      assert n3.parent_id == file_node.id

      assert n4.ast_node_type == :symbol
      assert n4.name == "bar+"
      assert n4.raw_string == "bar+"
      # after "-20 "
      assert n4.location == [11, 1, 12, 15, 1, 16]
      assert n4.parent_id == file_node.id
    end

    test "uses 'unknown' as default file_name" do
      source = "7"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.file == "unknown"
      assert file_node.file == "unknown"
      assert node.parent_id == file_node.id
    end
  end

  describe "parse/2 - S-expressions" do
    test "parses an empty S-expression" do
      source = "()"
      {:ok, nodes_map} = Parser.parse(source)
      # file_node + s_expr_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      s_expr_node = TestHelpers.get_first_child_node(nodes_map)

      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.parent_id == file_node.id
      assert s_expr_node.children == []
      # "()"
      assert s_expr_node.location == [0, 1, 1, 2, 1, 3]

      # Raw string for S-expressions is tricky, current impl might be placeholder like "()" or "(...)"
      # For now, let's assert it starts with ( and ends with ) if not placeholder
      assert String.starts_with?(s_expr_node.raw_string, "(") &&
               String.ends_with?(s_expr_node.raw_string, ")")

      # if it's "()"
      assert String.length(s_expr_node.raw_string) == 2
    end

    test "parses a simple S-expression with integers" do
      source = "(1 22 -3)"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node + 1 S-expr node + 3 integer nodes = 5 nodes
      assert map_size(nodes_map) == 5
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      s_expr_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(s_expr_node)
      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.parent_id == file_node.id
      assert length(s_expr_node.children) == 3

      children_nodes = Enum.map(s_expr_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.map(children_nodes, & &1.value) == [1, 22, -3]

      Enum.each(children_nodes, fn child ->
        assert child.parent_id == s_expr_node.id
        assert child.ast_node_type == :literal_integer
      end)

      # Source "(1 22 -3)" has length 9. Start offset 0, end offset 9. Start col 1, end col 10.
      # "(1 22 -3)"
      assert s_expr_node.location == [0, 1, 1, 9, 1, 10]
    end

    test "parses a simple S-expression with symbols" do
      source = "(foo bar baz)"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node, 1 s-expr, 3 symbols = 5 nodes
      assert map_size(nodes_map) == 5
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      s_expr_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(s_expr_node)
      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.parent_id == file_node.id
      assert length(s_expr_node.children) == 3
      children_nodes = Enum.map(s_expr_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.map(children_nodes, & &1.name) == ["foo", "bar", "baz"]

      Enum.each(children_nodes, fn child ->
        assert child.parent_id == s_expr_node.id
        assert child.ast_node_type == :symbol
      end)
    end

    test "parses nested S-expressions" do
      # outer: a, (b 1), c | inner: b, 1
      source = "(a (b 1) c)"
      {:ok, nodes_map} = Parser.parse(source)
      # Nodes: 1 file_node, outer_s_expr, a, inner_s_expr, c, b, 1 => 7 nodes
      assert map_size(nodes_map) == 7

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      outer_s_expr = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(outer_s_expr)
      assert outer_s_expr.ast_node_type == :s_expression
      assert outer_s_expr.parent_id == file_node.id
      assert length(outer_s_expr.children) == 3

      # 'a'
      child1 = TestHelpers.get_nth_child_node(nodes_map, 0, outer_s_expr.id)
      # '(b 1)'
      inner_s_expr = TestHelpers.get_nth_child_node(nodes_map, 1, outer_s_expr.id)
      # 'c'
      child3 = TestHelpers.get_nth_child_node(nodes_map, 2, outer_s_expr.id)

      assert child1.ast_node_type == :symbol && child1.name == "a"
      assert inner_s_expr.ast_node_type == :s_expression
      assert child3.ast_node_type == :symbol && child3.name == "c"

      assert inner_s_expr.parent_id == outer_s_expr.id
      assert length(inner_s_expr.children) == 2
      # 'b'
      grandchild1 = TestHelpers.get_nth_child_node(nodes_map, 0, inner_s_expr.id)
      # '1'
      grandchild2 = TestHelpers.get_nth_child_node(nodes_map, 1, inner_s_expr.id)

      assert grandchild1.ast_node_type == :symbol && grandchild1.name == "b"
      assert grandchild1.parent_id == inner_s_expr.id
      assert grandchild2.ast_node_type == :literal_integer && grandchild2.value == 1
      assert grandchild2.parent_id == inner_s_expr.id
    end

    test "parses S-expressions with varied spacing" do
      source = "( foo 1\nbar )"
      {s_expr_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(s_expr_node)
      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.parent_id == file_node.id
      assert length(s_expr_node.children) == 3

      children_names_values =
        Enum.map(s_expr_node.children, fn id ->
          node = TestHelpers.get_node_by_id(nodes_map, id)
          if node.ast_node_type == :symbol, do: node.name, else: node.value
        end)

      assert children_names_values == ["foo", 1, "bar"]
    end
  end

  describe "parse/2 - Error Handling" do
    test "handles unclosed S-expression" do
      source = "(foo bar"
      {:ok, nodes_map} = Parser.parse(source)
      # Expect 1 file_node, 1 S-expr node (marked with error), 2 symbol nodes = 4 nodes
      assert map_size(nodes_map) == 4
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      s_expr_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(s_expr_node)
      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.parent_id == file_node.id
      assert s_expr_node.parsing_error == "Unclosed S-expression"
      # foo, bar
      assert length(s_expr_node.children) == 2
      # Location should span till end of source
      # "(foo bar"
      assert s_expr_node.location == [0, 1, 1, 8, 1, 9]
    end

    test "handles unexpected closing parenthesis at top level" do
      # foo, error_node_for_), bar
      source = "foo ) bar"
      {:ok, nodes_map} = Parser.parse(source)
      # 3 items + 1 file_node
      assert map_size(nodes_map) == 4

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      top_level_children =
        Enum.map(file_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      error_node =
        Enum.find(top_level_children, &(&1.ast_node_type == :unknown && &1.raw_string == ")"))

      refute is_nil(error_node)
      assert error_node.parent_id == file_node.id
      assert error_node.parsing_error == "Unexpected ')'"
      # location of ")"
      assert error_node.location == [4, 1, 5, 5, 1, 6]

      symbol_foo =
        Enum.find(top_level_children, &(&1.ast_node_type == :symbol && &1.name == "foo"))

      symbol_bar =
        Enum.find(top_level_children, &(&1.ast_node_type == :symbol && &1.name == "bar"))

      refute is_nil(symbol_foo)
      assert symbol_foo.parent_id == file_node.id
      refute is_nil(symbol_bar)
      assert symbol_bar.parent_id == file_node.id
    end

    test "handles unknown token inside S-expression (partial, basic)" do
      source = "(foo 123"
      {:ok, nodes_map} = Parser.parse(source)
      assert map_size(nodes_map) == 4
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      s_expr_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(s_expr_node)
      assert s_expr_node.ast_node_type == :s_expression
      assert s_expr_node.parent_id == file_node.id
      assert s_expr_node.parsing_error == "Unclosed S-expression"
      assert length(s_expr_node.children) == 2

      child1 = TestHelpers.get_nth_child_node(nodes_map, 0, s_expr_node.id)
      child2 = TestHelpers.get_nth_child_node(nodes_map, 1, s_expr_node.id)

      assert child1.name == "foo"
      assert child2.value == 123
    end

    test "parses multiple top-level S-expressions" do
      source = "() (1 2) (foo)"
      {:ok, nodes_map} = Parser.parse(source)
      # () -> 1 node
      # (1 2) -> 1 s-expr, 2 ints = 3 nodes
      # (foo) -> 1 s-expr, 1 symbol = 2 nodes
      # Total items = 1 + 3 + 2 = 6 nodes. Plus 1 file_node = 7 nodes.
      assert map_size(nodes_map) == 7

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      top_level_s_expr_nodes =
        file_node.children
        |> Enum.map(&TestHelpers.get_node_by_id(nodes_map, &1))
        |> Enum.filter(&(&1.ast_node_type == :s_expression))

      assert length(top_level_s_expr_nodes) == 3
      Enum.each(top_level_s_expr_nodes, fn node -> assert node.parent_id == file_node.id end)

      # Check children counts or specific content if necessary
      # For example, the S-expression for (1 2)
      s_expr_1_2 =
        Enum.find(top_level_s_expr_nodes, fn node ->
          children = Enum.map(node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
          Enum.map(children, & &1.value) == [1, 2]
        end)

      refute is_nil(s_expr_1_2)
    end

    # Test for raw_string of S-expression (this is a known tricky part in the impl)
    # The current implementation has a placeholder for S-expression raw_string.
    # This test will likely fail or need adjustment based on how raw_string is actually captured.
    # For now, I'll skip a very precise raw_string test for S-expressions until it's robustly implemented.
    # test "S-expression raw_string is captured correctly" do
    #   source = "( add 1 2 )" # Note spaces
    #   {:ok, nodes_map} = Parser.parse(source)
    #   s_expr_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :s_expression))
    #   assert s_expr_node.raw_string == source
    # end
  end

  describe "parse/2 - String Literals" do
    test "parses a simple single-line string" do
      source = "'hello world'"
      file_name = "test_str.tly"
      {:ok, nodes_map} = Parser.parse(source, file_name)
      # file_node + string_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      node = TestHelpers.get_first_child_node(nodes_map)

      assert node.ast_node_type == :literal_string
      assert node.value == "hello world"
      assert node.raw_string == "'hello world'"
      assert node.location == [0, 1, 1, 13, 1, 14]
      assert node.file == file_name
      assert node.parent_id == file_node.id
      assert node.type_id == nil
    end

    test "parses an empty string" do
      source = "''"
      {:ok, nodes_map} = Parser.parse(source)
      # file_node + string_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      node = TestHelpers.get_first_child_node(nodes_map)

      assert node.ast_node_type == :literal_string
      assert node.parent_id == file_node.id
      assert node.value == ""
      assert node.raw_string == "''"
      assert node.location == [0, 1, 1, 2, 1, 3]
    end

    test "parses a multiline string with no initial indent" do
      # ' is at col 1, strip 0 spaces
      source = "'hello\nworld'"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.ast_node_type == :literal_string
      assert node.parent_id == file_node.id
      assert node.value == "hello\nworld"
      assert node.raw_string == "'hello\nworld'"
      assert node.location == [0, 1, 1, 13, 2, 7]
    end

    test "parses a multiline string with initial indent" do
      # ' is at col 3, strip 2 spaces
      source = "  'hello\n  world\n  end'"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.ast_node_type == :literal_string
      assert node.parent_id == file_node.id
      assert node.value == "hello\nworld\nend"
      assert node.raw_string == "'hello\n  world\n  end'"
      # Location of ' is [2,1,3]. Raw string length is 21.
      # End location: offset 2+21=23. Line 3, Col 7.
      assert node.location == [2, 1, 3, 23, 3, 7]
    end

test "parses a multiline string with varied subsequent indents" do
|
|
||||||
# ' is at col 3, strip 2 spaces
|
|
||||||
source = " 'hello\n world\n more'"
|
|
||||||
{node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
|
|
||||||
assert node.ast_node_type == :literal_string
|
|
||||||
assert node.parent_id == file_node.id
|
|
||||||
# " world" -> "world", " more" -> " more"
|
|
||||||
assert node.value == "hello\nworld\n more"
|
|
||||||
assert node.raw_string == "'hello\n world\n more'"
|
|
||||||
# Location of ' is [2,1,3]. Raw string length is 22.
|
|
||||||
# End location: offset 2+22=24. Line 3, Col 9.
|
|
||||||
assert node.location == [2, 1, 3, 24, 3, 9]
|
|
||||||
end
|
|
||||||
|
|
||||||
test "parses a string containing parentheses and other special chars" do
|
|
||||||
source = "' (foo) [bar] - 123 '"
|
|
||||||
{node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
|
|
||||||
assert node.ast_node_type == :literal_string
|
|
||||||
assert node.parent_id == file_node.id
|
|
||||||
assert node.value == " (foo) [bar] - 123 "
|
|
||||||
assert node.raw_string == "' (foo) [bar] - 123 '"
|
|
||||||
end
|
|
||||||
|
|
||||||
test "parses a string within an S-expression" do
|
|
||||||
source = "('str' 1)"
|
|
||||||
{:ok, nodes_map} = Parser.parse(source)
|
|
||||||
# 1 file_node, s-expr, string, integer = 4 nodes
|
|
||||||
assert map_size(nodes_map) == 4
|
|
||||||
file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
|
|
||||||
refute is_nil(file_node)
|
|
||||||
s_expr_node = TestHelpers.get_first_child_node(nodes_map)
|
|
||||||
|
|
||||||
refute is_nil(s_expr_node)
|
|
||||||
assert s_expr_node.ast_node_type == :s_expression
|
|
||||||
assert s_expr_node.parent_id == file_node.id
|
|
||||||
children_nodes = Enum.map(s_expr_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
|
|
||||||
|
|
||||||
string_node = Enum.find(children_nodes, &(&1.ast_node_type == :literal_string))
|
|
||||||
integer_node = Enum.find(children_nodes, &(&1.ast_node_type == :literal_integer))
|
|
||||||
|
|
||||||
refute is_nil(string_node)
|
|
||||||
refute is_nil(integer_node)
|
|
||||||
|
|
||||||
assert string_node.value == "str"
|
|
||||||
assert string_node.raw_string == "'str'"
|
|
||||||
assert string_node.parent_id == s_expr_node.id
|
|
||||||
|
|
||||||
# Location: `(` is [0,1,1]. ` ` is [1,1,2]. `s` is [2,1,3]. `t` is [3,1,4]. `r` is [4,1,5]. ` ` is [5,1,6].
|
|
||||||
# Token "'str'" starts at offset 1, line 1, col 2. Length 5. Ends offset 6, line 1, col 7.
|
|
||||||
assert string_node.location == [1, 1, 2, 6, 1, 7]
|
|
||||||
|
|
||||||
assert integer_node.value == 1
|
|
||||||
end
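The `location` sextuples asserted throughout these tests read as `[start_offset, start_line, start_col, end_offset, end_line, end_col]`. As an illustration only (this module and function are hypothetical, not part of the parser under test), the end coordinates follow from walking the raw string character by character:

```elixir
# Hypothetical helper showing how the end coordinates of a
# [start_offset, start_line, start_col, end_offset, end_line, end_col]
# location can be derived: advance the offset per character and reset
# the column to 1 after every newline.
defmodule LocationSketch do
  def end_location(raw_string, {offset, line, col}) do
    Enum.reduce(String.graphemes(raw_string), {offset, line, col}, fn
      "\n", {o, l, _c} -> {o + 1, l + 1, 1}
      _ch, {o, l, c} -> {o + 1, l, c + 1}
    end)
  end
end

# LocationSketch.end_location("'str'", {1, 1, 2}) yields {6, 1, 7},
# matching the [1, 1, 2, 6, 1, 7] asserted for the string node above.
```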

    test "handles unclosed string literal" do
      source = "'abc"
      {:ok, nodes_map} = Parser.parse(source)
      # file_node + string_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      node = TestHelpers.get_first_child_node(nodes_map)

      assert node.ast_node_type == :literal_string
      assert node.parent_id == file_node.id
      assert node.parsing_error == "Unclosed string literal"
      # Content up to EOF, processed
      assert node.value == "abc"
      assert node.raw_string == "'abc"
      # Location: ' is [0,1,1]. Raw string "'abc" length 4. Ends offset 4, line 1, col 5.
      assert node.location == [0, 1, 1, 4, 1, 5]
    end

    test "handles unclosed string literal with newlines" do
      # ' at col 3, strip 2
      source = "  'hello\n  world"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.ast_node_type == :literal_string
      assert node.parent_id == file_node.id
      assert node.parsing_error == "Unclosed string literal"
      assert node.value == "hello\nworld"
      assert node.raw_string == "'hello\n  world"
      # Location of ' is [2,1,3]. Raw string "'hello\n  world" length 14.
      # ' (2,1,3) -> (3,1,4)
      # hello (3,1,4) -> (8,1,9)
      # \n (8,1,9) -> (9,2,1)
      #   world (9,2,1) -> (16,2,8)
      # End location: offset 2+14=16. Line 2, Col 8.
      assert node.location == [2, 1, 3, 16, 2, 8]
    end

    test "string with only newlines and spaces, respecting indent" do
      # ' at col 3, strip 2
      source = "  '\n   \n  '"
      # Content: "\n   \n  "
      # Lines: ["", "   ", "  "]
      # Processed: "", " " (strip 2 from "   "), "" (strip 2 from "  ")
      # Value: "\n \n"
      {node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      assert node.ast_node_type == :literal_string
      assert node.parent_id == file_node.id
      assert node.value == "\n \n"
      assert node.raw_string == "'\n   \n  '"
    end
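The comments in the multiline-string tests walk through the indent-stripping rule: every line after the first loses up to N leading spaces, where N is the opening quote's column minus one, and deeper indentation is preserved. A minimal sketch of that rule (hypothetical code, not the parser's actual implementation):

```elixir
# Hypothetical sketch of the indent-stripping rule the string tests
# describe: each line after the first is stripped of at most `indent`
# leading spaces; any indentation beyond that survives into the value.
defmodule IndentStripSketch do
  def strip(content, indent) do
    [first | rest] = String.split(content, "\n")

    stripped = Enum.map(rest, &String.replace(&1, ~r/\A[ ]{0,#{indent}}/, ""))
    Enum.join([first | stripped], "\n")
  end
end

# IndentStripSketch.strip("hello\n world\n   more", 2) returns
# "hello\nworld\n more", matching the "varied subsequent indents" test.
```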

    test "large s-expression parse test" do
      # ' at col 3, strip 2
      source = """
      (defn my-function (x y)
      (= asd 'first line
      second line
      third line
      asdasd')
      (+ x y))
      """

      {:ok, nodes_map} = Parser.parse(source)
      assert map_size(nodes_map) > 0
      # IO.inspect(nodes_map, limit: :infinity)
    end
  end

  describe "parse/2 - List Expressions" do
    test "parses an empty list" do
      source = "[]"
      {:ok, nodes_map} = Parser.parse(source)
      # file_node + list_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      list_node = TestHelpers.get_first_child_node(nodes_map)

      assert list_node.ast_node_type == :list_expression
      assert list_node.parent_id == file_node.id
      assert list_node.children == []
      # "[]"
      assert list_node.location == [0, 1, 1, 2, 1, 3]
      assert list_node.raw_string == "[]"
    end

    test "parses a simple list with integers" do
      source = "[1 22 -3]"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node + 1 list_expression node + 3 integer nodes = 5 nodes
      assert map_size(nodes_map) == 5
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      list_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(list_node)
      assert list_node.ast_node_type == :list_expression
      assert list_node.parent_id == file_node.id
      assert length(list_node.children) == 3

      children_nodes = Enum.map(list_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
      assert Enum.map(children_nodes, & &1.value) == [1, 22, -3]

      Enum.each(children_nodes, fn child ->
        assert child.parent_id == list_node.id
        assert child.ast_node_type == :literal_integer
      end)

      # "[1 22 -3]"
      assert list_node.location == [0, 1, 1, 9, 1, 10]
    end

    test "parses a simple list with symbols" do
      source = "[foo bar baz]"
      {list_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # 1 file_node, 1 list_expr, 3 symbols = 5 nodes
      assert map_size(nodes_map) == 5
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(list_node)
      assert list_node.ast_node_type == :list_expression
      assert list_node.parent_id == file_node.id
      assert length(list_node.children) == 3
      children_nodes = Enum.map(list_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
      assert Enum.map(children_nodes, & &1.name) == ["foo", "bar", "baz"]
    end

    test "parses nested lists" do
      source = "[a [b 1] c]"
      {outer_list, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # Nodes: 1 file_node, outer_list, a, inner_list, c, b, 1 => 7 nodes
      assert map_size(nodes_map) == 7

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(outer_list)
      assert outer_list.ast_node_type == :list_expression
      assert outer_list.parent_id == file_node.id
      assert length(outer_list.children) == 3

      # 'a'
      child1 = TestHelpers.get_nth_child_node(nodes_map, 0, outer_list.id)
      # '[b 1]'
      inner_list = TestHelpers.get_nth_child_node(nodes_map, 1, outer_list.id)
      # 'c'
      child3 = TestHelpers.get_nth_child_node(nodes_map, 2, outer_list.id)

      assert child1.ast_node_type == :symbol && child1.name == "a"
      assert inner_list.ast_node_type == :list_expression
      assert child3.ast_node_type == :symbol && child3.name == "c"

      assert inner_list.parent_id == outer_list.id
      assert length(inner_list.children) == 2
      # 'b'
      grandchild1 = TestHelpers.get_nth_child_node(nodes_map, 0, inner_list.id)
      # '1'
      grandchild2 = TestHelpers.get_nth_child_node(nodes_map, 1, inner_list.id)

      assert grandchild1.ast_node_type == :symbol && grandchild1.name == "b"
      assert grandchild2.ast_node_type == :literal_integer && grandchild2.value == 1
    end

    test "parses lists with varied spacing" do
      source = "[ foo 1\nbar ]"
      {list_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(list_node)
      assert list_node.ast_node_type == :list_expression
      assert list_node.parent_id == file_node.id
      assert length(list_node.children) == 3

      children_names_values =
        Enum.map(list_node.children, fn id ->
          node = TestHelpers.get_node_by_id(nodes_map, id)
          if node.ast_node_type == :symbol, do: node.name, else: node.value
        end)

      assert children_names_values == ["foo", 1, "bar"]
    end

    test "handles unclosed list" do
      source = "[foo bar"
      {list_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # Expect 1 file_node, 1 list_expression node (error), 2 symbol nodes = 4 nodes
      assert map_size(nodes_map) == 4
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(list_node)
      assert list_node.ast_node_type == :list_expression
      assert list_node.parent_id == file_node.id
      assert list_node.parsing_error == "Unclosed list"
      # foo, bar
      assert length(list_node.children) == 2
      # "[foo bar"
      assert list_node.location == [0, 1, 1, 8, 1, 9]
    end

    test "handles unexpected closing square bracket at top level" do
      source = "foo ] bar"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node, foo, error_node_for_], bar = 4 nodes
      assert map_size(nodes_map) == 4

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      top_level_children =
        Enum.map(file_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      error_node =
        Enum.find(top_level_children, &(&1.ast_node_type == :unknown && &1.raw_string == "]"))

      refute is_nil(error_node)
      assert error_node.parent_id == file_node.id
      assert error_node.parsing_error == "Unexpected ']'"
      # location of "]"
      assert error_node.location == [4, 1, 5, 5, 1, 6]

      symbol_foo =
        Enum.find(top_level_children, &(&1.ast_node_type == :symbol && &1.name == "foo"))

      refute is_nil(symbol_foo)
      assert symbol_foo.parent_id == file_node.id

      symbol_bar =
        Enum.find(top_level_children, &(&1.ast_node_type == :symbol && &1.name == "bar"))

      refute is_nil(symbol_bar)
      assert symbol_bar.parent_id == file_node.id
    end

    test "parses a list with mixed elements including strings, S-expressions, and other lists" do
      source = "[1 'hello' (a b) [x y] 'end']"
      {:ok, nodes_map} = Parser.parse(source)

      # Expected items: 1 outer list, 1 int, 1 str, 1 s-expr (with 2 sym children), 1 inner list (with 2 sym children), 1 str
      # Node counts: outer_list (1) + int (1) + str (1) + s-expr (1) + sym_a (1) + sym_b (1) + inner_list (1) + sym_x (1) + sym_y (1) + str_end (1) = 10 nodes
      # Plus 1 file_node = 11 nodes
      assert map_size(nodes_map) == 11

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      outer_list_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(outer_list_node)
      assert outer_list_node.ast_node_type == :list_expression
      assert outer_list_node.parent_id == file_node.id
      assert length(outer_list_node.children) == 5

      children = Enum.map(outer_list_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      # Child 1: Integer 1
      assert Enum.at(children, 0).ast_node_type == :literal_integer
      assert Enum.at(children, 0).value == 1

      # Child 2: String 'hello'
      assert Enum.at(children, 1).ast_node_type == :literal_string
      assert Enum.at(children, 1).value == "hello"

      # Child 3: S-expression (a b)
      s_expr_child = Enum.at(children, 2)
      assert s_expr_child.ast_node_type == :s_expression
      assert length(s_expr_child.children) == 2

      s_expr_children =
        Enum.map(s_expr_child.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.map(s_expr_children, & &1.name) == ["a", "b"]

      # Child 4: List [x y]
      inner_list_child = Enum.at(children, 3)
      assert inner_list_child.ast_node_type == :list_expression
      assert length(inner_list_child.children) == 2

      inner_list_children =
        Enum.map(inner_list_child.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.map(inner_list_children, & &1.name) == ["x", "y"]

      # Child 5: String 'end'
      assert Enum.at(children, 4).ast_node_type == :literal_string
      assert Enum.at(children, 4).value == "end"
    end

    test "symbol cannot contain brackets" do
      # This should be parsed as symbol "foo[bar]" with current regex
      source = "foo[bar]"
      # After regex change to exclude brackets from symbols:
      # It should be symbol "foo", then an unclosed list "[", then symbol "bar", then error for "]"
      # Or, if we want `foo[bar]` to be an error or specific construct, tokenizer needs more rules.
      # For now, with `[^\s\(\)\[\]]+`, "foo[bar]" is not a single symbol.
      # It would be: "foo" (symbol), then "[" (start list), then "bar" (symbol), then error (unclosed list).
      # Let's test "foo["
      source1 = "foo["
      {:ok, nodes_map1} = Parser.parse(source1)
      # 1 file_node, "foo" symbol, "[" (unclosed list_expression) = 3 nodes
      assert map_size(nodes_map1) == 3
      file_node1 = Enum.find(Map.values(nodes_map1), &(&1.ast_node_type == :file))
      refute is_nil(file_node1)
      children1 = Enum.map(file_node1.children, &TestHelpers.get_node_by_id(nodes_map1, &1))

      foo_node = Enum.find(children1, &(&1.name == "foo"))
      list_node = Enum.find(children1, &(&1.ast_node_type == :list_expression))

      refute is_nil(foo_node)
      assert foo_node.parent_id == file_node1.id
      refute is_nil(list_node)
      assert list_node.parent_id == file_node1.id
      assert list_node.parsing_error == "Unclosed list"
      assert list_node.children == []

      # Test "foo[bar"
      source2 = "foo[bar"
      {:ok, nodes_map2} = Parser.parse(source2)
      # 1 file_node, "foo" symbol, "[" (list_expression), "bar" symbol inside list = 4 nodes
      assert map_size(nodes_map2) == 4
      file_node2 = Enum.find(Map.values(nodes_map2), &(&1.ast_node_type == :file))
      refute is_nil(file_node2)
      children2 = Enum.map(file_node2.children, &TestHelpers.get_node_by_id(nodes_map2, &1))

      foo_node2 = Enum.find(children2, &(&1.name == "foo"))
      list_node2 = Enum.find(children2, &(&1.ast_node_type == :list_expression))

      refute is_nil(foo_node2)
      assert foo_node2.parent_id == file_node2.id
      refute is_nil(list_node2)
      assert list_node2.parent_id == file_node2.id
      assert list_node2.parsing_error == "Unclosed list"
      assert length(list_node2.children) == 1
      bar_node_id = hd(list_node2.children)
      bar_node = TestHelpers.get_node_by_id(nodes_map2, bar_node_id)
      assert bar_node.name == "bar"
      assert bar_node.parent_id == list_node2.id
    end
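The comments in the test above reason about the symbol regex `[^\s\(\)\[\]]+`. A hypothetical scanner (for illustration only, not the project's tokenizer) shows why that regex can never yield `foo[bar]` as a single symbol:

```elixir
# Hypothetical scanner: symbols are maximal runs matching
# [^\s\(\)\[\]]+, parentheses and square brackets are single-character
# tokens, and whitespace is skipped.
defmodule TokenSketch do
  @symbol ~r/\A[^\s\(\)\[\]]+/

  def tokenize(""), do: []

  def tokenize(<<c, rest::binary>>) when c in [?(, ?), ?[, ?]] do
    [<<c>> | tokenize(rest)]
  end

  def tokenize(<<c, rest::binary>>) when c in [?\s, ?\t, ?\n] do
    tokenize(rest)
  end

  def tokenize(src) do
    [sym] = Regex.run(@symbol, src)
    [sym | tokenize(String.replace_prefix(src, sym, ""))]
  end
end

# TokenSketch.tokenize("foo[bar]") returns ["foo", "[", "bar", "]"],
# so the "[" opens a list and the trailing "]" closes it rather than
# both characters being absorbed into one symbol.
```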
  end
end
@@ -1,267 +0,0 @@
defmodule Til.TupleParserTest do
  use ExUnit.Case, async: true
  alias Til.Parser
  alias Til.TestHelpers

  describe "parse/2 - Tuple Expressions" do
    test "parses an empty tuple" do
      source = "{}"
      {:ok, nodes_map} = Parser.parse(source)
      # file_node + tuple_node
      assert map_size(nodes_map) == 2
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      tuple_node = TestHelpers.get_first_child_node(nodes_map)

      assert tuple_node.ast_node_type == :tuple_expression
      assert tuple_node.parent_id == file_node.id
      assert tuple_node.children == []
      # "{}"
      assert tuple_node.location == [0, 1, 1, 2, 1, 3]
      assert tuple_node.raw_string == "{}"
    end

    test "parses a simple tuple with integers" do
      source = "{1 22 -3}"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node + 1 tuple_expression node + 3 integer nodes = 5 nodes
      assert map_size(nodes_map) == 5
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      tuple_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(tuple_node)
      assert tuple_node.ast_node_type == :tuple_expression
      assert tuple_node.parent_id == file_node.id
      assert length(tuple_node.children) == 3

      children_nodes = Enum.map(tuple_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
      assert Enum.map(children_nodes, & &1.value) == [1, 22, -3]

      Enum.each(children_nodes, fn child ->
        assert child.parent_id == tuple_node.id
        assert child.ast_node_type == :literal_integer
      end)

      # "{1 22 -3}"
      assert tuple_node.location == [0, 1, 1, 9, 1, 10]
    end

    test "parses a simple tuple with symbols" do
      source = "{foo bar baz}"
      {tuple_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # 1 file_node, 1 tuple_expr, 3 symbols = 5 nodes
      assert map_size(nodes_map) == 5
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(tuple_node)
      assert tuple_node.ast_node_type == :tuple_expression
      assert tuple_node.parent_id == file_node.id
      assert length(tuple_node.children) == 3
      children_nodes = Enum.map(tuple_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))
      assert Enum.map(children_nodes, & &1.name) == ["foo", "bar", "baz"]
    end

    test "parses nested tuples" do
      source = "{a {b 1} c}"
      {outer_tuple, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # Nodes: 1 file_node, outer_tuple, a, inner_tuple, c, b, 1 => 7 nodes
      assert map_size(nodes_map) == 7

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(outer_tuple)
      assert outer_tuple.ast_node_type == :tuple_expression
      assert outer_tuple.parent_id == file_node.id
      assert length(outer_tuple.children) == 3

      child1 = TestHelpers.get_nth_child_node(nodes_map, 0, outer_tuple.id)
      inner_tuple = TestHelpers.get_nth_child_node(nodes_map, 1, outer_tuple.id)
      child3 = TestHelpers.get_nth_child_node(nodes_map, 2, outer_tuple.id)

      assert child1.ast_node_type == :symbol && child1.name == "a"
      assert inner_tuple.ast_node_type == :tuple_expression
      assert child3.ast_node_type == :symbol && child3.name == "c"

      assert inner_tuple.parent_id == outer_tuple.id
      assert length(inner_tuple.children) == 2
      grandchild1 = TestHelpers.get_nth_child_node(nodes_map, 0, inner_tuple.id)
      grandchild2 = TestHelpers.get_nth_child_node(nodes_map, 1, inner_tuple.id)

      assert grandchild1.ast_node_type == :symbol && grandchild1.name == "b"
      assert grandchild2.ast_node_type == :literal_integer && grandchild2.value == 1
    end

    test "parses tuples with varied spacing" do
      source = "{ foo 1\nbar }"
      {tuple_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(tuple_node)
      assert tuple_node.ast_node_type == :tuple_expression
      assert tuple_node.parent_id == file_node.id
      assert length(tuple_node.children) == 3

      children_names_values =
        Enum.map(tuple_node.children, fn id ->
          node = TestHelpers.get_node_by_id(nodes_map, id)
          if node.ast_node_type == :symbol, do: node.name, else: node.value
        end)

      assert children_names_values == ["foo", 1, "bar"]
    end

    test "handles unclosed tuple" do
      source = "{foo bar"
      {tuple_node, nodes_map} = TestHelpers.parse_and_get_first_node(source)
      # Expect 1 file_node, 1 tuple_expression node (error), 2 symbol nodes = 4 nodes
      assert map_size(nodes_map) == 4
      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      refute is_nil(tuple_node)
      assert tuple_node.ast_node_type == :tuple_expression
      assert tuple_node.parent_id == file_node.id
      assert tuple_node.parsing_error == "Unclosed tuple"
      # foo, bar
      assert length(tuple_node.children) == 2
      # "{foo bar"
      assert tuple_node.location == [0, 1, 1, 8, 1, 9]
    end

    test "handles unexpected closing curly brace at top level" do
      source = "foo } bar"
      {:ok, nodes_map} = Parser.parse(source)
      # 1 file_node, foo, error_node_for_}, bar = 4 nodes
      assert map_size(nodes_map) == 4

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)

      top_level_children =
        Enum.map(file_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      error_node =
        Enum.find(top_level_children, &(&1.ast_node_type == :unknown && &1.raw_string == "}"))

      refute is_nil(error_node)
      assert error_node.parent_id == file_node.id
      assert error_node.parsing_error == "Unexpected '}'"
      # location of "}"
      assert error_node.location == [4, 1, 5, 5, 1, 6]

      symbol_foo =
        Enum.find(top_level_children, &(&1.ast_node_type == :symbol && &1.name == "foo"))

      refute is_nil(symbol_foo)
      assert symbol_foo.parent_id == file_node.id

      symbol_bar =
        Enum.find(top_level_children, &(&1.ast_node_type == :symbol && &1.name == "bar"))

      refute is_nil(symbol_bar)
      assert symbol_bar.parent_id == file_node.id
    end

    test "parses a tuple with mixed elements including strings, S-expressions, lists, and other tuples" do
      source = "{1 'hello' (a b) [x y] {z} 'end'}"
      {:ok, nodes_map} = Parser.parse(source)

      # Expected items: 1 outer tuple, 1 int, 1 str, 1 s-expr (2 children), 1 list (2 children), 1 inner tuple (1 child), 1 str_end
      # Node counts: outer_tuple (1) + int (1) + str (1) + s-expr (1) + sym_a (1) + sym_b (1) + list (1) + sym_x (1) + sym_y (1) + inner_tuple (1) + sym_z (1) + str_end (1) = 12 nodes
      # Plus 1 file_node = 13 nodes
      assert map_size(nodes_map) == 13

      file_node = Enum.find(Map.values(nodes_map), &(&1.ast_node_type == :file))
      refute is_nil(file_node)
      outer_tuple_node = TestHelpers.get_first_child_node(nodes_map)

      refute is_nil(outer_tuple_node)
      assert outer_tuple_node.ast_node_type == :tuple_expression
      assert outer_tuple_node.parent_id == file_node.id
      assert length(outer_tuple_node.children) == 6

      children = Enum.map(outer_tuple_node.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.at(children, 0).ast_node_type == :literal_integer
      assert Enum.at(children, 0).value == 1

      assert Enum.at(children, 1).ast_node_type == :literal_string
      assert Enum.at(children, 1).value == "hello"

      s_expr_child = Enum.at(children, 2)
      assert s_expr_child.ast_node_type == :s_expression
      assert length(s_expr_child.children) == 2

      s_expr_children =
        Enum.map(s_expr_child.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.map(s_expr_children, & &1.name) == ["a", "b"]

      list_child = Enum.at(children, 3)
      assert list_child.ast_node_type == :list_expression
      assert length(list_child.children) == 2
      list_children = Enum.map(list_child.children, &TestHelpers.get_node_by_id(nodes_map, &1))
      assert Enum.map(list_children, & &1.name) == ["x", "y"]

      inner_tuple_child = Enum.at(children, 4)
      assert inner_tuple_child.ast_node_type == :tuple_expression
      assert length(inner_tuple_child.children) == 1

      inner_tuple_children =
        Enum.map(inner_tuple_child.children, &TestHelpers.get_node_by_id(nodes_map, &1))

      assert Enum.map(inner_tuple_children, & &1.name) == ["z"]

      assert Enum.at(children, 5).ast_node_type == :literal_string
      assert Enum.at(children, 5).value == "end"
    end

    test "symbol cannot contain curly braces" do
      # Test "foo{"
      source1 = "foo{"
      {:ok, nodes_map1} = Parser.parse(source1)
      # 1 file_node, "foo" symbol, "{" (unclosed tuple_expression) = 3 nodes
      assert map_size(nodes_map1) == 3
      file_node1 = Enum.find(Map.values(nodes_map1), &(&1.ast_node_type == :file))
      refute is_nil(file_node1)
      children1 = Enum.map(file_node1.children, &TestHelpers.get_node_by_id(nodes_map1, &1))

      foo_node = Enum.find(children1, &(&1.name == "foo"))
      tuple_node = Enum.find(children1, &(&1.ast_node_type == :tuple_expression))

      refute is_nil(foo_node)
      assert foo_node.parent_id == file_node1.id
      refute is_nil(tuple_node)
      assert tuple_node.parent_id == file_node1.id
      assert tuple_node.parsing_error == "Unclosed tuple"
      assert tuple_node.children == []

      # Test "foo{bar"
      source2 = "foo{bar"
      {:ok, nodes_map2} = Parser.parse(source2)
      # 1 file_node, "foo" symbol, "{" (tuple_expression), "bar" symbol inside tuple = 4 nodes
      assert map_size(nodes_map2) == 4
      file_node2 = Enum.find(Map.values(nodes_map2), &(&1.ast_node_type == :file))
      refute is_nil(file_node2)
      children2 = Enum.map(file_node2.children, &TestHelpers.get_node_by_id(nodes_map2, &1))

      foo_node2 = Enum.find(children2, &(&1.name == "foo"))
      tuple_node2 = Enum.find(children2, &(&1.ast_node_type == :tuple_expression))

      refute is_nil(foo_node2)
      assert foo_node2.parent_id == file_node2.id
      refute is_nil(tuple_node2)
      assert tuple_node2.parent_id == file_node2.id
      assert tuple_node2.parsing_error == "Unclosed tuple"
      assert length(tuple_node2.children) == 1
      bar_node_id = hd(tuple_node2.children)
      bar_node = TestHelpers.get_node_by_id(nodes_map2, bar_node_id)
      assert bar_node.name == "bar"
      assert bar_node.parent_id == tuple_node2.id
    end
  end
end
@ -1,291 +0,0 @@
defmodule Til.TypeAnnotationTest do
  use ExUnit.Case, async: true

  alias Til.AstUtils
  # alias Til.Parser # Unused
  # alias Til.Typer # Unused
  # Added alias
  alias Til.TestHelpers

  # --- Predefined Type Definitions for Assertions (without :id) ---
  # These must match the definitions in Typer, excluding the :id field
  # that Typer might add when interning.
  @type_integer %{type_kind: :primitive, name: :integer}
  @type_string %{type_kind: :primitive, name: :string}
  @type_number %{type_kind: :primitive, name: :number}
  @type_any %{type_kind: :primitive, name: :any}
  # @type_nothing %{type_kind: :primitive, name: :nothing} # Unused
  # @type_annotation_mismatch_error is replaced by a helper function

  # Helper to create the expected *cleaned* type_annotation_mismatch error structure
  defp type_error_type_annotation_mismatch(actual_type_clean, expected_type_clean) do
    %{
      type_kind: :error,
      reason: :type_annotation_mismatch,
      actual_type: actual_type_clean,
      expected_type: expected_type_clean
    }
  end

  # Union type for testing
  # @type_integer_or_string %{ # Unused
  #   type_kind: :union,
  #   types: MapSet.new([@type_integer, @type_string])
  # }

  defp type_literal_int(val), do: %{type_kind: :literal, value: val}
  defp type_literal_string(val), do: %{type_kind: :literal, value: val}

  defp type_union(type_defs_list) do
    %{type_kind: :union, types: MapSet.new(type_defs_list)}
  end

  describe "(the <type> <expr>) annotation" do
    test "annotates a literal integer with its correct type" do
      source = "(the integer 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)

      assert the_expr_node.ast_node_type == :s_expression

      # The 'the' expression itself should have the annotated type
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, @type_integer)

      # The inner literal should still have its literal type
      literal_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, the_expr_node.id)
      TestHelpers.assert_node_typed_as(literal_node, typed_nodes_map, type_literal_int(42))
    end

    test "annotates a literal integer with a supertype (number)" do
      source = "(the number 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, @type_number)
    end

    test "annotates a literal string with its correct type" do
      source = "(the string 'hello')"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, @type_string)
    end

    test "annotation mismatch: annotating integer literal as string results in error type" do
      source = "(the string 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
      # The 'the' expression should now be typed with an error type
      # because 42 is not a subtype of string.
      expected_error_def =
        type_error_type_annotation_mismatch(type_literal_int(42), @type_string)

      TestHelpers.assert_node_typed_as(
        the_expr_node,
        typed_nodes_map,
        expected_error_def
      )
    end

    test "assignment with annotated value: (= x (the integer 42))" do
      source = "(= x (the integer 42))"
      {assignment_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)

      assert assignment_node.ast_node_type == :s_expression

      # The assignment expression's type is the type of its RHS (the annotated value)
      TestHelpers.assert_node_typed_as(assignment_node, typed_nodes_map, @type_integer)

      # The 'the' sub-expression
      the_expr_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, assignment_node.id)
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, @type_integer)

      # Check symbol 'x' in the environment (indirectly by typing another expression)
      source_with_x = """
      (= x (the integer 42))
      x
      """

      # 0 is assignment, 1 is 'x'
      {x_usage_node, typed_nodes_map_vx} =
        TestHelpers.typecheck_and_get_nth_node(source_with_x, 1)

      assert x_usage_node.ast_node_type == :symbol
      TestHelpers.assert_node_typed_as(x_usage_node, typed_nodes_map_vx, @type_integer)
    end

    test "annotating with 'any' type" do
      source = "(the any 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, @type_any)
    end

    test "annotating with 'nothing' type (implies contradiction if expr is not nothing)" do
      # 42 is not 'nothing'
      source = "(the nothing 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)

      AstUtils.build_debug_ast_data(typed_nodes_map)

      # The expression results in a type annotation mismatch error
      # because 'literal 42' is not a subtype of 'nothing'.
      type_nothing_clean = %{type_kind: :primitive, name: :nothing}

      expected_error_def =
        type_error_type_annotation_mismatch(type_literal_int(42), type_nothing_clean)

      TestHelpers.assert_node_typed_as(
        the_expr_node,
        typed_nodes_map,
        expected_error_def
      )
    end

    test "unknown type symbol in annotation defaults to 'any'" do
      # 'foobar' is not a known type
      source = "(the foobar 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
      # Defaulting to 'any' for unknown type specifiers for now
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, @type_any)
    end

    test "annotating an (if true 1 's') expression with (integer | string) union type" do
      _source = "(the (union integer string) (if true 1 's'))"
      # For the Typer to resolve (union integer string) correctly, we need to implement
      # parsing/handling of such type specifiers first.
      # For now, this test assumes `resolve_type_specifier_node` can handle it or we mock it.
      # Let's adjust the test to use a known type that an `if` can produce,
      # and then check subtyping against a broader union.

      # Test: integer is subtype of (integer | string)
      _source_int_subtype_of_union = "(the (union integer string) 42)"
      # This requires `resolve_type_specifier_node` to handle `(union ...)` forms.
      # For now, let's test the subtyping logic more directly by constructing types.
      # The tests below will focus on `if` expressions producing unions and checking them.
    end

    test "if expression producing (literal 1 | literal 's') is subtype of (integer | string)" do
      _source = """
      (the (union integer string)
        (if some_condition 1 's'))
      """

      # This test requires:
      # 1. `(union integer string)` to be resolvable by `resolve_type_specifier_node`.
      #    For now, we'll assume `Typer` needs to be taught to parse this.
      # Let's simplify the annotation to a known primitive that acts as a supertype for the union.
      # e.g. (the any (if ...))
      #
      # Let's test the type inference of `if` and then `(the ...)` with it.
      # `(if x 1 "s")` should infer to `(union (literal 1) (literal "s"))` if x is unknown.
      # Then `(the (union integer string) (if x 1 "s"))` should work.

      # To make this testable *now* without changing `resolve_type_specifier_node` for `(union ...)`:
      # We need a way to introduce a union type into the system that `is_subtype?` can then check.
      # The `if` expression itself is the source of the union type.

      # `(if some_condition 1 "foo")` will have type `Union(Literal(1), Literal("foo"))`
      # We want to check if `Union(Literal(1), Literal("foo"))` is a subtype of `Union(Integer, String)`
      # And also if `Union(Literal(1), Literal("foo"))` is a subtype of `Any`

      # Setup:
      # (= cond_val true) ; or some unknown symbol to make the `if` branch ambiguous
      # (the some_union_type (if cond_val 1 "hello"))
      # where `some_union_type` is a type that `Union(Literal(1), Literal("hello"))` is a subtype of.

      # Test 1: Union is subtype of Any
      source_if_any = """
      (= cond some_unknown_symbol_for_ambiguity)
      (the any (if cond 1 "hello"))
      """

      {the_expr_node_any, typed_nodes_map_any} =
        TestHelpers.typecheck_and_get_nth_node(source_if_any, 1)

      TestHelpers.assert_node_typed_as(the_expr_node_any, typed_nodes_map_any, @type_any)

      # Test 2: Literal integer is subtype of (integer | string)
      # This requires `(union integer string)` to be a recognized type specifier.
      # We will add this to `resolve_type_specifier_node` in Typer later.
      # For now, let's assume we have a way to define such a type.
      # The following tests are more conceptual for the subtyping logic itself,
      # rather than the full `(the (union ...) ...)` syntax.

      # Let's test the `if` expression's inferred type first.
      source_if_expr = "(= cond some_ambiguous_val) (if cond 1 'one')"
      {if_node, typed_nodes_map_if} = TestHelpers.typecheck_and_get_nth_node(source_if_expr, 1)

      expected_if_type =
        type_union([type_literal_int(1), type_literal_string("one")])

      TestHelpers.assert_node_typed_as(if_node, typed_nodes_map_if, expected_if_type)

      # Now, let's assume `(the (union integer string) ...)` works by enhancing `resolve_type_specifier_node`
      # This part of the test will *fail* until `resolve_type_specifier_node` is updated.
      # We'll mark it as a future improvement or adjust `Typer` in a subsequent step.
      # For now, we are testing the subtyping rules given the types are correctly resolved.

      # To test subtyping with (the X Y) where Y is a union:
      # (the super_type (if cond val1 val2))
      # Example: (the any (if cond 1 "s")) -- already covered, works.

      # Example: (the number (if cond 1 2.0))
      # (if cond 1 2.0) -> Union(Literal(1), Literal(2.0))
      # Union(Literal(1), Literal(2.0)) <: Number ?
      # Literal(1) <: Number (true) AND Literal(2.0) <: Number (true) -> true
      source_if_number = """
      (= cond some_ambiguous_val)
      (the number (if cond 1 2))
      """

      {the_expr_node_num, typed_nodes_map_num} =
        TestHelpers.typecheck_and_get_nth_node(source_if_number, 1)

      TestHelpers.assert_node_typed_as(the_expr_node_num, typed_nodes_map_num, @type_number)

      # Example: (the integer (if cond 1 2))
      # (if cond 1 2) -> Union(Literal(1), Literal(2))
      # Union(Literal(1), Literal(2)) <: Integer ?
      # Literal(1) <: Integer (true) AND Literal(2) <: Integer (true) -> true
      source_if_integer = """
      (= cond some_ambiguous_val)
      (the integer (if cond 1 2))
      """

      {the_expr_node_int, typed_nodes_map_int} =
        TestHelpers.typecheck_and_get_nth_node(source_if_integer, 1)

      TestHelpers.assert_node_typed_as(the_expr_node_int, typed_nodes_map_int, @type_integer)

      # Example: (the string (if cond 1 "s")) -> should be error
      # (if cond 1 "s") -> Union(Literal(1), Literal("s"))
      # Union(Literal(1), Literal("s")) <: String ?
      # Literal(1) <: String (false) -> false
      source_if_string_error = """
      (= cond some_ambiguous_val)
      (the string (if cond 1 's'))
      """

      {the_expr_node_str_err, typed_nodes_map_str_err} =
        TestHelpers.typecheck_and_get_nth_node(source_if_string_error, 1)

      # actual type of (if cond 1 "s") is Union(Literal(1), Literal("s"))
      # expected type is String
      actual_type_clean = type_union([type_literal_int(1), type_literal_string("s")])
      expected_annotated_clean = @type_string

      expected_error_def =
        type_error_type_annotation_mismatch(actual_type_clean, expected_annotated_clean)

      TestHelpers.assert_node_typed_as(
        the_expr_node_str_err,
        typed_nodes_map_str_err,
        expected_error_def
      )
    end

    # Test for: A <: (A | B)
    test "integer is subtype of (integer | string) - requires (union ...) type specifier" do
      source = "(the (union integer string) 42)"
      {the_expr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)

      expected_union_type = type_union([@type_integer, @type_string])
      TestHelpers.assert_node_typed_as(the_expr_node, typed_nodes_map, expected_union_type)
    end
  end
end
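The comment trail in the union tests above spells out the rule being exercised: a union is a subtype of `T` exactly when every member is, and a type is a subtype of a union when it is a subtype of some member, with `:nothing` below everything and `:any` above everything. As a minimal standalone sketch of that check over the same "cleaned" type maps these tests use (the module name `CleanSubtype` is hypothetical; the real project routes this through `SubtypeChecker` and interned type keys):

```elixir
defmodule CleanSubtype do
  # Hypothetical sketch: subtyping over cleaned type maps (no :id, no interning).
  # Reflexivity: a type is a subtype of itself.
  def subtype?(t, t), do: true
  # :nothing is a subtype of everything; everything is a subtype of :any.
  def subtype?(%{type_kind: :primitive, name: :nothing}, _), do: true
  def subtype?(_, %{type_kind: :primitive, name: :any}), do: true
  # A literal is a subtype of the primitive that classifies its value.
  def subtype?(%{type_kind: :literal, value: v}, %{type_kind: :primitive, name: :integer}),
    do: is_integer(v)

  def subtype?(%{type_kind: :literal, value: v}, %{type_kind: :primitive, name: :string}),
    do: is_binary(v)

  def subtype?(%{type_kind: :literal, value: v}, %{type_kind: :primitive, name: :number}),
    do: is_number(v)

  # Union on the left: every member must fit the supertype.
  def subtype?(%{type_kind: :union, types: members}, super_t),
    do: Enum.all?(members, &subtype?(&1, super_t))

  # Union on the right: some member must admit the subtype.
  def subtype?(sub_t, %{type_kind: :union, types: members}),
    do: Enum.any?(members, &subtype?(sub_t, &1))

  def subtype?(_, _), do: false
end
```

With this sketch, `Union(Literal(1), Literal("s")) <: Union(Integer, String)` holds while `Union(Literal(1), Literal("s")) <: String` fails, matching the reasoning in the comments of the `(the string (if cond 1 's'))` test.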
@ -1,257 +0,0 @@
defmodule Til.TypeAtomTest do
  use ExUnit.Case, async: true

  # alias Til.Parser # Unused
  # alias Til.Typer # Unused
  alias Til.Typer.Types
  alias Til.Typer.SubtypeChecker
  alias Til.Typer.Interner

  import Til.TestHelpers

  # Helper for literal atom type definition (cleaned, without :id)
  defp type_literal_atom(val) when is_atom(val) do
    %{type_kind: :literal, value: val}
  end

  # Helper for primitive atom type definition (cleaned, without :id)
  defp type_primitive_atom do
    %{type_kind: :primitive, name: :atom}
  end

  # Helper for primitive integer type definition (cleaned, without :id)
  defp type_primitive_integer do
    %{type_kind: :primitive, name: :integer}
  end

  # Helper for type annotation mismatch error (cleaned structure)
  defp type_error_type_annotation_mismatch(actual_type_clean, expected_type_clean) do
    %{
      type_kind: :error,
      reason: :type_annotation_mismatch,
      actual_type: actual_type_clean,
      expected_type: expected_type_clean
    }
  end

  describe "Atom Literal Typing" do
    test "a literal atom is typed as a literal atom" do
      source = ":my_atom"
      {node, typed_nodes_map} = typecheck_and_get_first_node(source)
      assert_node_typed_as(node, typed_nodes_map, type_literal_atom(:my_atom))
    end

    test "a different literal atom is typed correctly" do
      source = ":another"
      {node, typed_nodes_map} = typecheck_and_get_first_node(source)
      assert_node_typed_as(node, typed_nodes_map, type_literal_atom(:another))
    end
  end

  describe "'the' expression with 'atom' type specifier" do
    test "(the atom :some_atom) is typed as primitive atom" do
      source = "(the atom :some_atom)"
      {the_expr_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      # The 'the' expression itself should have the annotated type
      assert_node_typed_as(the_expr_node, typed_nodes_map, type_primitive_atom())

      # Verify the inner atom is typed as literal
      # Children of s_expr: 'the' symbol, 'atom' symbol, ':some_atom' atom_literal
      inner_atom_node_id = Enum.at(the_expr_node.children, 2)
      inner_atom_node = get_node_by_id(typed_nodes_map, inner_atom_node_id)
      assert_node_typed_as(inner_atom_node, typed_nodes_map, type_literal_atom(:some_atom))
    end

    test "(the atom 123) results in a type annotation mismatch error" do
      source = "(the atom 123)"
      {the_expr_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      # Literal integer
      actual_clean = %{type_kind: :literal, value: 123}
      # Primitive atom
      expected_clean = type_primitive_atom()

      expected_error_def = type_error_type_annotation_mismatch(actual_clean, expected_clean)
      assert_node_typed_as(the_expr_node, typed_nodes_map, expected_error_def)
    end

    test "(the atom \"a string\") results in a type annotation mismatch error" do
      source = "(the atom 'a string')"
      {the_expr_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      # Literal string
      actual_clean = %{type_kind: :literal, value: "a string"}
      # Primitive atom
      expected_clean = type_primitive_atom()

      expected_error_def = type_error_type_annotation_mismatch(actual_clean, expected_clean)
      assert_node_typed_as(the_expr_node, typed_nodes_map, expected_error_def)
    end
  end

  describe "Atom Subtyping" do
    # Setup a nodes_map with predefined types for subtyping checks
    defp setup_nodes_map_for_subtyping do
      Interner.populate_known_types(%{})
    end

    test "literal atom is a subtype of primitive atom" do
      nodes_map_initial = setup_nodes_map_for_subtyping()
      # This is %{type_kind: :literal, value: :foo}
      literal_foo_raw = type_literal_atom(:foo)

      # Intern the literal type
      {foo_key, nodes_map_after_foo} =
        Interner.get_or_intern_type(literal_foo_raw, nodes_map_initial)

      literal_foo_interned = Map.get(nodes_map_after_foo, foo_key)
      refute is_nil(literal_foo_interned), "Interned literal :foo type should exist"

      # Get the interned primitive type from the map
      primitive_atom_interned = Map.get(nodes_map_after_foo, Types.primitive_type_key(:atom))
      refute is_nil(primitive_atom_interned), "Primitive atom type should be in nodes_map"

      assert SubtypeChecker.is_subtype?(
               literal_foo_interned,
               primitive_atom_interned,
               nodes_map_after_foo
             )
    end

    test "literal atom is a subtype of itself" do
      nodes_map_initial = setup_nodes_map_for_subtyping()
      literal_foo_raw = type_literal_atom(:foo)

      {foo_key, nodes_map_after_foo} =
        Interner.get_or_intern_type(literal_foo_raw, nodes_map_initial)

      literal_foo_interned = Map.get(nodes_map_after_foo, foo_key)
      refute is_nil(literal_foo_interned), "Interned literal :foo type should exist"

      assert SubtypeChecker.is_subtype?(
               literal_foo_interned,
               literal_foo_interned,
               nodes_map_after_foo
             )
    end

    test "literal atom :foo is not a subtype of literal atom :bar" do
      nodes_map_initial = setup_nodes_map_for_subtyping()
      literal_foo_raw = type_literal_atom(:foo)
      literal_bar_raw = type_literal_atom(:bar)

      {foo_key, nodes_map_temp} = Interner.get_or_intern_type(literal_foo_raw, nodes_map_initial)
      literal_foo_interned = Map.get(nodes_map_temp, foo_key)
      refute is_nil(literal_foo_interned)

      {bar_key, nodes_map_final} = Interner.get_or_intern_type(literal_bar_raw, nodes_map_temp)
      literal_bar_interned = Map.get(nodes_map_final, bar_key)
      refute is_nil(literal_bar_interned)

      refute SubtypeChecker.is_subtype?(
               literal_foo_interned,
               literal_bar_interned,
               nodes_map_final
             )
    end

    test "primitive integer is not a subtype of primitive atom" do
      nodes_map = setup_nodes_map_for_subtyping()

      primitive_int_interned = Map.get(nodes_map, Types.primitive_type_key(:integer))
      refute is_nil(primitive_int_interned), "Primitive integer type should be in nodes_map"

      primitive_atom_interned = Map.get(nodes_map, Types.primitive_type_key(:atom))
      refute is_nil(primitive_atom_interned), "Primitive atom type should be in nodes_map"

      refute SubtypeChecker.is_subtype?(
               primitive_int_interned,
               primitive_atom_interned,
               nodes_map
             )
    end

    test "primitive atom is not a subtype of primitive integer" do
      nodes_map = setup_nodes_map_for_subtyping()

      primitive_atom_interned = Map.get(nodes_map, Types.primitive_type_key(:atom))
      refute is_nil(primitive_atom_interned), "Primitive atom type should be in nodes_map"

      primitive_int_interned = Map.get(nodes_map, Types.primitive_type_key(:integer))
      refute is_nil(primitive_int_interned), "Primitive integer type should be in nodes_map"

      refute SubtypeChecker.is_subtype?(
               primitive_atom_interned,
               primitive_int_interned,
               nodes_map
             )
    end

    test "literal atom is a subtype of :any" do
      nodes_map_initial = setup_nodes_map_for_subtyping()
      literal_foo_raw = type_literal_atom(:foo)

      {foo_key, nodes_map_after_foo} =
        Interner.get_or_intern_type(literal_foo_raw, nodes_map_initial)

      literal_foo_interned = Map.get(nodes_map_after_foo, foo_key)
      refute is_nil(literal_foo_interned), "Interned literal :foo type should exist"

      any_type_interned = Map.get(nodes_map_after_foo, Types.any_type_key())
      refute is_nil(any_type_interned), ":any type should be in nodes_map"

      assert SubtypeChecker.is_subtype?(
               literal_foo_interned,
               any_type_interned,
               nodes_map_after_foo
             )
    end

    test "primitive atom is a subtype of :any" do
      nodes_map = setup_nodes_map_for_subtyping()

      primitive_atom_interned = Map.get(nodes_map, Types.primitive_type_key(:atom))
      refute is_nil(primitive_atom_interned), "Primitive atom type should be in nodes_map"

      any_type_interned = Map.get(nodes_map, Types.any_type_key())
      refute is_nil(any_type_interned), ":any type should be in nodes_map"

      assert SubtypeChecker.is_subtype?(primitive_atom_interned, any_type_interned, nodes_map)
    end

    test ":nothing is a subtype of primitive atom" do
      nodes_map = setup_nodes_map_for_subtyping()

      nothing_type_interned = Map.get(nodes_map, Types.primitive_type_key(:nothing))
      refute is_nil(nothing_type_interned), ":nothing type should be in nodes_map"

      primitive_atom_interned = Map.get(nodes_map, Types.primitive_type_key(:atom))
      refute is_nil(primitive_atom_interned), "Primitive atom type should be in nodes_map"

      assert SubtypeChecker.is_subtype?(nothing_type_interned, primitive_atom_interned, nodes_map)
    end

    test ":nothing is a subtype of literal atom :foo (this is debatable, but current rule is :nothing <: all)" do
      nodes_map_initial = setup_nodes_map_for_subtyping()

      nothing_type_interned = Map.get(nodes_map_initial, Types.primitive_type_key(:nothing))
      refute is_nil(nothing_type_interned), ":nothing type should be in nodes_map"

      literal_foo_raw = type_literal_atom(:foo)

      {foo_key, nodes_map_after_foo} =
        Interner.get_or_intern_type(literal_foo_raw, nodes_map_initial)

      literal_foo_interned = Map.get(nodes_map_after_foo, foo_key)
      refute is_nil(literal_foo_interned), "Interned literal :foo type should exist"

      # According to Rule 3: :nothing is a subtype of everything
      assert SubtypeChecker.is_subtype?(
               nothing_type_interned,
               literal_foo_interned,
               nodes_map_after_foo
             )
    end
  end
end
@ -1,135 +0,0 @@
# defmodule Til.TypeFunctionAllocationTest do
#   use ExUnit.Case, async: true
#
#   alias Til.Typer.Interner
#   alias Til.Parser
#   alias Til.Typer
#   # alias Til.Typer.SubtypeChecker # Not directly used in all tests yet
#   import Til.TestHelpers
#
#   # Helper functions for expected error types
#   defp type_error_not_a_function(actual_operator_type_id_clean) do
#     %{
#       type_kind: :error,
#       reason: :not_a_function,
#       actual_operator_type_id: actual_operator_type_id_clean
#     }
#   end
#
#   defp type_error_arity_mismatch(expected_arity, actual_arity, function_type_id_clean) do
#     %{
#       type_kind: :error,
#       reason: :arity_mismatch,
#       expected_arity: expected_arity,
#       actual_arity: actual_arity,
#       function_type_id: function_type_id_clean
#     }
#   end
#
#   describe "Phase 3: Basic Monomorphic Function Calls" do
#     test "types a call to a literal lambda with no args: ((fn () 1))" do
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node("((fn () 1))")
#       expected_return_type_raw = %{type_kind: :literal, value: 1}
#       assert_node_typed_as(call_node, typed_nodes_map, expected_return_type_raw)
#     end
#
#     test "types a call to a literal lambda with one arg: ((fn ((x string) string) x) 'hello')" do
#       source = "((fn ((x string) string) x) 'hello')"
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node(source)
#       expected_return_type_raw = %{type_kind: :literal, value: "hello"}
#       assert_node_typed_as(call_node, typed_nodes_map, expected_return_type_raw)
#     end
#
#     test "types a call to a literal lambda with multiple args: ((fn ((a integer) (b atom) atom) b) 10 :foo)" do
#       source = "((fn ((a integer) (b atom) atom) b) 10 :foo)"
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node(source)
#       expected_return_type_raw = %{type_kind: :literal, value: :foo}
#       assert_node_typed_as(call_node, typed_nodes_map, expected_return_type_raw)
#     end
#
#     test "error: calling a non-function (integer): (1 2 3)" do
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node("(1 2 3)")
#       raw_literal_int_1_type = %{type_kind: :literal, value: 1}
#       # Interner.populate_known_types is called by typecheck_and_get_first_node
#       # so typed_nodes_map should already have primitives.
#       # We need to intern the specific literal type to get its key.
#       {literal_int_1_key, _final_map_after_intern} =
#         Interner.get_or_intern_type(raw_literal_int_1_type, typed_nodes_map)
#
#       expected_error_raw = type_error_not_a_function(literal_int_1_key)
#       assert_node_typed_as(call_node, typed_nodes_map, expected_error_raw)
#     end
#
#     test "error: arity mismatch - too many arguments: ((fn () 1) 123)" do
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node("((fn () 1) 123)")
#       # The S-expression is the call_node. Its first child is the lambda S-expression.
#       # The lambda S-expression node itself is what gets typed as a function.
#       # The parser creates a :lambda_expression node from the (fn () 1) S-expression.
#       # So, the first child of call_node is the :lambda_expression node.
#       lambda_node = get_first_child_node(typed_nodes_map, call_node.id)
#       function_type_id = lambda_node.type_id
#
#       expected_error_raw = type_error_arity_mismatch(0, 1, function_type_id)
#       assert_node_typed_as(call_node, typed_nodes_map, expected_error_raw)
#     end
#
#     test "error: arity mismatch - too few arguments: ((fn (x y) x) 1)" do
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node("((fn (x y integer) x) 1)")
#       lambda_node = get_first_child_node(typed_nodes_map, call_node.id)
#       function_type_id = lambda_node.type_id
#
#       expected_error_raw = type_error_arity_mismatch(2, 1, function_type_id)
#       assert_node_typed_as(call_node, typed_nodes_map, expected_error_raw)
#     end
#
#     test "types a call where operator is an S-expression evaluating to a function: ((if true (fn () :ok) (fn () :err)))" do
#       source = "((if true (fn () :ok) (fn () :err)))"
#       {call_node, typed_nodes_map} = typecheck_and_get_first_node(source)
#       expected_return_type_raw = %{type_kind: :literal, value: :ok}
#       assert_node_typed_as(call_node, typed_nodes_map, expected_return_type_raw)
#     end
#
#     test "types a call where operator is a symbol bound to a function: (= id (fn ((z integer) integer) z)) (id 42)" do
#       source = """
#       (= id (fn ((z integer) integer) z))
#       (id 42)
#       """
#
#       {:ok, parsed_map} = Parser.parse(source)
#       {:ok, typed_map} = Typer.type_check(parsed_map)
#
#       file_node = get_file_node_from_map(typed_map)
#       call_node_id = Enum.at(file_node.children, 1)
#       call_node = get_node_by_id(typed_map, call_node_id)
#
#       expected_return_type_raw = %{type_kind: :literal, value: 42}
#       assert_node_typed_as(call_node, typed_map, expected_return_type_raw)
#     end
#
#     # Optional: Argument type mismatch test (deferred as lambdas currently take `any`)
#     # test "error: argument type mismatch: ((the (function integer string) (fn (x) (the string x))) 'bad_arg')" do
#     #   source = "((the (function integer string) (fn (x) (the string x))) 'bad_arg')"
#     #   {call_node, typed_nodes_map} = typecheck_and_get_first_node(source)
#     #
#     #   operator_node = get_first_child_node(typed_nodes_map, call_node.id) # This is the 'the' expression
#     #   function_type_id = operator_node.type_id # Type of the 'the' expression is the function type
#     #
#     #   # Get interned key for literal string 'bad_arg'
#     #   raw_actual_arg_type = %{type_kind: :literal, value: "bad_arg"}
#     #   {actual_arg_type_key, map_with_actual_arg} = Interner.get_or_intern_type(raw_actual_arg_type, typed_nodes_map)
#     #
#     #   # Expected arg type is integer (from the (function integer string) annotation)
#     #   raw_expected_arg_type = Types.get_primitive_type(:integer)
#     #   {expected_arg_type_key, _final_map} = Interner.get_or_intern_type(raw_expected_arg_type, map_with_actual_arg)
#     #
#     #   expected_error_raw =
#     #     type_error_argument_type_mismatch(
#     #       0, # First argument
#     #       expected_arg_type_key,
#     #       actual_arg_type_key,
#     #       function_type_id
#     #     )
#     #   assert_node_typed_as(call_node, typed_nodes_map, expected_error_raw)
|
|
||||||
# # end
|
|
||||||
# end
|
|
||||||
# end
|
|
||||||
@ -1,332 +0,0 @@
|
|||||||
# defmodule Til.TypeFunctionTest do
# use ExUnit.Case, async: true
#
# alias Til.Typer.Types
# alias Til.Typer.Interner
#
# # Helper to create a raw function type definition
# defp type_function_raw(arg_type_defs, return_type_def, type_param_defs \\ []) do
# %{
# type_kind: :function,
# arg_types: arg_type_defs,
# return_type: return_type_def,
# # Initially empty for monomorphic
# type_params: type_param_defs
# }
# end
#
# # Helper to create an interned function type definition
# # (what we expect after interning the raw definition's components)
# defp type_function_interned(id_key, arg_type_ids, return_type_id, type_param_ids \\ []) do
# %{
# type_kind: :function,
# id: id_key,
# arg_types: arg_type_ids,
# return_type: return_type_id,
# type_params: type_param_ids
# }
# end
#
# describe "Phase 1: Function Type Representation & Interning" do
# test "interns a basic monomorphic function type (integer -> string)" do
# nodes_map = Interner.populate_known_types(%{})
#
# raw_integer_type = Types.get_primitive_type(:integer)
# raw_string_type = Types.get_primitive_type(:string)
#
# # Intern primitive types first to get their keys
# {integer_type_key, nodes_map_after_int} =
# Interner.get_or_intern_type(raw_integer_type, nodes_map)
#
# {string_type_key, nodes_map_after_str} =
# Interner.get_or_intern_type(raw_string_type, nodes_map_after_int)
#
# # Raw function type using the *definitions* of its components
# raw_func_type = type_function_raw([raw_integer_type], raw_string_type)
#
# # Intern the function type
# {func_type_key, final_nodes_map} =
# Interner.get_or_intern_type(raw_func_type, nodes_map_after_str)
#
# refute is_nil(func_type_key)
# assert func_type_key != integer_type_key
# assert func_type_key != string_type_key
#
# interned_func_def = Map.get(final_nodes_map, func_type_key)
#
# expected_interned_func_def =
# type_function_interned(
# func_type_key,
# # Expects keys of interned arg types
# [integer_type_key],
# # Expects key of interned return type
# string_type_key
# )
#
# assert interned_func_def == expected_interned_func_def
# end
#
# test "interning identical function type definitions yields the same key" do
# nodes_map = Interner.populate_known_types(%{})
#
# raw_integer_type = Types.get_primitive_type(:integer)
# raw_string_type = Types.get_primitive_type(:string)
# raw_atom_type = Types.get_primitive_type(:atom)
#
# # Intern components
# {_int_key, nodes_map} = Interner.get_or_intern_type(raw_integer_type, nodes_map)
# {_str_key, nodes_map} = Interner.get_or_intern_type(raw_string_type, nodes_map)
# {_atom_key, nodes_map} = Interner.get_or_intern_type(raw_atom_type, nodes_map)
#
# # Define two structurally identical raw function types
# raw_func_type_1 = type_function_raw([raw_integer_type, raw_atom_type], raw_string_type)
# raw_func_type_2 = type_function_raw([raw_integer_type, raw_atom_type], raw_string_type)
#
# {key1, nodes_map_after_1} = Interner.get_or_intern_type(raw_func_type_1, nodes_map)
# {key2, _final_nodes_map} = Interner.get_or_intern_type(raw_func_type_2, nodes_map_after_1)
#
# assert key1 == key2
# end
#
# test "interns a function type with no arguments (void -> atom)" do
# nodes_map = Interner.populate_known_types(%{})
# raw_atom_type = Types.get_primitive_type(:atom)
#
# {atom_type_key, nodes_map_after_atom} =
# Interner.get_or_intern_type(raw_atom_type, nodes_map)
#
# raw_func_type = type_function_raw([], raw_atom_type)
#
# {func_type_key, final_nodes_map} =
# Interner.get_or_intern_type(raw_func_type, nodes_map_after_atom)
#
# refute is_nil(func_type_key)
# interned_func_def = Map.get(final_nodes_map, func_type_key)
#
# expected_interned_func_def =
# type_function_interned(func_type_key, [], atom_type_key)
#
# assert interned_func_def == expected_interned_func_def
# end
#
# test "interns a function type whose argument is a complex (list) type" do
# nodes_map = Interner.populate_known_types(%{})
#
# raw_integer_type = Types.get_primitive_type(:integer)
# raw_string_type = Types.get_primitive_type(:string)
#
# # Intern primitive types
# # Prefixed integer_type_key as it's not used directly later
# {_integer_type_key, nodes_map_after_primitives} =
# Interner.get_or_intern_type(raw_integer_type, nodes_map)
# |> then(fn {key_param, map} ->
# {key_param, Interner.get_or_intern_type(raw_string_type, map) |> elem(1)}
# end)
#
# # Define a raw list type: (list integer)
# raw_list_of_int_type = %{
# type_kind: :list,
# # Use raw def here
# element_type: raw_integer_type,
# length: nil
# }
#
# # Intern the list type to get its key and canonical definition
# {list_of_int_key, nodes_map_after_list} =
# Interner.get_or_intern_type(raw_list_of_int_type, nodes_map_after_primitives)
#
# # Raw function type: ((list integer)) -> string
# # Its argument type is the *raw* list type definition
# raw_func_type = type_function_raw([raw_list_of_int_type], raw_string_type)
#
# {func_type_key, final_nodes_map} =
# Interner.get_or_intern_type(raw_func_type, nodes_map_after_list)
#
# refute is_nil(func_type_key)
# interned_func_def = Map.get(final_nodes_map, func_type_key)
#
# # The interned function type should refer to the *key* of the interned list type
# # and the *key* of the interned string type.
# string_type_key = Map.get(final_nodes_map, Types.primitive_type_key(:string)).id
#
# expected_interned_func_def =
# type_function_interned(func_type_key, [list_of_int_key], string_type_key)
#
# assert interned_func_def == expected_interned_func_def
# end
# end
#
# describe "Phase 2: Basic Lambdas (fn)" do
# alias Til.Parser
# # alias Til.Typer # Typer is used via typecheck_and_get_first_node from TestHelpers
# # Added for subtype checking
# alias Til.Typer.SubtypeChecker
# import Til.TestHelpers
#
# # --- Parsing Tests for fn ---
# test "parses a lambda with no arguments and one body expression: (fn () 1)" do
# source = "(fn () 1)"
# {lambda_node, nodes_map} = parse_and_get_first_node(source)
#
# assert lambda_node.ast_node_type == :lambda_expression
# # Check params_s_expr_id and its structure
# params_s_expr_node = get_node_by_id(nodes_map, lambda_node.params_s_expr_id)
# assert params_s_expr_node.ast_node_type == :s_expression
# # The () for parameters
# assert params_s_expr_node.children == []
#
# # Check arg_spec_node_ids
# assert lambda_node.arg_spec_node_ids == []
#
# # Check body_node_ids
# assert length(lambda_node.body_node_ids) == 1
# body_expr_node = get_node_by_id(nodes_map, hd(lambda_node.body_node_ids))
# assert body_expr_node.ast_node_type == :literal_integer
# assert body_expr_node.value == 1
# end
#
# test "parses a lambda with one argument: (fn (x) x)" do
# source = "(fn (x) x)"
# {lambda_node, nodes_map} = parse_and_get_first_node(source)
#
# assert lambda_node.ast_node_type == :lambda_expression
#
# params_s_expr_node = get_node_by_id(nodes_map, lambda_node.params_s_expr_id)
# assert params_s_expr_node.ast_node_type == :s_expression
# assert length(params_s_expr_node.children) == 1
#
# assert length(lambda_node.arg_spec_node_ids) == 1
# arg_spec_node = get_node_by_id(nodes_map, hd(lambda_node.arg_spec_node_ids))
# # Initially, arg_specs for lambdas are just symbols
# assert arg_spec_node.ast_node_type == :symbol
# assert arg_spec_node.name == "x"
#
# assert length(lambda_node.body_node_ids) == 1
# body_expr_node = get_node_by_id(nodes_map, hd(lambda_node.body_node_ids))
# assert body_expr_node.ast_node_type == :symbol
# assert body_expr_node.name == "x"
# end
#
# test "parses a lambda with multiple arguments and multiple body forms: (fn (a b) \"doc\" a)" do
# source = "(fn (a b) 'doc' a)"
# {lambda_node, nodes_map} = parse_and_get_first_node(source)
#
# assert lambda_node.ast_node_type == :lambda_expression
#
# params_s_expr_node = get_node_by_id(nodes_map, lambda_node.params_s_expr_id)
# # a, b
# assert length(params_s_expr_node.children) == 2
#
# assert length(lambda_node.arg_spec_node_ids) == 2
# arg1_node = get_node_by_id(nodes_map, Enum.at(lambda_node.arg_spec_node_ids, 0))
# arg2_node = get_node_by_id(nodes_map, Enum.at(lambda_node.arg_spec_node_ids, 1))
# assert arg1_node.name == "a"
# assert arg2_node.name == "b"
#
# assert length(lambda_node.body_node_ids) == 2
# body1_node = get_node_by_id(nodes_map, Enum.at(lambda_node.body_node_ids, 0))
# body2_node = get_node_by_id(nodes_map, Enum.at(lambda_node.body_node_ids, 1))
# assert body1_node.ast_node_type == :literal_string
# assert body1_node.value == "doc"
# assert body2_node.ast_node_type == :symbol
# assert body2_node.name == "a"
# end
#
# # --- Typing Tests for fn ---
#
# test "types a lambda with no arguments: (fn () 1) as (-> integer)" do
# {lambda_node, typed_nodes_map} = typecheck_and_get_first_node("(fn () 1)")
# func_type_def = Map.get(typed_nodes_map, lambda_node.type_id)
# assert func_type_def.type_kind == :function
# assert func_type_def.arg_types == []
#
# # Check that the return type is a subtype of integer
# actual_return_type_def = Map.get(typed_nodes_map, func_type_def.return_type)
# expected_super_type_def = Map.get(typed_nodes_map, Types.primitive_type_key(:integer))
#
# assert SubtypeChecker.is_subtype?(
# actual_return_type_def,
# expected_super_type_def,
# typed_nodes_map
# ),
# "Expected return type #{inspect(actual_return_type_def)} to be a subtype of #{inspect(expected_super_type_def)}"
#
# assert func_type_def.type_params == []
# end
#
# test "types a lambda with one argument: (fn (x) x) as (any -> any)" do
# {lambda_node, typed_nodes_map} = typecheck_and_get_first_node("(fn (x) x)")
# func_type_def = Map.get(typed_nodes_map, lambda_node.type_id)
#
# assert func_type_def.type_kind == :function
# assert func_type_def.arg_types == [Types.primitive_type_key(:any)]
# assert func_type_def.return_type == Types.primitive_type_key(:any)
# assert func_type_def.type_params == []
# end
#
# test "types a lambda with multiple arguments: (fn (x y) y) as (any any -> any)" do
# {lambda_node, typed_nodes_map} = typecheck_and_get_first_node("(fn (x y) y)")
# func_type_def = Map.get(typed_nodes_map, lambda_node.type_id)
#
# any_key = Types.primitive_type_key(:any)
# assert func_type_def.type_kind == :function
# assert func_type_def.arg_types == [any_key, any_key]
# assert func_type_def.return_type == any_key
# assert func_type_def.type_params == []
# end
#
# test "types a lambda with an empty body: (fn ()) as (-> nil)" do
# {lambda_node, typed_nodes_map} = typecheck_and_get_first_node("(fn ())")
# func_type_def = Map.get(typed_nodes_map, lambda_node.type_id)
#
# assert func_type_def.type_kind == :function
# assert func_type_def.arg_types == []
# assert func_type_def.return_type == Types.literal_type_key(:nil_atom)
# assert func_type_def.type_params == []
# end
#
# test "types a lambda with multiple body forms: (fn () \"doc\" :foo) as (-> atom)" do
# {lambda_node, typed_nodes_map} = typecheck_and_get_first_node("(fn () 'doc' :foo)")
# func_type_def = Map.get(typed_nodes_map, lambda_node.type_id)
#
# assert func_type_def.type_kind == :function
# assert func_type_def.arg_types == []
# # The type of :foo is literal :foo, which is a subtype of primitive atom.
# # The inferred return type should be the specific literal type.
# # We need to get the key for the interned literal type :foo.
# # For simplicity in this test, we'll check against primitive atom,
# # assuming the typer might generalize or that literal atom is subtype of primitive atom.
# # A more precise test would check for the literal type of :foo.
# # Let's assume the typer returns the most specific type, the literal :foo.
# # To get its key, we'd intern it.
# raw_foo_literal_type = %{type_kind: :literal, value: :foo}
# {foo_literal_key, _} = Interner.get_or_intern_type(raw_foo_literal_type, typed_nodes_map)
#
# assert func_type_def.return_type == foo_literal_key
# assert func_type_def.type_params == []
# end
#
# test "types a lambda where an argument is shadowed by a local binding (behavior check)" do
# # Source: (fn (x) (let ((x 1)) x))
# # Expected type: (any -> integer)
# # The inner x (integer) determines the return type.
# # The outer x (any) is the parameter type.
# # This test requires `let` to be implemented and typed correctly.
# # For now, we'll use an assignment which is simpler: (= x 1)
# source = "(fn (x) (= x 1) x)"
# {lambda_node, typed_nodes_map} = typecheck_and_get_first_node(source)
# func_type_def = Map.get(typed_nodes_map, lambda_node.type_id)
#
# assert func_type_def.type_kind == :function
# # Param x is any
# assert func_type_def.arg_types == [Types.primitive_type_key(:any)]
#
# # The return type is the type of the final 'x', which is bound to 1 (integer)
# # Intern the literal 1 type to get its key
# raw_int_1_type = %{type_kind: :literal, value: 1}
# {int_1_key, _} = Interner.get_or_intern_type(raw_int_1_type, typed_nodes_map)
# assert func_type_def.return_type == int_1_key
# assert func_type_def.type_params == []
# end
# end
# end
@@ -1,253 +0,0 @@
# defmodule Til.TypeListTest do
# use ExUnit.Case, async: true
#
# import Til.TestHelpers
# alias Til.Typer.Types
# alias Til.Typer.Interner
# alias Til.Typer.SubtypeChecker
#
# # Helper to create a primitive type definition (without :id)
# defp type_primitive_integer, do: Types.get_primitive_type(:integer)
# defp type_primitive_string, do: Types.get_primitive_type(:string)
# defp type_primitive_atom, do: Types.get_primitive_type(:atom)
# defp type_primitive_any, do: Types.get_primitive_type(:any)
# defp type_primitive_nothing, do: Types.get_primitive_type(:nothing)
#
# # Helper to create a list type definition (without :id, element_type is full def)
# # This is what the typer's infer_type_for_node_ast or resolve_type_specifier_node would produce
# # before interning resolves element_type to element_type_id.
# # For direct assertion against interned types, we'll need a different helper or to use keys.
# defp type_list_raw(element_type_def, length) do
# %{type_kind: :list, element_type: element_type_def, length: length}
# end
#
# # Helper to create the expected *interned* list type definition (without :id)
# # `element_type_id` is the key of the interned element type.
# defp type_list_interned_form(element_type_id, length) do
# %{type_kind: :list, element_type_id: element_type_id, length: length}
# end
#
# # Helper to create the expected *cleaned* type_annotation_mismatch error structure
# defp type_error_type_annotation_mismatch(actual_type_clean, expected_type_clean) do
# %{
# type_kind: :error,
# reason: :type_annotation_mismatch,
# actual_type: actual_type_clean,
# expected_type: expected_type_clean
# }
# end
#
# describe "List Literal Type Inference" do
# test "empty list [] is (List Nothing) with length 0" do
# source = "[]"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# # This is the "clean" structure that deep_strip_id produces
# expected_type_def = type_list_raw(type_primitive_nothing(), 0)
# assert_node_typed_as(node, typed_nodes_map, expected_type_def)
# end
#
# test "list of distinct integer literals [1 2 3] is (List (Union (Literal 1) (Literal 2) (Literal 3))) with length 3" do
# source = "[1 2 3]"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# lit_1 = %{type_kind: :literal, value: 1}
# lit_2 = %{type_kind: :literal, value: 2}
# lit_3 = %{type_kind: :literal, value: 3}
# union_type = %{type_kind: :union, types: MapSet.new([lit_1, lit_2, lit_3])}
# expected_type_def = type_list_raw(union_type, 3)
# assert_node_typed_as(node, typed_nodes_map, expected_type_def)
# end
#
# test "list of distinct string literals ['a' 'b'] is (List (Union (Literal \"a\") (Literal \"b\"))) with length 2" do
# # Using single quotes for strings as per parser
# source = "['a' 'b']"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# lit_a = %{type_kind: :literal, value: "a"}
# lit_b = %{type_kind: :literal, value: "b"}
# union_type = %{type_kind: :union, types: MapSet.new([lit_a, lit_b])}
# expected_type_def = type_list_raw(union_type, 2)
# assert_node_typed_as(node, typed_nodes_map, expected_type_def)
# end
#
# test "list of distinct atom literals [:foo :bar] is (List (Union (Literal :foo) (Literal :bar))) with length 2" do
# source = "[:foo :bar]"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# lit_foo = %{type_kind: :literal, value: :foo}
# lit_bar = %{type_kind: :literal, value: :bar}
# union_type = %{type_kind: :union, types: MapSet.new([lit_foo, lit_bar])}
# expected_type_def = type_list_raw(union_type, 2)
# assert_node_typed_as(node, typed_nodes_map, expected_type_def)
# end
#
# test "list of mixed literals [1 'a'] has type (List (Union (Literal 1) (Literal \"a\"))) length 2" do
# source = "[1 'a']"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# # Define the clean union type for the elements.
# clean_literal_1_def = %{type_kind: :literal, value: 1}
# # Parser turns 'a' into "a"
# clean_literal_a_def = %{type_kind: :literal, value: "a"}
#
# clean_element_union_type_def = %{
# type_kind: :union,
# types: MapSet.new([clean_literal_1_def, clean_literal_a_def])
# }
#
# expected_list_type_clean = type_list_raw(clean_element_union_type_def, 2)
# assert_node_typed_as(node, typed_nodes_map, expected_list_type_clean)
# end
# end
#
# describe "List Type Annotation `(the (list <type>) ...)`" do
# test "(the (list integer) []) has type (List Integer) with length nil" do
# source = "(the (list integer) [])"
# # This gets the 'the' node
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# expected_type_def = type_list_raw(type_primitive_integer(), nil)
# assert_node_typed_as(node, typed_nodes_map, expected_type_def)
# end
#
# test "(the (list string) ['a' 'b']) has type (List String) with length nil" do
# source = "(the (list string) ['a' 'b'])"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
# # inspect_nodes(typed_nodes_map)
# expected_type_def = type_list_raw(type_primitive_string(), nil)
# assert_node_typed_as(node, typed_nodes_map, expected_type_def)
# end
#
# test "(the (list integer) [1 'a']) results in type_annotation_mismatch" do
# source = "(the (list integer) [1 'a'])"
# {node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# # Expected actual type: (List (Union (Literal 1) (Literal "a"))) length 2
# lit_1_clean = %{type_kind: :literal, value: 1}
# lit_a_str_clean = %{type_kind: :literal, value: "a"}
# union_elements_clean = %{
# type_kind: :union,
# types: MapSet.new([lit_1_clean, lit_a_str_clean])
# }
# expected_actual_clean_type = type_list_raw(union_elements_clean, 2)
#
# # Expected annotated type: (List Integer) length nil
# expected_annotated_clean_type = type_list_raw(type_primitive_integer(), nil)
#
# expected_error_def =
# type_error_type_annotation_mismatch(
# expected_actual_clean_type,
# expected_annotated_clean_type
# )
#
# assert_node_typed_as(node, typed_nodes_map, expected_error_def)
# end
#
# test "(the (list (list integer)) [[] [1]]) has type (List (List Integer)) with length nil" do
# source = "(the (list (list integer)) [[] [1]])"
# {the_node, typed_nodes_map} = typecheck_and_get_first_node(source)
#
# # Expected inner list type: (List Integer), length nil (from annotation (list integer))
# # This needs to be interned to get its key.
# # Manually construct the expected structure for assertion.
# # 1. Define the raw inner list type: (List Integer, nil)
# # `type_primitive_integer()` gets the base definition of Integer.
# # `typed_nodes_map` (from typechecking the source) will contain the interned primitive types.
# clean_inner_list_type = type_list_raw(type_primitive_integer(), nil)
#
# # Expected outer list type: (List <clean_inner_list_type>), length nil
# expected_outer_list_type_def = type_list_raw(clean_inner_list_type, nil)
#
# assert_node_typed_as(the_node, typed_nodes_map, expected_outer_list_type_def)
# end
# end
#
# describe "List Subtyping" do
# # Helper to check subtyping
# defp check_subtype(subtype_raw, supertype_raw, expected_result) do
# # Populate a base nodes_map with primitive types
# nodes_map = Interner.populate_known_types(%{})
#
# # Intern subtype and supertype to get their canonical forms and keys
# {subtype_key, nodes_map_after_sub} = Interner.get_or_intern_type(subtype_raw, nodes_map)
#
# {supertype_key, final_nodes_map} =
# Interner.get_or_intern_type(supertype_raw, nodes_map_after_sub)
#
# subtype_def = Map.get(final_nodes_map, subtype_key)
# supertype_def = Map.get(final_nodes_map, supertype_key)
#
# assert SubtypeChecker.is_subtype?(subtype_def, supertype_def, final_nodes_map) ==
# expected_result
# end
#
# test "(List Integer, 3) is subtype of (List Integer, 3)" do
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# check_subtype(list_int_3, list_int_3, true)
# end
#
# test "(List Integer, 3) is subtype of (List Integer, nil)" do
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# list_int_nil = type_list_raw(type_primitive_integer(), nil)
# check_subtype(list_int_3, list_int_nil, true)
# end
#
# test "(List Integer, 3) is subtype of (List Number, 3)" do
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# list_num_3 = type_list_raw(Types.get_primitive_type(:number), 3)
# check_subtype(list_int_3, list_num_3, true)
# end
#
# test "(List Integer, 3) is subtype of (List Number, nil)" do
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# list_num_nil = type_list_raw(Types.get_primitive_type(:number), nil)
# check_subtype(list_int_3, list_num_nil, true)
# end
#
# test "(List Integer, 3) is NOT subtype of (List Integer, 2)" do
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# list_int_2 = type_list_raw(type_primitive_integer(), 2)
# check_subtype(list_int_3, list_int_2, false)
# end
#
# test "(List Integer, 3) is NOT subtype of (List String, 3)" do
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# list_str_3 = type_list_raw(type_primitive_string(), 3)
# check_subtype(list_int_3, list_str_3, false)
# end
#
# test "(List Integer, nil) is NOT subtype of (List Integer, 3)" do
# list_int_nil = type_list_raw(type_primitive_integer(), nil)
# list_int_3 = type_list_raw(type_primitive_integer(), 3)
# check_subtype(list_int_nil, list_int_3, false)
# end
#
# test "(List Nothing, 0) is subtype of (List Integer, nil) (empty list compatibility)" do
# # Type of [] is (List Nothing, 0)
# empty_list_type = type_list_raw(type_primitive_nothing(), 0)
# # Target type (e.g. from an annotation (list integer))
# list_int_nil = type_list_raw(type_primitive_integer(), nil)
# check_subtype(empty_list_type, list_int_nil, true)
# end
#
# test "(List Nothing, 0) is subtype of (List Any, nil)" do
# empty_list_type = type_list_raw(type_primitive_nothing(), 0)
# list_any_nil = type_list_raw(type_primitive_any(), nil)
# check_subtype(empty_list_type, list_any_nil, true)
# end
#
# test "(List Nothing, 0) is subtype of (List Nothing, nil)" do
# empty_list_type = type_list_raw(type_primitive_nothing(), 0)
# list_nothing_nil = type_list_raw(type_primitive_nothing(), nil)
# check_subtype(empty_list_type, list_nothing_nil, true)
# end
#
# test "(List Nothing, 0) is subtype of (List Integer, 0) because Nothing is a subtype of Integer" do
# # This depends on whether Nothing is a subtype of Integer. It is.
# empty_list_type = type_list_raw(type_primitive_nothing(), 0)
# list_int_0 = type_list_raw(type_primitive_integer(), 0)
# check_subtype(empty_list_type, list_int_0, true)
# end
# end
# end
@@ -1,533 +0,0 @@
defmodule Til.TypeMapTest do
  use ExUnit.Case, async: true

  alias Til.Parser
  alias Til.Typer
  alias Til.Typer.Types
  alias Til.Typer.Interner
  alias Til.Typer.SubtypeChecker
  import Til.TestHelpers

  # Helper to create a raw map type definition for assertions/setup
  defp type_map_raw(known_elements_raw, index_signature_raw) do
    %{
      type_kind: :map,
      known_elements: known_elements_raw,
      index_signature: index_signature_raw
    }
  end

  # Unused helper removed.
  # defp type_map_interned_form(known_elements_interned, index_signature_interned) do
  #   %{
  #     type_kind: :map,
  #     # :id field is omitted as it's dynamic
  #     known_elements: known_elements_interned,
  #     index_signature: index_signature_interned
  #   }
  # end

  defp type_primitive_any, do: Types.get_primitive_type(:any)
  defp type_primitive_integer, do: Types.get_primitive_type(:integer)
  defp type_primitive_string, do: Types.get_primitive_type(:string)
  defp type_primitive_atom, do: Types.get_primitive_type(:atom)
  defp type_primitive_number, do: Types.get_primitive_type(:number)
  defp type_literal_int(val), do: %{type_kind: :literal, value: val}
  defp type_literal_string(val), do: %{type_kind: :literal, value: val}
  defp type_literal_atom(val), do: %{type_kind: :literal, value: val}

  # Helper to create the expected *cleaned* type_annotation_mismatch error structure
  defp type_error_type_annotation_mismatch(actual_type_clean, expected_type_clean) do
    %{
      type_kind: :error,
      reason: :type_annotation_mismatch,
      actual_type: actual_type_clean,
      expected_type: expected_type_clean
    }
  end
  describe "Map Literal Typing" do
    test "empty map m{} is typed correctly" do
      source = "m{}"
      {map_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      expected_raw_type =
        type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_any()})

      assert_node_typed_as(map_node, typed_nodes_map, expected_raw_type)
    end

    test "map with literal keys and values m{:a 1 :b 's'}" do
      source = "m{:a 1 :b 's'}"
      {map_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      expected_raw_type =
        type_map_raw(
          %{
            a: %{value_type: type_literal_int(1), optional: false},
            b: %{value_type: type_literal_string("s"), optional: false}
          },
          %{key_type: type_primitive_any(), value_type: type_primitive_any()}
        )

      assert_node_typed_as(map_node, typed_nodes_map, expected_raw_type)
    end

    test "map with duplicate literal key (last one wins)" do
      source = "m{:a 1 :a 2}"
      {map_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      expected_raw_type =
        type_map_raw(
          %{a: %{value_type: type_literal_int(2), optional: false}},
          %{key_type: type_primitive_any(), value_type: type_primitive_any()}
        )

      assert_node_typed_as(map_node, typed_nodes_map, expected_raw_type)
    end

    test "map with a symbol key that resolves to a literal atom" do
      source = """
      (= my-key :c)
      m{:a 1 my-key 2}
      """

      # We are interested in the type of the map expression, which is the second node.
      {:ok, parsed_map} = Parser.parse(source)
      {:ok, typed_nodes_map} = Typer.type_check(parsed_map)
      # 0 is the assignment node, 1 is the map node
      map_node = get_nth_child_node(typed_nodes_map, 1)

      expected_raw_type =
        type_map_raw(
          # my-key is typed as (literal :c), so :c is used in known_elements.
          %{
            a: %{value_type: type_literal_int(1), optional: false},
            c: %{value_type: type_literal_int(2), optional: false}
          },
          # Index signature remains default for map literals.
          %{key_type: type_primitive_any(), value_type: type_primitive_any()}
        )

      assert_node_typed_as(map_node, typed_nodes_map, expected_raw_type)
    end

    test "map with various literal keys (true, false, nil, integer, string, atom)" do
      source = "m{true 1 false 2 nil 3 4 'four' :five 5}"
      {map_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      expected_raw_type =
        type_map_raw(
          %{
            true => %{value_type: type_literal_int(1), optional: false},
            false => %{value_type: type_literal_int(2), optional: false},
            nil => %{value_type: type_literal_int(3), optional: false},
            4 => %{value_type: type_literal_string("four"), optional: false},
            # Note: the parser turns :five into a literal atom node, and the
            # typer infers its type as %{type_kind: :literal, value: :five}.
            # The key in known_elements should be the atom :five itself.
            five: %{value_type: type_literal_int(5), optional: false}
          },
          %{key_type: type_primitive_any(), value_type: type_primitive_any()}
        )

      assert_node_typed_as(map_node, typed_nodes_map, expected_raw_type)
    end
  end
  describe "Map Type Annotation Resolution" do
    test "(map atom integer) annotation" do
      source = "(the (map atom integer) m{})"
      {the_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      # m{}'s default index signature (any -> any) is not a subtype of (atom -> integer)
      # because 'any' (the value type) is not a subtype of 'integer'.
      # Thus, the 'the' expression should result in a type annotation mismatch error.

      # Actual type of m{} (cleaned)
      actual_m_empty_clean =
        type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_any()})

      # Expected annotated type (cleaned)
      expected_annotated_clean =
        type_map_raw(
          %{},
          %{key_type: type_primitive_atom(), value_type: type_primitive_integer()}
        )

      expected_error_def =
        type_error_type_annotation_mismatch(actual_m_empty_clean, expected_annotated_clean)

      assert_node_typed_as(the_node, typed_nodes_map, expected_error_def)
    end

    test "(map (list string) (map string any)) annotation" do
      source = "(the (map (list string) (map string any)) m{})"
      {the_node, typed_nodes_map} = typecheck_and_get_first_node(source)

      # Similar to the above: m{}'s default index signature (any -> any)
      # will fail the subtyping check against the complex map value type in the annotation.

      # Actual type of m{} (cleaned)
      actual_m_empty_clean =
        type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_any()})

      # Expected annotated type (cleaned)
      # Key type: (List String)
      key_type_annotated_clean = %{
        type_kind: :list,
        element_type: type_primitive_string(),
        length: nil
      }

      # Value type: (Map String Any)
      value_type_annotated_clean =
        type_map_raw(
          %{},
          %{key_type: type_primitive_string(), value_type: type_primitive_any()}
        )

      expected_annotated_clean =
        type_map_raw(
          %{},
          %{key_type: key_type_annotated_clean, value_type: value_type_annotated_clean}
        )

      expected_error_def =
        type_error_type_annotation_mismatch(actual_m_empty_clean, expected_annotated_clean)

      assert_node_typed_as(the_node, typed_nodes_map, expected_error_def)
    end
  end
  describe "Map Type Interning" do
    test "identical map type definitions intern to the same key" do
      nodes_map = Interner.populate_known_types(%{})

      raw_map_def1 =
        type_map_raw(
          %{a: %{value_type: type_literal_int(1), optional: false}},
          %{key_type: type_primitive_atom(), value_type: type_primitive_any()}
        )

      raw_map_def2 =
        type_map_raw(
          %{a: %{value_type: type_literal_int(1), optional: false}},
          %{key_type: type_primitive_atom(), value_type: type_primitive_any()}
        )

      {key1, nodes_map_after_1} = Interner.get_or_intern_type(raw_map_def1, nodes_map)
      {key2, _nodes_map_after_2} = Interner.get_or_intern_type(raw_map_def2, nodes_map_after_1)

      assert key1 == key2
    end

    test "structurally different map type definitions intern to different keys" do
      nodes_map = Interner.populate_known_types(%{})

      raw_map_def1 =
        type_map_raw(
          # key :a
          %{a: %{value_type: type_literal_int(1), optional: false}},
          %{key_type: type_primitive_atom(), value_type: type_primitive_any()}
        )

      raw_map_def2 =
        type_map_raw(
          # key :b
          %{b: %{value_type: type_literal_int(1), optional: false}},
          %{key_type: type_primitive_atom(), value_type: type_primitive_any()}
        )

      {key1, nodes_map_after_1} = Interner.get_or_intern_type(raw_map_def1, nodes_map)
      {key2, _nodes_map_after_2} = Interner.get_or_intern_type(raw_map_def2, nodes_map_after_1)

      refute key1 == key2
    end
  end
  describe "Map Subtyping" do
    # Helper to check subtyping for maps
    defp check_map_subtype(subtype_raw, supertype_raw, expected_result) do
      nodes_map = Interner.populate_known_types(%{})
      {subtype_key, nodes_map_after_sub} = Interner.get_or_intern_type(subtype_raw, nodes_map)

      {supertype_key, final_nodes_map} =
        Interner.get_or_intern_type(supertype_raw, nodes_map_after_sub)

      subtype_def = Map.get(final_nodes_map, subtype_key)
      supertype_def = Map.get(final_nodes_map, supertype_key)

      assert SubtypeChecker.is_subtype?(subtype_def, supertype_def, final_nodes_map) ==
               expected_result
    end
    # Test cases based on todo.md logic for map subtyping

    # 1. Known Elements (Required in Supertype)
    test "subtype has required key with correct type" do
      sub =
        type_map_raw(%{a: %{value_type: type_literal_int(1), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup =
        type_map_raw(%{a: %{value_type: type_primitive_integer(), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      check_map_subtype(sub, sup, true)
    end

    test "subtype missing required key" do
      sub = type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_any()})

      sup =
        type_map_raw(%{a: %{value_type: type_primitive_integer(), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      check_map_subtype(sub, sup, false)
    end

    test "subtype has required key but wrong type" do
      sub =
        type_map_raw(%{a: %{value_type: type_literal_string("s"), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup =
        type_map_raw(%{a: %{value_type: type_primitive_integer(), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      check_map_subtype(sub, sup, false)
    end

    test "subtype has required key as optional" do
      sub =
        type_map_raw(%{a: %{value_type: type_literal_int(1), optional: true}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup =
        type_map_raw(%{a: %{value_type: type_primitive_integer(), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      check_map_subtype(sub, sup, false)
    end
    # 2. Known Elements (Optional in Supertype)
    test "subtype has optional key with correct type" do
      sub =
        type_map_raw(%{a: %{value_type: type_literal_int(1), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup =
        type_map_raw(%{a: %{value_type: type_primitive_integer(), optional: true}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      check_map_subtype(sub, sup, true)
    end

    test "subtype missing optional key" do
      sub = type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_any()})

      sup =
        type_map_raw(%{a: %{value_type: type_primitive_integer(), optional: true}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      check_map_subtype(sub, sup, true)
    end
    # 3. Index Signature Compatibility
    test "compatible index signatures" do
      # Ksuper <: Ksub (contravariant), Vsub <: Vsuper (covariant)
      # Ksuper=atom, Ksub=any. Vsub=int(1), Vsuper=number
      sub = type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_literal_int(1)})

      sup =
        type_map_raw(%{}, %{key_type: type_primitive_atom(), value_type: type_primitive_number()})

      check_map_subtype(sub, sup, true)
    end

    test "incompatible index signature (key type wrong variance)" do
      # Ksuper=any, Ksub=atom (any is not a subtype of atom)
      sub = type_map_raw(%{}, %{key_type: type_primitive_atom(), value_type: type_literal_int(1)})

      sup =
        type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_number()})

      check_map_subtype(sub, sup, false)
    end

    test "incompatible index signature (value type wrong variance)" do
      # Vsub=number, Vsuper=int(1) (number is not a subtype of int(1))
      sub =
        type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_number()})

      sup = type_map_raw(%{}, %{key_type: type_primitive_atom(), value_type: type_literal_int(1)})
      check_map_subtype(sub, sup, false)
    end
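    # The variance rule exercised by the three tests above can be sketched as a
    # standalone toy module. This is a hypothetical illustration, NOT the real
    # SubtypeChecker API: the subtype relation is reduced to a few primitive
    # tags (:any as top, :integer <: :number) purely to show the direction of
    # the checks.
    defmodule IndexSigSketch do
      # Toy subtype relation over primitive tags; :any is the top type.
      defp prim_subtype?(t, t), do: true
      defp prim_subtype?(_, :any), do: true
      defp prim_subtype?(:integer, :number), do: true
      defp prim_subtype?(_, _), do: false

      # sub <: sup requires sup.key_type <: sub.key_type (keys contravariant)
      # and sub.value_type <: sup.value_type (values covariant).
      def index_sig_subtype?(sub, sup) do
        prim_subtype?(sup.key_type, sub.key_type) and
          prim_subtype?(sub.value_type, sup.value_type)
      end
    end
    # Mirrors "compatible index signatures": (any -> integer) <: (atom -> number).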
    # 4. Width Subtyping (Extra keys in subtype conform to supertype's index signature)
    test "extra key in subtype conforms to supertype index signature" do
      # Subtype has :b (atom -> int(10)).
      # Supertype index signature is atom -> number.
      # Subtype index signature is any -> integer.
      # Check:
      # 1. Required/Optional super keys: Super has no known keys, so true.
      # 2. Index Signature Compatibility:
      #    K_super(atom) <: K_sub(any) -> true (contravariance)
      #    V_sub(integer) <: V_super(number) -> true (covariance)
      #    So, index signatures are compatible.
      # 3. Width: Extra key :b (atom) with value int(10) in subtype
      #    must conform to super_is (atom -> number).
      #    Key :b (atom) <: super_is.key_type (atom) -> true.
      #    Value int(10) <: super_is.value_type (number) -> true.
      # All conditions should pass.
      sub =
        type_map_raw(
          # Extra key
          %{b: %{value_type: type_literal_int(10), optional: false}},
          # Sub index signature
          %{key_type: type_primitive_any(), value_type: type_primitive_integer()}
        )

      sup =
        type_map_raw(
          # No known keys
          %{},
          # Super index signature
          %{key_type: type_primitive_atom(), value_type: type_primitive_number()}
        )

      check_map_subtype(sub, sup, true)
    end

    test "extra key in subtype, key type does not conform to supertype index signature" do
      # Subtype has "b" (string key), supertype index expects atom keys
      sub =
        type_map_raw(%{"b" => %{value_type: type_literal_int(10), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup =
        type_map_raw(%{}, %{key_type: type_primitive_atom(), value_type: type_primitive_integer()})

      check_map_subtype(sub, sup, false)
    end

    test "extra key in subtype, value type does not conform to supertype index signature" do
      # Subtype has :b -> "s" (string value), supertype index expects integer values
      sub =
        type_map_raw(%{b: %{value_type: type_literal_string("s"), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup =
        type_map_raw(%{}, %{key_type: type_primitive_atom(), value_type: type_primitive_integer()})

      check_map_subtype(sub, sup, false)
    end
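    # The width rule the three tests above exercise can also be sketched in
    # isolation. This is a hypothetical toy, NOT the real checker: key "types"
    # are reduced to a kind tag derived from the literal key, and value
    # conformance is an exact-tag match, just to show that every extra known
    # key/value pair must fit the supertype's index signature.
    defmodule WidthSketch do
      # Toy classifier: the kind of a literal key.
      defp key_kind(k) when is_atom(k), do: :atom
      defp key_kind(k) when is_binary(k), do: :string

      # Extra keys (unknown to the supertype) conform when each key's kind
      # matches the index key type and each value type matches the index
      # value type (exact match in this toy version).
      def extra_keys_conform?(extra_elements, index_sig) do
        Enum.all?(extra_elements, fn {key, %{value_type: vt}} ->
          key_kind(key) == index_sig.key_type and vt == index_sig.value_type
        end)
      end
    end
    # Mirrors the string-key failure case: "b" does not fit an atom-keyed index.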
    test "map is subtype of any" do
      sub =
        type_map_raw(%{a: %{value_type: type_literal_int(1), optional: false}}, %{
          key_type: type_primitive_any(),
          value_type: type_primitive_any()
        })

      sup = type_primitive_any()
      check_map_subtype(sub, sup, true)
    end

    test "nothing is subtype of map" do
      # Raw form
      sub = Types.get_primitive_type(:nothing)
      sup = type_map_raw(%{}, %{key_type: type_primitive_any(), value_type: type_primitive_any()})
      check_map_subtype(sub, sup, true)
    end
    test "complex map subtyping: required, optional, index, and width" do
      sub_map =
        type_map_raw(
          %{
            # Matches required
            req_present: %{value_type: type_literal_int(1), optional: false},
            # Matches optional
            opt_present: %{value_type: type_literal_string("sub"), optional: false},
            # Width, must match super index
            extra_key: %{value_type: type_literal_atom(:sub_atom), optional: false}
          },
          # Sub index (Ksuper <: Ksub)
          %{key_type: type_primitive_any(), value_type: type_primitive_any()}
        )

      super_map =
        type_map_raw(
          %{
            req_present: %{value_type: type_primitive_integer(), optional: false},
            opt_present: %{value_type: type_primitive_string(), optional: true},
            opt_absent: %{value_type: type_primitive_atom(), optional: true}
          },
          # Super index (Vsub <: Vsuper)
          %{key_type: type_primitive_atom(), value_type: type_primitive_atom()}
        )

      # Sub index key: any, value: any
      # Super index key: atom, value: atom
      # Ksuper (atom) <: Ksub (any) -> true
      # Vsub (any) <: Vsuper (atom) -> false. This makes sub_map fail.
      # Adjusting the sub index value type makes it pass:
      # Vsub (literal :another_sub_atom) <: Vsuper (atom) -> true
      # extra_key (:extra_key, type literal :sub_atom) must conform to super_is (atom -> atom):
      #   :extra_key (atom) <: super_is.key (atom) -> true
      #   literal :sub_atom <: super_is.value (atom) -> true

      # Corrected sub_map for a true case:
      sub_map_corrected =
        type_map_raw(
          %{
            # value_type: int(1) <: integer (super) -> true
            req_present: %{value_type: type_literal_int(1), optional: false},
            # value_type: string("sub") <: string (super) -> true
            opt_present: %{value_type: type_literal_string("sub"), optional: false},
            # extra key, value_type: atom(:sub_atom)
            extra_key: %{value_type: type_literal_atom(:sub_atom), optional: false}
          },
          # Sub index signature:
          %{key_type: type_primitive_any(), value_type: type_literal_atom(:another_sub_atom)}
        )

      # Super index signature: %{key_type: type_primitive_atom(), value_type: type_primitive_atom()}

      # Index Sig Check: Ksuper(atom) <: Ksub(any) -> true
      #                  Vsub(literal :another_sub_atom) <: Vsuper(atom) -> true

      # Width Check for :extra_key (atom) with value (literal :sub_atom):
      #   key :extra_key (atom) <: super_is.key_type (atom) -> true
      #   value (literal :sub_atom) <: super_is.value_type (atom) -> true
      check_map_subtype(sub_map_corrected, super_map, true)

      # Original sub_map fails because its index signature value type Vsub(any) is not <: Vsuper(atom)
      check_map_subtype(sub_map, super_map, false)
    end
  end
end

@@ -1,92 +0,0 @@
defmodule Til.TypeTupleTest do
  use ExUnit.Case, async: true

  alias Til.Parser
  alias Til.Typer
  alias Til.Typer.Types
  # Added alias
  alias Til.TestHelpers

  test "empty tuple {}" do
    {tuple_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node("{}")
    assert tuple_node.ast_node_type == :tuple_expression
    expected_type = %{type_kind: :tuple, element_types: []}
    TestHelpers.assert_node_typed_as(tuple_node, typed_nodes_map, expected_type)
  end

  test "tuple with one integer {1}" do
    {tuple_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node("{1}")
    assert tuple_node.ast_node_type == :tuple_expression

    expected_type = %{
      type_kind: :tuple,
      element_types: [%{type_kind: :literal, value: 1}]
    }

    TestHelpers.assert_node_typed_as(tuple_node, typed_nodes_map, expected_type)
  end

  test "tuple with integer and string {42 'hi'}" do
    {tuple_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node("{42 'hi'}")
    assert tuple_node.ast_node_type == :tuple_expression

    expected_type = %{
      type_kind: :tuple,
      element_types: [
        %{type_kind: :literal, value: 42},
        %{type_kind: :literal, value: "hi"}
      ]
    }

    TestHelpers.assert_node_typed_as(tuple_node, typed_nodes_map, expected_type)
  end

  test "tuple with nil and boolean {nil true}" do
    {tuple_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node("{nil true}")
    assert tuple_node.ast_node_type == :tuple_expression

    expected_type = %{
      type_kind: :tuple,
      element_types: [
        # These specific type structures come from the Types module
        Types.get_literal_type(:nil_atom),
        Types.get_literal_type(:true_atom)
      ]
    }

    # Note: deep_strip_id in TestHelpers handles the :id field if present in Types.get_literal_type results
    TestHelpers.assert_node_typed_as(tuple_node, typed_nodes_map, expected_type)
  end

  test "nested tuple {{1} 'a'}" do
    {tuple_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node("{{1} 'a'}")
    assert tuple_node.ast_node_type == :tuple_expression

    expected_type = %{
      type_kind: :tuple,
      element_types: [
        %{type_kind: :tuple, element_types: [%{type_kind: :literal, value: 1}]},
        %{type_kind: :literal, value: "a"}
      ]
    }

    TestHelpers.assert_node_typed_as(tuple_node, typed_nodes_map, expected_type)
  end

  test "tuple with a typed symbol after assignment" do
    # Two top-level expressions
    source = "(= x 10) {x 'str'}"
    # Get the second top-level expression (the tuple)
    {tuple_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)

    assert tuple_node.ast_node_type == :tuple_expression

    expected_type_clean = %{
      type_kind: :tuple,
      element_types: [
        # Type of x is literal 10
        %{type_kind: :literal, value: 10},
        %{type_kind: :literal, value: "str"}
      ]
    }

    TestHelpers.assert_node_typed_as(tuple_node, typed_nodes_map, expected_type_clean)
  end
end

@@ -1,264 +0,0 @@
# defmodule Til.TypeUnionTest do
#   use ExUnit.Case, async: true
#
#   alias Til.Typer.Types
#   alias Til.TestHelpers
#   alias MapSet
#
#   # --- Predefined Type Definitions for Assertions (Raw Forms) ---
#   defp type_primitive_integer, do: Types.get_primitive_type(:integer)
#   defp type_primitive_string, do: Types.get_primitive_type(:string)
#   defp type_primitive_atom, do: Types.get_primitive_type(:atom)
#   defp type_primitive_any, do: Types.get_primitive_type(:any)
#   defp type_primitive_nothing, do: Types.get_primitive_type(:nothing)
#
#   defp type_literal_int(val), do: %{type_kind: :literal, value: val}
#   defp type_literal_string(val), do: %{type_kind: :literal, value: val}
#   defp type_literal_atom(val), do: %{type_kind: :literal, value: val}
#
#   defp type_list_raw(element_type_def, length \\ nil) do
#     %{type_kind: :list, element_type: element_type_def, length: length}
#   end
#
#   defp type_map_raw(known_elements_raw, index_signature_raw) do
#     %{
#       type_kind: :map,
#       known_elements: known_elements_raw,
#       index_signature: index_signature_raw
#     }
#   end
#
#   defp type_union_raw(type_defs_list) do
#     %{type_kind: :union, types: MapSet.new(type_defs_list)}
#   end
#
#   # Helper to create the expected *cleaned* type_annotation_mismatch error structure
#   defp type_error_type_annotation_mismatch(actual_type_clean, expected_type_clean) do
#     %{
#       type_kind: :error,
#       reason: :type_annotation_mismatch,
#       actual_type: actual_type_clean,
#       expected_type: expected_type_clean
#     }
#   end
#
#   describe "(union ...) type specifier resolution and usage in (the ...)" do
#     test "(the (union integer string) 42) - integer matches" do
#       source = "(the (union integer string) 42)"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       expected_annotation_type =
#         type_union_raw([type_primitive_integer(), type_primitive_string()])
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
#
#       # Inner literal
#       literal_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, the_node.id)
#       TestHelpers.assert_node_typed_as(literal_node, typed_nodes_map, type_literal_int(42))
#     end
#
#     test "(the (union integer string) \"hello\") - string matches" do
#       source = "(the (union integer string) 'hello')"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       expected_annotation_type =
#         type_union_raw([type_primitive_integer(), type_primitive_string()])
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
#
#       # Inner literal
#       literal_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, the_node.id)
#
#       TestHelpers.assert_node_typed_as(
#         literal_node,
#         typed_nodes_map,
#         type_literal_string("hello")
#       )
#     end
#
#     test "(the (union integer string) :some_atom) - type mismatch" do
#       source = "(the (union integer string) :some_atom)"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       actual_type = type_literal_atom(:some_atom)
#
#       expected_annotated_type =
#         type_union_raw([type_primitive_integer(), type_primitive_string()])
#
#       expected_error =
#         type_error_type_annotation_mismatch(actual_type, expected_annotated_type)
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_error)
#     end
#
#     test "(the (union integer) 42) - single member union resolves to member type" do
#       source = "(the (union integer) 42)"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       # The annotation (union integer) should resolve to just integer.
#       # So, the 'the' expression's type is integer.
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, type_primitive_integer())
#     end
#
#     test "(the (union integer) \"s\") - single member union type mismatch" do
#       source = "(the (union integer) 's')"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       actual_type = type_literal_string("s")
#       # (union integer) resolves to integer
#       expected_annotated_type = type_primitive_integer()
#
#       expected_error =
#         type_error_type_annotation_mismatch(actual_type, expected_annotated_type)
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_error)
#     end
#
#     test "(the (union) 42) - empty union resolves to nothing, causes mismatch" do
#       source = "(the (union) 42)"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       actual_type = type_literal_int(42)
#       # (union) resolves to nothing
#       expected_annotated_type = type_primitive_nothing()
#
#       expected_error =
#         type_error_type_annotation_mismatch(actual_type, expected_annotated_type)
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_error)
#     end
#
#     test "(the (union) nil) - empty union resolves to nothing, nil is not nothing" do
#       # In Tilly, nil is a literal atom, not the type :nothing.
#       # :nothing is the bottom type; no values inhabit it.
#       source = "(the (union) nil)"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       # type of the 'nil' symbol
#       actual_type = %{type_kind: :literal, value: nil}
#       expected_annotated_type = type_primitive_nothing()
#
#       expected_error =
#         type_error_type_annotation_mismatch(actual_type, expected_annotated_type)
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_error)
#     end
#
#     test "nested unions: (the (union atom (union integer string)) :foo)" do
#       source = "(the (union atom (union integer string)) :foo)"
#       {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       # (union integer string)
#       inner_union = type_union_raw([type_primitive_integer(), type_primitive_string()])
#       # (union atom inner_union)
#       expected_annotation_type = type_union_raw([type_primitive_atom(), inner_union])
#
#       TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
#     end
#
#     test "nested unions: (the (union (union integer string) atom) 1)" do
#       source = "(the (union (union integer string) atom) 1)"
# {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
|
|
||||||
#
|
|
||||||
# inner_union = type_union_raw([type_primitive_integer(), type_primitive_string()])
|
|
||||||
# expected_annotation_type = type_union_raw([inner_union, type_primitive_atom()])
|
|
||||||
#
|
|
||||||
# TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# test "unions with complex types: (the (union (list integer) (map atom string)) [1 2])" do
|
|
||||||
# source = "(the (union (list integer) (map atom string)) [1 2])"
|
|
||||||
# {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
|
|
||||||
#
|
|
||||||
# # Length is dynamic for annotation
|
|
||||||
# list_int_type = type_list_raw(type_primitive_integer())
|
|
||||||
#
|
|
||||||
# map_atom_string_type =
|
|
||||||
# type_map_raw(
|
|
||||||
# %{},
|
|
||||||
# %{key_type: type_primitive_atom(), value_type: type_primitive_string()}
|
|
||||||
# )
|
|
||||||
#
|
|
||||||
# expected_annotation_type = type_union_raw([list_int_type, map_atom_string_type])
|
|
||||||
# TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
|
|
||||||
#
|
|
||||||
# # Check inner list type
|
|
||||||
# # [1,2] -> type is (list (union (literal 1) (literal 2)) 2)
|
|
||||||
# # This is a subtype of (list integer)
|
|
||||||
# inner_list_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, the_node.id)
|
|
||||||
#
|
|
||||||
# # The type of [1,2] is (list (union (literal 1) (literal 2)) length: 2)
|
|
||||||
# # which simplifies to (list (literal 1 | literal 2) length: 2)
|
|
||||||
# # This is a subtype of (list integer).
|
|
||||||
# # The assertion on `the_node` already confirms the subtyping worked.
|
|
||||||
# # We can assert the specific type of the inner list if needed for clarity.
|
|
||||||
# expected_inner_list_element_type =
|
|
||||||
# type_union_raw([type_literal_int(1), type_literal_int(2)])
|
|
||||||
#
|
|
||||||
# expected_inner_list_type = type_list_raw(expected_inner_list_element_type, 2)
|
|
||||||
# TestHelpers.assert_node_typed_as(inner_list_node, typed_nodes_map, expected_inner_list_type)
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# test "unions with complex types: (the (union (list integer) (map atom string)) m{:a \"b\"})" do
|
|
||||||
# source = "(the (union (list integer) (map atom string)) m{:a 'b'})"
|
|
||||||
# {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
|
|
||||||
#
|
|
||||||
# list_int_type = type_list_raw(type_primitive_integer())
|
|
||||||
#
|
|
||||||
# map_atom_string_type =
|
|
||||||
# type_map_raw(
|
|
||||||
# %{},
|
|
||||||
# %{key_type: type_primitive_atom(), value_type: type_primitive_string()}
|
|
||||||
# )
|
|
||||||
#
|
|
||||||
# expected_annotation_type = type_union_raw([list_int_type, map_atom_string_type])
|
|
||||||
#
|
|
||||||
# # The actual type of m{:a 'b'}
|
|
||||||
# actual_map_type =
|
|
||||||
# type_map_raw(
|
|
||||||
# %{a: %{value_type: type_literal_string("b"), optional: false}},
|
|
||||||
# %{key_type: type_primitive_any(), value_type: type_primitive_any()}
|
|
||||||
# )
|
|
||||||
#
|
|
||||||
# expected_error =
|
|
||||||
# type_error_type_annotation_mismatch(actual_map_type, expected_annotation_type)
|
|
||||||
#
|
|
||||||
# TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_error)
|
|
||||||
#
|
|
||||||
# # We can still check the inner map node's type independently if desired,
|
|
||||||
# # as its typing is correct, even if the 'the' expression results in an error.
|
|
||||||
# inner_map_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, the_node.id)
|
|
||||||
# TestHelpers.assert_node_typed_as(inner_map_node, typed_nodes_map, actual_map_type)
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# test "union type specifier with unknown member type defaults to any for that member" do
|
|
||||||
# source = "(the (union integer foobar) 42)"
|
|
||||||
# {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
|
|
||||||
#
|
|
||||||
# # 'foobar' resolves to 'any'. So (union integer any) is effectively 'any'.
|
|
||||||
# # However, the structure of the union should be (union integer any) before simplification.
|
|
||||||
# # The subtyping check `literal 42 <: (union integer any)` will be true.
|
|
||||||
# # The type of the 'the' expression will be `(union integer any)`.
|
|
||||||
# expected_annotation_type = type_union_raw([type_primitive_integer(), type_primitive_any()])
|
|
||||||
# TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# test "union type specifier with all unknown member types" do
|
|
||||||
# source = "(the (union foo bar) 42)"
|
|
||||||
# {the_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
|
|
||||||
#
|
|
||||||
# # (union foo bar) -> (union any any) -> any (after potential simplification by interner/subtype)
|
|
||||||
# # For now, let's expect the raw structure (union any any)
|
|
||||||
# expected_annotation_type = type_union_raw([type_primitive_any(), type_primitive_any()])
|
|
||||||
# TestHelpers.assert_node_typed_as(the_node, typed_nodes_map, expected_annotation_type)
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# test "malformed union: (union integer string) - not in 'the'" do
|
|
||||||
# # This is not a (the ...) expression, so it's just an s-expression.
|
|
||||||
# # Its type will be 'any' by default for unknown s-expression operators.
|
|
||||||
# source = "(union integer string)"
|
|
||||||
# {sexpr_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
|
|
||||||
# TestHelpers.assert_node_typed_as(sexpr_node, typed_nodes_map, type_primitive_any())
|
|
||||||
# end
|
|
||||||
# end
|
|
||||||
# end
|
|
||||||
@ -1,246 +0,0 @@
|
|||||||
# defmodule Til.TypingSimpleTest do
#   use ExUnit.Case, async: true
#   # let's always alias AstUtils, Typer and Parser for convenience
#   alias Til.AstUtils
#   alias Til.Parser
#   alias Til.Typer
#   # Added alias
#   alias Til.TestHelpers
#
#   alias MapSet, as: Set
#
#   describe "simple type inference tests" do
#     test "types a literal integer" do
#       source = "42"
#       {integer_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       assert integer_node.ast_node_type == :literal_integer
#
#       TestHelpers.assert_node_typed_as(integer_node, typed_nodes_map, %{
#         type_kind: :literal,
#         value: 42
#       })
#     end
#
#     test "types a literal string" do
#       source = "'hello'"
#       {string_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#
#       assert string_node.ast_node_type == :literal_string
#
#       TestHelpers.assert_node_typed_as(string_node, typed_nodes_map, %{
#         type_kind: :literal,
#         value: "hello"
#       })
#     end
#
#     # test "types a simple s-expression (e.g. function call with literals)" do
#     #   source = "(add 1 2)"
#     #   {:ok, nodes_map} = Parser.parse(source)
#     #
#     #   flunk("Test not implemented: #{inspect(nodes_map)}")
#     # end
#
#     test "types an s-expression with a symbol lookup in an environment" do
#       source = """
#       (= x 5)
#       (= y x)
#       """
#
#       # Parse once
#       {:ok, parsed_nodes_map} = Parser.parse(source)
#       # Typecheck once
#       {:ok, typed_nodes_map} = Typer.type_check(parsed_nodes_map)
#
#       file_node = Enum.find(Map.values(typed_nodes_map), &(&1.ast_node_type == :file))
#       refute is_nil(file_node)
#       assert length(file_node.children) == 2, "Expected two top-level s-expressions"
#
#       # --- First assignment: (= x 5) ---
#       s_expr_1_node = TestHelpers.get_nth_child_node(typed_nodes_map, 0, file_node.id)
#       assert s_expr_1_node.ast_node_type == :s_expression
#       assert length(s_expr_1_node.children) == 3
#
#       symbol_x_lhs_node = TestHelpers.get_nth_child_node(typed_nodes_map, 1, s_expr_1_node.id)
#       assert symbol_x_lhs_node.ast_node_type == :symbol
#       assert symbol_x_lhs_node.name == "x"
#
#       literal_5_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, s_expr_1_node.id)
#       assert literal_5_node.ast_node_type == :literal_integer
#       assert literal_5_node.value == 5
#
#       TestHelpers.assert_node_typed_as(literal_5_node, typed_nodes_map, %{
#         type_kind: :literal,
#         value: 5
#       })
#
#       TestHelpers.assert_node_typed_as(s_expr_1_node, typed_nodes_map, %{
#         type_kind: :literal,
#         value: 5
#       })
#
#       # --- Second assignment: (= y x) ---
#       s_expr_2_node = TestHelpers.get_nth_child_node(typed_nodes_map, 1, file_node.id)
#       assert s_expr_2_node.ast_node_type == :s_expression
#       assert length(s_expr_2_node.children) == 3
#
#       symbol_y_lhs_node = TestHelpers.get_nth_child_node(typed_nodes_map, 1, s_expr_2_node.id)
#       assert symbol_y_lhs_node.ast_node_type == :symbol
#       assert symbol_y_lhs_node.name == "y"
#
#       symbol_x_rhs_node = TestHelpers.get_nth_child_node(typed_nodes_map, 2, s_expr_2_node.id)
#       assert symbol_x_rhs_node.ast_node_type == :symbol
#       assert symbol_x_rhs_node.name == "x"
#
#       TestHelpers.assert_node_typed_as(symbol_x_rhs_node, typed_nodes_map, %{
#         type_kind: :literal,
#         value: 5
#       })
#
#       TestHelpers.assert_node_typed_as(s_expr_2_node, typed_nodes_map, %{
#         type_kind: :literal,
#         value: 5
#       })
#
#       # Assert that 'y' in the environment (and thus its node if we were to look it up again) would be integer.
#       # The symbol_y_lhs_node itself might be typed as :any before the assignment's effect is fully "realized" on it.
#       # The critical part is that the environment passed forward contains y: :til_type_integer.
#       # The final environment is inspected in Typer.type_check, which can be manually verified for now.
#       # For this test, checking the type of the assignment expression (s_expr_2_node) and the RHS (symbol_x_rhs_node) is sufficient.
#     end
#
#     test "types an if expression with same type in both branches, ambiguous condition" do
#       source = """
#       (= cond_var some_ambiguous_symbol)
#       (if cond_var 1 2)
#       """
#
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#
#       assert if_node.ast_node_type == :s_expression
#
#       expected_type_1 = %{type_kind: :literal, value: 1}
#       expected_type_2 = %{type_kind: :literal, value: 2}
#
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, %{
#         type_kind: :union,
#         types: Set.new([expected_type_1, expected_type_2])
#       })
#     end
#
#     test "types an if expression with different types, ambiguous condition, resulting in a union" do
#       source = """
#       (= cond_var some_ambiguous_symbol)
#       (if cond_var 1 'hello')
#       """
#
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#
#       assert if_node.ast_node_type == :s_expression
#
#       expected_int_type = %{type_kind: :literal, value: 1}
#       expected_str_type = %{type_kind: :literal, value: "hello"}
#
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, %{
#         type_kind: :union,
#         types: Set.new([expected_int_type, expected_str_type])
#       })
#     end
#
#     test "types an if expression with a missing else branch, ambiguous condition (union with nil type)" do
#       source = """
#       (= cond_var some_ambiguous_symbol)
#       (if cond_var 1)
#       """
#
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#
#       assert if_node.ast_node_type == :s_expression
#
#       expected_int_type = %{type_kind: :literal, value: 1}
#       expected_nil_type = %{type_kind: :literal, value: nil}
#
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, %{
#         type_kind: :union,
#         types: Set.new([expected_int_type, expected_nil_type])
#       })
#     end
#
#     test "types an if expression where then branch is nil, missing else, condition true (results in nil type)" do
#       source = """
#       (= x nil)
#       (if true x)
#       """
#
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#       assert if_node.ast_node_type == :s_expression
#       expected_nil_type = %{type_kind: :literal, value: nil}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_nil_type)
#     end
#
#     test "types an if expression where the condition is true, then branch is 1, missing else (results in `1` type)" do
#       source = """
#       (= x 1)
#       (if true x)
#       """
#
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#       assert if_node.ast_node_type == :s_expression
#       expected_type_of_1 = %{type_kind: :literal, value: 1}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_type_of_1)
#     end
#
#     test "types an if expression where then and else are nil, condition true (results in nil type)" do
#       source = """
#       (= x nil)
#       (if true x x)
#       """
#
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#       assert if_node.ast_node_type == :s_expression
#       expected_nil_type = %{type_kind: :literal, value: nil}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_nil_type)
#     end
#
#     test "if expression with statically false condition, missing else (results in nil type)" do
#       source = "(if false 123)"
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#       assert if_node.ast_node_type == :s_expression
#       expected_nil_type = %{type_kind: :literal, value: nil}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_nil_type)
#     end
#
#     test "if expression with statically false condition, with else branch (results in else branch type)" do
#       source = "(if false 123 'else_val')"
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#       assert if_node.ast_node_type == :s_expression
#       expected_else_type = %{type_kind: :literal, value: "else_val"}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_else_type)
#     end
#
#     test "if expression with truthy non-boolean literal condition type (integer)" do
#       source = """
#       (if 123 'then' 'else')
#       """
#       # Since 123 is truthy, the type of the if expression should be the type of 'then'.
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_first_node(source)
#       assert if_node.ast_node_type == :s_expression
#
#       expected_then_type = %{type_kind: :literal, value: "then"}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_then_type)
#     end
#
#     test "if expression with truthy non-boolean symbol condition type (typed as integer)" do
#       source = """
#       (= my_int_cond 123)
#       (if my_int_cond 'then' 'else')
#       """
#       # my_int_cond is 123 (truthy), so the type of the if expression should be the type of 'then'.
#       {if_node, typed_nodes_map} = TestHelpers.typecheck_and_get_nth_node(source, 1)
#       assert if_node.ast_node_type == :s_expression
#
#       expected_then_type = %{type_kind: :literal, value: "then"}
#       TestHelpers.assert_node_typed_as(if_node, typed_nodes_map, expected_then_type)
#     end
#   end
# end
@ -1,8 +0,0 @@
defmodule TilTest do
  use ExUnit.Case
  doctest Til

  test "greets the world" do
    assert Til.hello() == :world
  end
end
@ -1,77 +0,0 @@
defmodule Tilly.BDD.AtomBoolOpsTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.AtomBoolOps

  describe "compare_elements/2" do
    test "correctly compares atoms" do
      assert AtomBoolOps.compare_elements(:apple, :banana) == :lt
      assert AtomBoolOps.compare_elements(:banana, :apple) == :gt
      assert AtomBoolOps.compare_elements(:cherry, :cherry) == :eq
    end
  end

  describe "equal_element?/2" do
    test "correctly checks atom equality" do
      assert AtomBoolOps.equal_element?(:apple, :apple) == true
      assert AtomBoolOps.equal_element?(:apple, :banana) == false
    end
  end

  describe "hash_element/1" do
    test "hashes atoms consistently" do
      assert is_integer(AtomBoolOps.hash_element(:foo))
      assert AtomBoolOps.hash_element(:foo) == AtomBoolOps.hash_element(:foo)
      assert AtomBoolOps.hash_element(:foo) != AtomBoolOps.hash_element(:bar)
    end
  end

  describe "leaf operations" do
    test "empty_leaf/0 returns false" do
      assert AtomBoolOps.empty_leaf() == false
    end

    test "any_leaf/0 returns true" do
      assert AtomBoolOps.any_leaf() == true
    end

    test "is_empty_leaf?/1" do
      assert AtomBoolOps.is_empty_leaf?(false) == true
      assert AtomBoolOps.is_empty_leaf?(true) == false
    end

    test "union_leaves/3" do
      assert AtomBoolOps.union_leaves(%{}, false, false) == false
      assert AtomBoolOps.union_leaves(%{}, true, false) == true
      assert AtomBoolOps.union_leaves(%{}, false, true) == true
      assert AtomBoolOps.union_leaves(%{}, true, true) == true
    end

    test "intersection_leaves/3" do
      assert AtomBoolOps.intersection_leaves(%{}, false, false) == false
      assert AtomBoolOps.intersection_leaves(%{}, true, false) == false
      assert AtomBoolOps.intersection_leaves(%{}, false, true) == false
      assert AtomBoolOps.intersection_leaves(%{}, true, true) == true
    end

    test "negation_leaf/2" do
      assert AtomBoolOps.negation_leaf(%{}, false) == true
      assert AtomBoolOps.negation_leaf(%{}, true) == false
    end
  end

  describe "test_leaf_value/1" do
    test "returns :empty for false" do
      assert AtomBoolOps.test_leaf_value(false) == :empty
    end

    test "returns :full for true" do
      assert AtomBoolOps.test_leaf_value(true) == :full
    end

    # Conceptual test if atoms had other leaf values
    # test "returns :other for other values" do
    #   assert AtomBoolOps.test_leaf_value(:some_other_leaf_marker) == :other
    # end
  end
end
@ -1,67 +0,0 @@
defmodule Tilly.BDD.IntegerBoolOpsTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.IntegerBoolOps

  describe "compare_elements/2" do
    test "correctly compares integers" do
      assert IntegerBoolOps.compare_elements(1, 2) == :lt
      assert IntegerBoolOps.compare_elements(2, 1) == :gt
      assert IntegerBoolOps.compare_elements(1, 1) == :eq
    end
  end

  describe "equal_element?/2" do
    test "correctly checks equality of integers" do
      assert IntegerBoolOps.equal_element?(1, 1) == true
      assert IntegerBoolOps.equal_element?(1, 2) == false
    end
  end

  describe "hash_element/1" do
    test "returns the integer itself as hash" do
      assert IntegerBoolOps.hash_element(123) == 123
      assert IntegerBoolOps.hash_element(-5) == -5
    end
  end

  describe "leaf operations" do
    test "empty_leaf/0 returns false" do
      assert IntegerBoolOps.empty_leaf() == false
    end

    test "any_leaf/0 returns true" do
      assert IntegerBoolOps.any_leaf() == true
    end

    test "is_empty_leaf?/1" do
      assert IntegerBoolOps.is_empty_leaf?(false) == true
      assert IntegerBoolOps.is_empty_leaf?(true) == false
    end
  end

  describe "union_leaves/3" do
    test "computes boolean OR" do
      assert IntegerBoolOps.union_leaves(%{}, true, true) == true
      assert IntegerBoolOps.union_leaves(%{}, true, false) == true
      assert IntegerBoolOps.union_leaves(%{}, false, true) == true
      assert IntegerBoolOps.union_leaves(%{}, false, false) == false
    end
  end

  describe "intersection_leaves/3" do
    test "computes boolean AND" do
      assert IntegerBoolOps.intersection_leaves(%{}, true, true) == true
      assert IntegerBoolOps.intersection_leaves(%{}, true, false) == false
      assert IntegerBoolOps.intersection_leaves(%{}, false, true) == false
      assert IntegerBoolOps.intersection_leaves(%{}, false, false) == false
    end
  end

  describe "negation_leaf/2" do
    test "computes boolean NOT" do
      assert IntegerBoolOps.negation_leaf(%{}, true) == false
      assert IntegerBoolOps.negation_leaf(%{}, false) == true
    end
  end
end
@ -1,123 +0,0 @@
defmodule Tilly.BDD.NodeTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.Node

  describe "Smart Constructors" do
    test "mk_true/0 returns true" do
      assert Node.mk_true() == true
    end

    test "mk_false/0 returns false" do
      assert Node.mk_false() == false
    end

    test "mk_leaf/1 creates a leaf node" do
      assert Node.mk_leaf(:some_value) == {:leaf, :some_value}
      assert Node.mk_leaf(123) == {:leaf, 123}
    end

    test "mk_split/4 creates a split node" do
      assert Node.mk_split(:el, :p_id, :i_id, :n_id) == {:split, :el, :p_id, :i_id, :n_id}
    end
  end

  describe "Predicates" do
    setup do
      %{
        true_node: Node.mk_true(),
        false_node: Node.mk_false(),
        leaf_node: Node.mk_leaf("data"),
        split_node: Node.mk_split(1, 2, 3, 4)
      }
    end

    test "is_true?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
      assert Node.is_true?(t) == true
      assert Node.is_true?(f) == false
      assert Node.is_true?(l) == false
      assert Node.is_true?(s) == false
    end

    test "is_false?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
      assert Node.is_false?(f) == true
      assert Node.is_false?(t) == false
      assert Node.is_false?(l) == false
      assert Node.is_false?(s) == false
    end

    test "is_leaf?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
      assert Node.is_leaf?(l) == true
      assert Node.is_leaf?(t) == false
      assert Node.is_leaf?(f) == false
      assert Node.is_leaf?(s) == false
    end

    test "is_split?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
      assert Node.is_split?(s) == true
      assert Node.is_split?(t) == false
      assert Node.is_split?(f) == false
      assert Node.is_split?(l) == false
    end
  end

  describe "Accessors" do
    setup do
      %{
        leaf_node: Node.mk_leaf("leaf_data"),
        split_node: Node.mk_split(:elem_id, :pos_child, :ign_child, :neg_child)
      }
    end

    test "value/1 for leaf node", %{leaf_node: l} do
      assert Node.value(l) == "leaf_data"
    end

    test "value/1 raises for non-leaf node" do
      assert_raise ArgumentError, ~r/Not a leaf node/, fn -> Node.value(Node.mk_true()) end

      assert_raise ArgumentError, ~r/Not a leaf node/, fn ->
        Node.value(Node.mk_split(1, 2, 3, 4))
      end
    end

    test "element/1 for split node", %{split_node: s} do
      assert Node.element(s) == :elem_id
    end

    test "element/1 raises for non-split node" do
      assert_raise ArgumentError, ~r/Not a split node/, fn -> Node.element(Node.mk_true()) end
      assert_raise ArgumentError, ~r/Not a split node/, fn -> Node.element(Node.mk_leaf(1)) end
    end

    test "positive_child/1 for split node", %{split_node: s} do
      assert Node.positive_child(s) == :pos_child
    end

    test "positive_child/1 raises for non-split node" do
      assert_raise ArgumentError, ~r/Not a split node/, fn ->
        Node.positive_child(Node.mk_leaf(1))
      end
    end

    test "ignore_child/1 for split node", %{split_node: s} do
      assert Node.ignore_child(s) == :ign_child
    end

    test "ignore_child/1 raises for non-split node" do
      assert_raise ArgumentError, ~r/Not a split node/, fn ->
        Node.ignore_child(Node.mk_leaf(1))
      end
    end

    test "negative_child/1 for split node", %{split_node: s} do
      assert Node.negative_child(s) == :neg_child
    end

    test "negative_child/1 raises for non-split node" do
      assert_raise ArgumentError, ~r/Not a split node/, fn ->
        Node.negative_child(Node.mk_leaf(1))
      end
    end
  end
end
@ -1,191 +0,0 @@
defmodule Tilly.BDD.OpsTest do
|
|
||||||
use ExUnit.Case, async: true
|
|
||||||
|
|
||||||
alias Tilly.BDD
|
|
||||||
alias Tilly.BDD.Node
|
|
||||||
alias Tilly.BDD.Ops
|
|
||||||
alias Tilly.BDD.IntegerBoolOps # Using a concrete ops_module for testing
|
|
||||||
|
|
||||||
setup do
|
|
||||||
typing_ctx = BDD.init_bdd_store(%{})
|
|
||||||
# Pre-intern some common elements for tests if needed, e.g., integers
|
|
||||||
# For now, rely on ops to intern elements as they are used.
|
|
||||||
%{initial_ctx: typing_ctx}
|
|
||||||
end
|
|
||||||
|
|
||||||
  describe "leaf/3" do
    test "interning an empty leaf value returns predefined false_id", %{initial_ctx: ctx} do
      {new_ctx, node_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      assert node_id == BDD.false_node_id()
      assert new_ctx.bdd_store.ops_cache == ctx.bdd_store.ops_cache # Cache not used for this path
    end

    test "interning a full leaf value returns predefined true_id", %{initial_ctx: ctx} do
      {new_ctx, node_id} = Ops.leaf(ctx, true, IntegerBoolOps)
      assert node_id == BDD.true_node_id()
      assert new_ctx.bdd_store.ops_cache == ctx.bdd_store.ops_cache
    end

    @tag :skip
    test "interning a new 'other' leaf value returns a new ID", %{initial_ctx: _ctx} do
      # Assuming IntegerBoolOps.test_leaf_value/1 would return :other for non-booleans.
      # For this test, we'd need an ops_module where e.g. an integer is an :other leaf.
      # Let's simulate with a mock or by extending IntegerBoolOps if it were not read-only.
      # For now, this test is conceptual for boolean leaves.
      # If IntegerBoolOps were extended:
      # defmodule MockIntegerOps do
      #   defdelegate compare_elements(e1, e2), to: IntegerBoolOps
      #   defdelegate equal_element?(e1, e2), to: IntegerBoolOps
      #   # ... other delegates
      #   def test_leaf_value(10), do: :other # Treat 10 as a specific leaf
      #   def test_leaf_value(true), do: :full
      #   def test_leaf_value(false), do: :empty
      # end
      # {ctx_after_intern, node_id} = Ops.leaf(ctx, 10, MockIntegerOps)
      # assert node_id != BDD.true_node_id() and node_id != BDD.false_node_id()
      # assert BDD.get_node_data(ctx_after_intern, node_id).structure == Node.mk_leaf(10)
      # Placeholder for more complex leaf types. Test is skipped.
    end
  end

  describe "split/6 basic simplifications" do
    test "if i_id is true, returns true_id", %{initial_ctx: ctx} do
      {_p_ctx, p_id} = Ops.leaf(ctx, false, IntegerBoolOps) # dummy
      {_n_ctx, n_id} = Ops.leaf(ctx, false, IntegerBoolOps) # dummy
      true_id = BDD.true_node_id()

      {new_ctx, result_id} = Ops.split(ctx, 10, p_id, true_id, n_id, IntegerBoolOps)
      assert result_id == true_id
      assert new_ctx == ctx # No new nodes or cache entries expected for this rule
    end

    test "if p_id == n_id and p_id == i_id, returns p_id", %{initial_ctx: ctx} do
      {ctx, p_id} = BDD.get_or_intern_node(ctx, Node.mk_leaf(false), IntegerBoolOps) # some leaf
      i_id = p_id
      n_id = p_id

      {_new_ctx, result_id} = Ops.split(ctx, 10, p_id, i_id, n_id, IntegerBoolOps)
      assert result_id == p_id
      # The cache could be touched if union_bdds were called, but when
      # p_id == i_id this rule applies directly without a union.
    end

    test "if p_id == n_id and p_id != i_id, returns union(p_id, i_id)", %{initial_ctx: ctx} do
      {ctx, p_id} = BDD.get_or_intern_node(ctx, Node.mk_leaf(false), IntegerBoolOps)
      {ctx, i_id} = BDD.get_or_intern_node(ctx, Node.mk_leaf(true), IntegerBoolOps) # different leaf
      n_id = p_id

      # The expected union of p_id (false leaf) and i_id (true leaf) is true_id.
      # This relies on union_bdds working.
      {_new_ctx, result_id} = Ops.split(ctx, 10, p_id, i_id, n_id, IntegerBoolOps)
      expected_union_id = BDD.true_node_id() # union of false leaf and true leaf
      assert result_id == expected_union_id
    end

    test "interns a new split node if no simplification rule applies", %{initial_ctx: ctx} do
      {ctx, p_id} = Ops.leaf(ctx, false, IntegerBoolOps) # false_node_id
      {ctx, i_id} = Ops.leaf(ctx, false, IntegerBoolOps) # false_node_id
      {ctx, n_id} = Ops.leaf(ctx, true, IntegerBoolOps) # true_node_id (different from p_id)

      element = 20
      {new_ctx, split_node_id} = Ops.split(ctx, element, p_id, i_id, n_id, IntegerBoolOps)

      assert split_node_id != p_id and split_node_id != i_id and split_node_id != n_id
      assert split_node_id != BDD.true_node_id() and split_node_id != BDD.false_node_id()

      node_data = BDD.get_node_data(new_ctx, split_node_id)
      assert node_data.structure == Node.mk_split(element, p_id, i_id, n_id)
      assert node_data.ops_module == IntegerBoolOps
      assert new_ctx.bdd_store.next_node_id > ctx.bdd_store.next_node_id
    end
  end

  describe "union_bdds/3" do
    test "A U A = A", %{initial_ctx: ctx} do
      {ctx, a_id} = Ops.leaf(ctx, false, IntegerBoolOps) # false_node_id
      {new_ctx, result_id} = Ops.union_bdds(ctx, a_id, a_id)
      assert result_id == a_id
      assert Map.has_key?(new_ctx.bdd_store.ops_cache, {:union, a_id, a_id})
    end

    test "A U True = True", %{initial_ctx: ctx} do
      {ctx, a_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      true_id = BDD.true_node_id()
      {_new_ctx, result_id} = Ops.union_bdds(ctx, a_id, true_id)
      assert result_id == true_id
    end

    test "A U False = A", %{initial_ctx: ctx} do
      {ctx, a_id} = Ops.leaf(ctx, true, IntegerBoolOps) # true_node_id
      false_id = BDD.false_node_id()
      {_new_ctx, result_id} = Ops.union_bdds(ctx, a_id, false_id)
      assert result_id == a_id
    end

    test "union of two distinct leaves", %{initial_ctx: ctx} do
      # leaf(false) U leaf(true) = leaf(false OR true) = leaf(true) -> true_node_id
      {ctx, leaf_false_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      {ctx, leaf_true_id} = Ops.leaf(ctx, true, IntegerBoolOps) # this is BDD.true_node_id()

      {_new_ctx, result_id} = Ops.union_bdds(ctx, leaf_false_id, leaf_true_id)
      assert result_id == BDD.true_node_id()
    end

    test "union of two simple split nodes with same element", %{initial_ctx: ctx} do
      # BDD1: split(10, True, False, False)
      # BDD2: split(10, False, True, False)
      # Union: split(10, True U False, False U True, False U False)
      #      = split(10, True, True, False)

      true_id = BDD.true_node_id()
      false_id = BDD.false_node_id()

      {ctx, bdd1_id} = Ops.split(ctx, 10, true_id, false_id, false_id, IntegerBoolOps)
      {ctx, bdd2_id} = Ops.split(ctx, 10, false_id, true_id, false_id, IntegerBoolOps)

      {final_ctx, union_id} = Ops.union_bdds(ctx, bdd1_id, bdd2_id)

      # Expected structure
      {_final_ctx, expected_bdd_id} = Ops.split(final_ctx, 10, true_id, true_id, false_id, IntegerBoolOps)
      assert union_id == expected_bdd_id
    end

    test "union of two simple split nodes with different elements (x1 < x2)", %{initial_ctx: ctx} do
      # BDD1: split(10, True, False, False)
      # BDD2: split(20, False, True, False)
      # Union (x1 < x2): split(10, p1, i1 U BDD2, n1)
      #                = split(10, True, False U BDD2, False)
      #                = split(10, True, BDD2, False)

      {ctx, bdd1_p1_id} = Ops.leaf(ctx, true, IntegerBoolOps)
      {ctx, bdd1_i1_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      {ctx, bdd1_n1_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      {ctx, bdd1_id} = Ops.split(ctx, 10, bdd1_p1_id, bdd1_i1_id, bdd1_n1_id, IntegerBoolOps)

      {ctx, bdd2_p2_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      {ctx, bdd2_i2_id} = Ops.leaf(ctx, true, IntegerBoolOps)
      {ctx, bdd2_n2_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      {ctx, bdd2_id} = Ops.split(ctx, 20, bdd2_p2_id, bdd2_i2_id, bdd2_n2_id, IntegerBoolOps)

      {final_ctx, union_id} = Ops.union_bdds(ctx, bdd1_id, bdd2_id)

      # Expected structure: split(10, True, BDD2, False)
      {_final_ctx, expected_bdd_id} = Ops.split(final_ctx, 10, bdd1_p1_id, bdd2_id, bdd1_n1_id, IntegerBoolOps)
      assert union_id == expected_bdd_id
    end

    test "uses cache for repeated union operations", %{initial_ctx: ctx} do
      {ctx, a_id} = Ops.leaf(ctx, false, IntegerBoolOps)
      {ctx, b_id} = Ops.leaf(ctx, true, IntegerBoolOps)

      {ctx_after_first_union, _result1_id} = Ops.union_bdds(ctx, a_id, b_id)
      cache_after_first = ctx_after_first_union.bdd_store.ops_cache

      {ctx_after_second_union, _result2_id} = Ops.union_bdds(ctx_after_first_union, a_id, b_id)
      # The BDD store itself (nodes, next_id) should not change on a cache hit,
      # and the ops_cache map will be unchanged if the result was already cached.
      assert ctx_after_second_union.bdd_store.ops_cache == cache_after_first
      assert ctx_after_second_union.bdd_store.next_node_id == ctx_after_first_union.bdd_store.next_node_id
    end
  end
end
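The simplification rules exercised by the `split/6` tests above can be summarized as a single `cond` over the three branch IDs. A hypothetical sketch (not the actual `Ops` implementation, which may differ in structure and caching):

```elixir
# p_id / i_id / n_id are the yes / don't-care / no branches of a ternary node.
def split(ctx, element, p_id, i_id, n_id, ops_module) do
  true_id = BDD.true_node_id()

  cond do
    # If the don't-care branch is True, the whole node is True.
    i_id == true_id ->
      {ctx, true_id}

    # If all three branches agree, the test is irrelevant.
    p_id == n_id and p_id == i_id ->
      {ctx, p_id}

    # If yes and no agree but differ from don't-care, collapse to their union.
    p_id == n_id ->
      union_bdds(ctx, p_id, i_id)

    # Otherwise intern a fresh split node.
    true ->
      BDD.get_or_intern_node(ctx, Node.mk_split(element, p_id, i_id, n_id), ops_module)
  end
end
```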
@@ -1,72 +0,0 @@
defmodule Tilly.BDD.StringBoolOpsTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.StringBoolOps

  describe "compare_elements/2" do
    test "correctly compares strings" do
      assert StringBoolOps.compare_elements("apple", "banana") == :lt
      assert StringBoolOps.compare_elements("banana", "apple") == :gt
      assert StringBoolOps.compare_elements("cherry", "cherry") == :eq
    end
  end

  describe "equal_element?/2" do
    test "correctly checks string equality" do
      assert StringBoolOps.equal_element?("apple", "apple") == true
      assert StringBoolOps.equal_element?("apple", "banana") == false
    end
  end

  describe "hash_element/1" do
    test "hashes strings consistently" do
      assert is_integer(StringBoolOps.hash_element("foo"))
      assert StringBoolOps.hash_element("foo") == StringBoolOps.hash_element("foo")
      assert StringBoolOps.hash_element("foo") != StringBoolOps.hash_element("bar")
    end
  end

  describe "leaf operations" do
    test "empty_leaf/0 returns false" do
      assert StringBoolOps.empty_leaf() == false
    end

    test "any_leaf/0 returns true" do
      assert StringBoolOps.any_leaf() == true
    end

    test "is_empty_leaf?/1" do
      assert StringBoolOps.is_empty_leaf?(false) == true
      assert StringBoolOps.is_empty_leaf?(true) == false
    end

    test "union_leaves/3" do
      assert StringBoolOps.union_leaves(%{}, false, false) == false
      assert StringBoolOps.union_leaves(%{}, true, false) == true
      assert StringBoolOps.union_leaves(%{}, false, true) == true
      assert StringBoolOps.union_leaves(%{}, true, true) == true
    end

    test "intersection_leaves/3" do
      assert StringBoolOps.intersection_leaves(%{}, false, false) == false
      assert StringBoolOps.intersection_leaves(%{}, true, false) == false
      assert StringBoolOps.intersection_leaves(%{}, false, true) == false
      assert StringBoolOps.intersection_leaves(%{}, true, true) == true
    end

    test "negation_leaf/2" do
      assert StringBoolOps.negation_leaf(%{}, false) == true
      assert StringBoolOps.negation_leaf(%{}, true) == false
    end
  end

  describe "test_leaf_value/1" do
    test "returns :empty for false" do
      assert StringBoolOps.test_leaf_value(false) == :empty
    end

    test "returns :full for true" do
      assert StringBoolOps.test_leaf_value(true) == :full
    end
  end
end
@@ -1,163 +0,0 @@
defmodule Tilly.BDDTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.Node

  describe "init_bdd_store/1" do
    test "initializes bdd_store in typing_ctx with predefined false and true nodes" do
      typing_ctx = %{}
      new_ctx = Tilly.BDD.init_bdd_store(typing_ctx)

      assert %{bdd_store: bdd_store} = new_ctx
      assert is_map(bdd_store.nodes_by_structure)
      assert is_map(bdd_store.structures_by_id)
      assert bdd_store.next_node_id == 2 # 0 for false, 1 for true
      assert bdd_store.ops_cache == %{}

      # Check false node
      false_id = Tilly.BDD.false_node_id()
      false_ops_module = Tilly.BDD.universal_ops_module()
      assert bdd_store.nodes_by_structure[{Node.mk_false(), false_ops_module}] == false_id
      assert bdd_store.structures_by_id[false_id] == %{structure: Node.mk_false(), ops_module: false_ops_module}

      # Check true node
      true_id = Tilly.BDD.true_node_id()
      true_ops_module = Tilly.BDD.universal_ops_module()
      assert bdd_store.nodes_by_structure[{Node.mk_true(), true_ops_module}] == true_id
      assert bdd_store.structures_by_id[true_id] == %{structure: Node.mk_true(), ops_module: true_ops_module}
    end
  end

  describe "get_or_intern_node/3" do
    setup do
      typing_ctx = Tilly.BDD.init_bdd_store(%{})
      %{initial_ctx: typing_ctx}
    end

    test "interning Node.mk_false() returns predefined false_id and doesn't change store", %{initial_ctx: ctx} do
      false_ops_module = Tilly.BDD.universal_ops_module()
      {new_ctx, node_id} = Tilly.BDD.get_or_intern_node(ctx, Node.mk_false(), false_ops_module)
      assert node_id == Tilly.BDD.false_node_id()
      assert new_ctx.bdd_store == ctx.bdd_store
    end

    test "interning Node.mk_true() returns predefined true_id and doesn't change store", %{initial_ctx: ctx} do
      true_ops_module = Tilly.BDD.universal_ops_module()
      {new_ctx, node_id} = Tilly.BDD.get_or_intern_node(ctx, Node.mk_true(), true_ops_module)
      assert node_id == Tilly.BDD.true_node_id()
      assert new_ctx.bdd_store == ctx.bdd_store
    end

    test "interning a new leaf node returns a new ID and updates the store", %{initial_ctx: ctx} do
      leaf_structure = Node.mk_leaf("test_leaf")
      ops_mod = :my_ops

      {ctx_after_intern, node_id} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)

      assert node_id == 2 # initial next_node_id
      assert ctx_after_intern.bdd_store.next_node_id == 3
      assert ctx_after_intern.bdd_store.nodes_by_structure[{leaf_structure, ops_mod}] == node_id
      assert ctx_after_intern.bdd_store.structures_by_id[node_id] == %{structure: leaf_structure, ops_module: ops_mod}
    end

    test "interning the same leaf node again returns the same ID and doesn't change store", %{initial_ctx: ctx} do
      leaf_structure = Node.mk_leaf("test_leaf")
      ops_mod = :my_ops

      {ctx_after_first_intern, first_node_id} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
      {ctx_after_second_intern, second_node_id} = Tilly.BDD.get_or_intern_node(ctx_after_first_intern, leaf_structure, ops_mod)

      assert first_node_id == second_node_id
      assert ctx_after_first_intern.bdd_store == ctx_after_second_intern.bdd_store
    end

    test "interning a new split node returns a new ID and updates the store", %{initial_ctx: ctx} do
      split_structure = Node.mk_split(:el, Tilly.BDD.true_node_id(), Tilly.BDD.false_node_id(), Tilly.BDD.true_node_id())
      ops_mod = :split_ops

      {ctx_after_intern, node_id} = Tilly.BDD.get_or_intern_node(ctx, split_structure, ops_mod)

      assert node_id == 2 # initial next_node_id
      assert ctx_after_intern.bdd_store.next_node_id == 3
      assert ctx_after_intern.bdd_store.nodes_by_structure[{split_structure, ops_mod}] == node_id
      assert ctx_after_intern.bdd_store.structures_by_id[node_id] == %{structure: split_structure, ops_module: ops_mod}
    end

    test "interning structurally identical nodes with different ops_modules results in different IDs", %{initial_ctx: ctx} do
      leaf_structure = Node.mk_leaf("shared_leaf")
      ops_mod1 = :ops1
      ops_mod2 = :ops2

      {ctx1, id1} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod1)
      {_ctx2, id2} = Tilly.BDD.get_or_intern_node(ctx1, leaf_structure, ops_mod2)

      assert id1 != id2
      assert id1 == 2
      assert id2 == 3
    end

    test "raises ArgumentError if bdd_store is not initialized" do
      assert_raise ArgumentError, ~r/BDD store not initialized/, fn ->
        Tilly.BDD.get_or_intern_node(%{}, Node.mk_leaf("foo"), :ops)
      end
    end
  end

  describe "get_node_data/2" do
    setup do
      ctx = Tilly.BDD.init_bdd_store(%{})
      leaf_structure = Node.mk_leaf("data")
      ops_mod = :leaf_ops
      {new_ctx, leaf_id_val} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
      %{ctx: new_ctx, leaf_structure: leaf_structure, ops_mod: ops_mod, leaf_id: leaf_id_val}
    end

    test "returns correct data for false node", %{ctx: ctx} do
      false_id = Tilly.BDD.false_node_id()
      false_ops_module = Tilly.BDD.universal_ops_module()
      assert Tilly.BDD.get_node_data(ctx, false_id) == %{structure: Node.mk_false(), ops_module: false_ops_module}
    end

    test "returns correct data for true node", %{ctx: ctx} do
      true_id = Tilly.BDD.true_node_id()
      true_ops_module = Tilly.BDD.universal_ops_module()
      assert Tilly.BDD.get_node_data(ctx, true_id) == %{structure: Node.mk_true(), ops_module: true_ops_module}
    end

    test "returns correct data for a custom interned leaf node", %{ctx: ctx, leaf_structure: ls, ops_mod: om, leaf_id: id} do
      assert Tilly.BDD.get_node_data(ctx, id) == %{structure: ls, ops_module: om}
    end

    test "returns nil for an unknown node ID", %{ctx: ctx} do
      assert Tilly.BDD.get_node_data(ctx, 999) == nil
    end

    test "returns nil if bdd_store not in ctx" do
      assert Tilly.BDD.get_node_data(%{}, 0) == nil
    end
  end

  describe "is_false_node?/2 and is_true_node?/2" do
    setup do
      ctx = Tilly.BDD.init_bdd_store(%{})
      leaf_structure = Node.mk_leaf("data")
      ops_mod = :leaf_ops
      {new_ctx, leaf_id_val} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
      %{ctx: new_ctx, leaf_id: leaf_id_val}
    end

    test "is_false_node?/2", %{ctx: ctx, leaf_id: id} do
      assert Tilly.BDD.is_false_node?(ctx, Tilly.BDD.false_node_id()) == true
      assert Tilly.BDD.is_false_node?(ctx, Tilly.BDD.true_node_id()) == false
      assert Tilly.BDD.is_false_node?(ctx, id) == false
      assert Tilly.BDD.is_false_node?(ctx, 999) == false # unknown ID
    end

    test "is_true_node?/2", %{ctx: ctx, leaf_id: id} do
      assert Tilly.BDD.is_true_node?(ctx, Tilly.BDD.true_node_id()) == true
      assert Tilly.BDD.is_true_node?(ctx, Tilly.BDD.false_node_id()) == false
      assert Tilly.BDD.is_true_node?(ctx, id) == false
      assert Tilly.BDD.is_true_node?(ctx, 999) == false # unknown ID
    end
  end
end
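The hash-consing behavior these tests assert can be sketched as follows. This is a hypothetical reconstruction of `get_or_intern_node/3` consistent with the assertions above, not the actual implementation:

```elixir
def get_or_intern_node(%{bdd_store: store} = ctx, structure, ops_module) do
  key = {structure, ops_module}

  case store.nodes_by_structure do
    # Already interned (including the predefined true/false nodes):
    # return the existing ID and leave the store untouched.
    %{^key => id} ->
      {ctx, id}

    _ ->
      id = store.next_node_id

      store = %{
        store
        | next_node_id: id + 1,
          nodes_by_structure: Map.put(store.nodes_by_structure, key, id),
          structures_by_id:
            Map.put(store.structures_by_id, id, %{structure: structure, ops_module: ops_module})
      }

      {%{ctx | bdd_store: store}, id}
  end
end

def get_or_intern_node(_ctx, _structure, _ops_module) do
  # No :bdd_store key in the typing context.
  raise ArgumentError, "BDD store not initialized"
end
```

Note how the `{structure, ops_module}` composite key makes structurally identical nodes under different ops_modules distinct, matching the test above.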
@@ -1,162 +0,0 @@
defmodule Tilly.Type.OpsTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD
  alias Tilly.Type.Store
  alias Tilly.Type.Ops

  defp init_context do
    %{}
    |> BDD.init_bdd_store()
    |> Store.init_type_store()
  end

  describe "get_type_nothing/1 and get_type_any/1" do
    test "get_type_nothing returns an interned Descr ID for the empty type" do
      ctx = init_context()
      {ctx_after_nothing, nothing_id} = Ops.get_type_nothing(ctx)
      assert Ops.is_empty_type?(ctx_after_nothing, nothing_id)
    end

    test "get_type_any returns an interned Descr ID for the universal type" do
      ctx = init_context()
      {ctx_after_any, any_id} = Ops.get_type_any(ctx)
      refute Ops.is_empty_type?(ctx_after_any, any_id)

      # Further check: the any type negated should be the nothing type
      {ctx1, neg_any_id} = Ops.negation_type(ctx_after_any, any_id)
      {_ctx2, nothing_id} = Ops.get_type_nothing(ctx1)
      assert neg_any_id == nothing_id
    end
  end

  describe "literal type constructors" do
    test "create_atom_literal_type/2" do
      ctx = init_context()
      {ctx1, atom_foo_id} = Ops.create_atom_literal_type(ctx, :foo)
      {ctx2, atom_bar_id} = Ops.create_atom_literal_type(ctx1, :bar)
      {ctx3, atom_foo_again_id} = Ops.create_atom_literal_type(ctx2, :foo)

      refute Ops.is_empty_type?(ctx3, atom_foo_id)
      refute Ops.is_empty_type?(ctx3, atom_bar_id)
      assert atom_foo_id != atom_bar_id
      assert atom_foo_id == atom_foo_again_id

      # Intersection: (:foo & :bar) should be Nothing
      {ctx4, intersection_id} = Ops.intersection_types(ctx3, atom_foo_id, atom_bar_id)
      assert Ops.is_empty_type?(ctx4, intersection_id)

      # Union: (:foo | :bar) should not be empty
      {ctx5, union_id} = Ops.union_types(ctx4, atom_foo_id, atom_bar_id)
      refute Ops.is_empty_type?(ctx5, union_id)

      # Negation: (not :foo) should not be empty, and (:foo & not :foo) should be
      {ctx6, not_foo_id} = Ops.negation_type(ctx5, atom_foo_id)
      refute Ops.is_empty_type?(ctx6, not_foo_id)
      {ctx7, intersection_not_foo_and_foo} = Ops.intersection_types(ctx6, atom_foo_id, not_foo_id)
      assert Ops.is_empty_type?(ctx7, intersection_not_foo_and_foo)
    end

    test "create_integer_literal_type/2" do
      ctx = init_context()
      {ctx1, int_1_id} = Ops.create_integer_literal_type(ctx, 1)
      {ctx2, int_2_id} = Ops.create_integer_literal_type(ctx1, 2)

      refute Ops.is_empty_type?(ctx2, int_1_id)
      {ctx3, intersection_id} = Ops.intersection_types(ctx2, int_1_id, int_2_id)
      assert Ops.is_empty_type?(ctx3, intersection_id)
    end

    test "create_string_literal_type/2" do
      ctx = init_context()
      {ctx1, str_a_id} = Ops.create_string_literal_type(ctx, "a")
      {ctx2, str_b_id} = Ops.create_string_literal_type(ctx1, "b")

      refute Ops.is_empty_type?(ctx2, str_a_id)
      {ctx3, intersection_id} = Ops.intersection_types(ctx2, str_a_id, str_b_id)
      assert Ops.is_empty_type?(ctx3, intersection_id)
    end
  end

  describe "primitive type constructors (any_of_kind)" do
    test "get_primitive_type_any_atom/1" do
      ctx = init_context()
      {ctx1, any_atom_id} = Ops.get_primitive_type_any_atom(ctx)
      {ctx2, atom_foo_id} = Ops.create_atom_literal_type(ctx1, :foo)

      refute Ops.is_empty_type?(ctx2, any_atom_id)
      # :foo should be a subtype of AnyAtom. Equivalently:
      #   :foo INTERSECTION (NEGATION AnyAtom) == Empty
      #   :foo UNION AnyAtom == AnyAtom
      #   :foo INTERSECTION AnyAtom == :foo
      {ctx3, intersection_foo_any_atom_id} = Ops.intersection_types(ctx2, atom_foo_id, any_atom_id)
      assert intersection_foo_any_atom_id == atom_foo_id # check it simplifies to :foo

      # Subtype check via negation: (:foo & (not AnyAtom)) == Empty
      {ctx4, not_any_atom_id} = Ops.negation_type(ctx3, any_atom_id)
      {ctx5, intersection_subtype_check_id} = Ops.intersection_types(ctx4, atom_foo_id, not_any_atom_id)
      assert Ops.is_empty_type?(ctx5, intersection_subtype_check_id)

      # AnyAtom & AnyInteger should be Empty
      {ctx6, any_integer_id} = Ops.get_primitive_type_any_integer(ctx5)
      {ctx7, atom_int_intersect_id} = Ops.intersection_types(ctx6, any_atom_id, any_integer_id)
      assert Ops.is_empty_type?(ctx7, atom_int_intersect_id)
    end
  end

  describe "union_types, intersection_types, negation_type" do
    test "basic set properties" do
      ctx0 = init_context()
      {ctx1, type_a_id} = Ops.create_atom_literal_type(ctx0, :a)
      {ctx2, type_b_id} = Ops.create_atom_literal_type(ctx1, :b)
      {ctx3, type_c_id} = Ops.create_atom_literal_type(ctx2, :c)
      {ctx4, nothing_id} = Ops.get_type_nothing(ctx3)

      # A | Nothing = A
      {ctx5, union_a_nothing_id} = Ops.union_types(ctx4, type_a_id, nothing_id)
      assert union_a_nothing_id == type_a_id

      # A & Nothing = Nothing
      {ctx6, intersect_a_nothing_id} = Ops.intersection_types(ctx5, type_a_id, nothing_id)
      assert intersect_a_nothing_id == nothing_id

      # not (not A) = A
      {ctx7, not_a_id} = Ops.negation_type(ctx6, type_a_id)
      {ctx8, not_not_a_id} = Ops.negation_type(ctx7, not_a_id)
      assert not_not_a_id == type_a_id

      # A | B
      {ctx9, union_ab_id} = Ops.union_types(ctx8, type_a_id, type_b_id)
      # (A | B) & A = A
      {ctx10, intersect_union_a_id} = Ops.intersection_types(ctx9, union_ab_id, type_a_id)
      assert intersect_union_a_id == type_a_id

      # (A | B) & C = Nothing (A, B, C are distinct atom literals)
      {ctx11, intersect_union_c_id} = Ops.intersection_types(ctx10, union_ab_id, type_c_id)
      assert Ops.is_empty_type?(ctx11, intersect_union_c_id)

      # Commutativity and idempotence of union/intersection are implicitly tested
      # by caching and canonical key generation in apply_type_op.
    end

    test "type operations are cached" do
      ctx0 = init_context()
      {ctx1, type_a_id} = Ops.create_atom_literal_type(ctx0, :a)
      {ctx2, type_b_id} = Ops.create_atom_literal_type(ctx1, :b)

      # Perform an operation
      {ctx3, union1_id} = Ops.union_types(ctx2, type_a_id, type_b_id)
      initial_cache_size = map_size(ctx3.type_store.ops_cache)
      assert initial_cache_size > 0 # ensure something was cached

      # Perform the same operation again
      {ctx4, union2_id} = Ops.union_types(ctx3, type_a_id, type_b_id)
      assert union1_id == union2_id
      assert map_size(ctx4.type_store.ops_cache) == initial_cache_size # cache size unchanged

      # Perform with swapped arguments (commutative)
      {ctx5, union3_id} = Ops.union_types(ctx4, type_b_id, type_a_id)
      assert union1_id == union3_id
      assert map_size(ctx5.type_store.ops_cache) == initial_cache_size # cache size unchanged
    end
  end
end
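The canonical-key caching that the last test relies on (a swapped-argument union hitting the same cache entry) can be sketched like this. `apply_type_op` and the cache layout are assumptions inferred from the test comments, not the actual implementation:

```elixir
# Sorting the two operand IDs before building the cache key makes
# union/intersection cache hits commutative: op(a, b) and op(b, a)
# resolve to the same entry.
defp apply_type_op(ctx, op, id_a, id_b, compute_fun) when op in [:union, :intersection] do
  key = {op, min(id_a, id_b), max(id_a, id_b)}

  case ctx.type_store.ops_cache do
    %{^key => result_id} ->
      # Cache hit: no new Descrs, no cache growth.
      {ctx, result_id}

    _ ->
      {ctx, result_id} = compute_fun.(ctx, id_a, id_b)
      cache = Map.put(ctx.type_store.ops_cache, key, result_id)
      {put_in(ctx, [:type_store, :ops_cache], cache), result_id}
  end
end
```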
@@ -1,67 +0,0 @@
defmodule Tilly.Type.StoreTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD
  alias Tilly.Type
  alias Tilly.Type.Store

  defp init_context do
    %{}
    |> BDD.init_bdd_store()
    |> Store.init_type_store()
  end

  describe "init_type_store/1" do
    test "initializes an empty type store in the typing_ctx" do
      typing_ctx = %{}
      new_ctx = Store.init_type_store(typing_ctx)
      type_store = Map.get(new_ctx, :type_store)

      assert type_store.descrs_by_structure == %{}
      assert type_store.structures_by_id == %{}
      assert type_store.next_descr_id == 0
    end
  end

  describe "get_or_intern_descr/2 and get_descr_by_id/2" do
    test "interns a new Descr map and retrieves it" do
      typing_ctx = init_context()
      descr_map1 = Type.empty_descr(typing_ctx) # uses canonical BDD.false_node_id()

      # Intern for the first time
      {ctx1, id1} = Store.get_or_intern_descr(typing_ctx, descr_map1)
      assert id1 == 0
      assert Store.get_descr_by_id(ctx1, id1) == descr_map1
      assert ctx1.type_store.next_descr_id == 1

      # Retrieve the existing entry
      {ctx2, id1_retrieved} = Store.get_or_intern_descr(ctx1, descr_map1)
      assert id1_retrieved == id1
      assert ctx2 == ctx1 # context should not change if already interned

      # Intern a different Descr map
      descr_map2 = Type.any_descr(typing_ctx) # uses canonical BDD.true_node_id()
      {ctx3, id2} = Store.get_or_intern_descr(ctx2, descr_map2)
      assert id2 == 1
      assert Store.get_descr_by_id(ctx3, id2) == descr_map2
      assert ctx3.type_store.next_descr_id == 2

      # Ensure the original is still retrievable
      assert Store.get_descr_by_id(ctx3, id1) == descr_map1
    end

    test "get_descr_by_id returns nil for a non-existent ID" do
      typing_ctx = init_context()
      assert Store.get_descr_by_id(typing_ctx, 999) == nil
    end

    test "raises an error if the type store is not initialized" do
      uninitialized_ctx = %{}
      descr_map = Type.empty_descr(uninitialized_ctx)

      assert_raise ArgumentError,
                   "Type store not initialized in typing_ctx. Call init_type_store first.",
                   fn -> Store.get_or_intern_descr(uninitialized_ctx, descr_map) end
    end
  end
end
@@ -1,39 +0,0 @@
defmodule Tilly.TypeTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD
  alias Tilly.Type

  describe "empty_descr/1" do
    test "returns a Descr map with all BDD IDs pointing to false" do
      typing_ctx = BDD.init_bdd_store(%{})
      descr = Type.empty_descr(typing_ctx)
      false_id = BDD.false_node_id()

      assert descr.atoms_bdd_id == false_id
      assert descr.integers_bdd_id == false_id
      assert descr.strings_bdd_id == false_id
      assert descr.pairs_bdd_id == false_id
      assert descr.records_bdd_id == false_id
      assert descr.functions_bdd_id == false_id
      assert descr.absent_marker_bdd_id == false_id
    end
  end

  describe "any_descr/1" do
    test "returns a Descr map with BDD IDs pointing to true (and absent_marker to false)" do
      typing_ctx = BDD.init_bdd_store(%{})
      descr = Type.any_descr(typing_ctx)
      true_id = BDD.true_node_id()
      false_id = BDD.false_node_id()

      assert descr.atoms_bdd_id == true_id
      assert descr.integers_bdd_id == true_id
      assert descr.strings_bdd_id == true_id
      assert descr.pairs_bdd_id == true_id
      assert descr.records_bdd_id == true_id
      assert descr.functions_bdd_id == true_id
      assert descr.absent_marker_bdd_id == false_id
    end
  end
end
55 todo.md
@@ -1,55 +0,0 @@
1. **Implement Parsing for `(union <type1> <type2> ...)` Type Specifiers:**
    * Modify `Til.Typer.ExpressionTyper.resolve_type_specifier_node` to recognize and parse S-expressions like `(union integer string)`.
    * This will involve recursively resolving the inner type specifiers and constructing a raw union type definition. The existing interning and subtyping logic for unions can then be leveraged.
    * Add tests for type checking expressions annotated with these explicit union types, e.g., `(the (union integer string) some-expression)`.

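The recursive resolution described above could look roughly like this. The clause shape and AST tuple format are hypothetical; only `Ops.get_type_nothing/1` and `Ops.union_types/3` are taken from the existing code:

```elixir
defp resolve_type_specifier_node(ctx, {:s_expr, [:union | member_nodes]}) do
  # Recursively resolve each member specifier to a type ID.
  {ctx, member_ids} =
    Enum.reduce(member_nodes, {ctx, []}, fn node, {ctx, acc} ->
      {ctx, id} = resolve_type_specifier_node(ctx, node)
      {ctx, [id | acc]}
    end)

  # Fold the members into a single union, starting from the empty type
  # (Nothing is the identity element of union).
  {ctx, nothing_id} = Ops.get_type_nothing(ctx)

  Enum.reduce(member_ids, {ctx, nothing_id}, fn id, {ctx, acc_id} ->
    Ops.union_types(ctx, acc_id, id)
  end)
end
```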
3. **Implement Parsing for Basic Function Type Specifiers:**
    * Modify `Til.Typer.ExpressionTyper.resolve_type_specifier_node` to parse function type specifiers, e.g., `(function (Arg1Type Arg2Type ...) ReturnType)`.
    * Add interning support for function types in `Til.Typer.Interner`.
    * Implement basic subtyping rules for function types in `Til.Typer.SubtypeChecker` (initially, arity checking; then contravariant arguments, covariant return).

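The variance rules in the last bullet can be sketched as a small structural check. The map-shaped function type and the `:any` top type are illustrative assumptions, not `Til.Typer.SubtypeChecker`'s real representation:

```elixir
defmodule FnSubtype do
  # subtype?(s, t): is s a subtype of t?
  def subtype?(t, t), do: true
  def subtype?(_, :any), do: true

  def subtype?(%{args: a1, ret: r1}, %{args: a2, ret: r2}) do
    # Arity check first, then:
    # - arguments are contravariant: each t's arg must be a subtype of s's arg
    # - return is covariant: s's return must be a subtype of t's return
    length(a1) == length(a2) and
      Enum.zip(a2, a1) |> Enum.all?(fn {t_arg, s_arg} -> subtype?(t_arg, s_arg) end) and
      subtype?(r1, r2)
  end

  def subtype?(_, _), do: false
end
```

So `(function (any) integer)` is a subtype of `(function (integer) any)`, but not the other way around.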
4. **Implement Basic Function Definition (e.g., `def` or `lambda`):**
    * Define syntax (e.g., `(def my-fn (param1 param2) body-expr)`).
    * Add parser support for this syntax.
    * Typer:
        * For this initial step, infer a basic function type (e.g., based on arity, with `any` for parameter and return types if not annotated).
        * Add the function name and its inferred type to the environment.

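The "Typer" step above might look like this minimal sketch, assuming a hypothetical list-based AST and a plain map as the type environment:

```elixir
defmodule DefTyper do
  # [:def, name, params, body] mirrors (def my-fn (p1 p2) body-expr).
  # Without annotations we only know the arity, so every parameter and the
  # return default to :any.
  def type_def([:def, name, params, _body], env) when is_list(params) do
    fn_type = %{kind: :function, args: List.duplicate(:any, length(params)), ret: :any}
    # Bind the name so later expressions (and calls) can look it up.
    {fn_type, Map.put(env, name, fn_type)}
  end
end
```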
5. **Implement Basic Function Calls:**
    * Extend `Til.Typer.ExpressionTyper.infer_s_expression_type` for function calls:
        * When the operator of an S-expression is a symbol, look up its type in the environment.
        * If it's a function type (from step 4), perform an arity check against the provided arguments.
        * The inferred type of the call is the function's (currently basic) return type.

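The lookup, arity check, and return-type steps can be sketched as follows; the environment and function-type shapes are the same hypothetical ones as in the previous sketch:

```elixir
defmodule CallTyper do
  # [op | args] mirrors an S-expression call like (inc 1).
  def infer_call([op | args], env) when is_atom(op) do
    case Map.fetch(env, op) do
      {:ok, %{kind: :function, args: params, ret: ret}} ->
        # Arity check; the call's type is the function's return type.
        if length(args) == length(params),
          do: {:ok, ret},
          else: {:error, {:arity_mismatch, length(params), length(args)}}

      {:ok, _other_type} ->
        {:error, {:not_a_function, op}}

      :error ->
        {:error, {:unbound, op}}
    end
  end
end
```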
6. **Enhance Function Definitions with Type Annotations:**
    * Extend the function definition syntax to support type annotations for parameters and return types (e.g., `(def my-fn ((p1 P1Type) (p2 P2Type)) :: ReturnType body-expr)`).
    * Update the parser for this extended syntax.
    * Typer:
        * Use these annotations to construct a more precise function type.
        * When typing the function body, use the annotated parameter types in the local environment.
        * Verify that the inferred type of the function body is a subtype of the annotated return type.
    * Update function call typing (from step 5) to use these precise function types for argument type checking and to determine the call's return type.

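A sketch of the body-against-annotation check. The real body inference and subtype check are stubbed out as injected functions, and the list-based AST shape is a hypothetical rendering of the syntax above:

```elixir
defmodule AnnotatedDefTyper do
  # [:def, name, [[p, ty], ...], :"::", ret_ty, body] mirrors
  # (def my-fn ((p1 P1Type)) :: ReturnType body-expr).
  def check([:def, _name, params, :"::", ret_ty, body], infer, subtype?) do
    # Type the body with the annotated parameter types in scope.
    param_env = Map.new(params, fn [p, ty] -> {p, ty} end)
    body_ty = infer.(body, param_env)

    if subtype?.(body_ty, ret_ty) do
      {:ok, %{kind: :function, args: Enum.map(params, fn [_p, ty] -> ty end), ret: ret_ty}}
    else
      {:error, {:return_type_mismatch, body_ty, ret_ty}}
    end
  end
end
```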
7. **Implement Type Inference for Core Map Operation: `(map-get map key)`:**
    * Define the S-expression syntax `(map-get map-expr key-expr)`.
    * In `Til.Typer.ExpressionTyper`, implement type inference rules for `map-get` based on the logic outlined in `todo.md`. This includes:
        * Typing `map-expr` and `key-expr`.
        * Handling cases where `key-expr`'s type is a literal (allowing lookup in `known_elements`).
        * Handling cases where `key-expr`'s type is a general type (using `index_signature` and potentially unioning types from `known_elements`).

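The two lookup cases (literal key vs. general key type) can be sketched like this. The `known_elements` / `index_signature` map shape is an assumption about Til's map types, inferred from the bullets above:

```elixir
defmodule MapGetTyper do
  # map_ty: %{known_elements: %{literal_key => value_type},
  #           index_signature: {key_type, value_type} | nil}

  # Literal key: look it up in known_elements, else fall back to the signature.
  def infer(map_ty, {:literal, key}) do
    case Map.fetch(map_ty.known_elements, key) do
      {:ok, value_ty} -> value_ty
      :error -> fallback(map_ty)
    end
  end

  # General key type: union everything the key could reach.
  def infer(map_ty, _general_key_ty) do
    known = Map.values(map_ty.known_elements)

    case {known, map_ty.index_signature} do
      {[], nil} -> :none
      {vs, nil} -> union(vs)
      {vs, {_k, v}} -> union([v | vs])
    end
  end

  defp fallback(%{index_signature: nil}), do: :none
  defp fallback(%{index_signature: {_k, v}}), do: v

  defp union(ts) do
    case Enum.uniq(ts) do
      [t] -> t
      us -> {:union, us}
    end
  end
end
```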
8. **Improve User-Facing Type Error Messages:**
    * For common errors like `type_annotation_mismatch` or function call argument mismatches, enhance the error reporting.
    * Develop a utility to pretty-print type definitions (from their internal map representation or ID) for inclusion in error messages, making them more readable than raw type IDs or structures.
    * Ensure source locations (file, line, column) are clearly associated with type errors.

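A minimal pretty-printer in the spirit of the second bullet, covering a few of the assumed type shapes from the earlier sketches and falling back to `inspect/1` for anything unknown:

```elixir
defmodule TypePrinter do
  # Render internal type representations back into the surface S-expression
  # syntax, so errors read "(union integer string)" rather than a raw map.
  def pp(:integer), do: "integer"
  def pp(:string), do: "string"
  def pp(:any), do: "any"
  def pp({:union, ts}), do: "(union " <> Enum.map_join(ts, " ", &pp/1) <> ")"

  def pp(%{kind: :function, args: args, ret: ret}),
    do: "(function (" <> Enum.map_join(args, " ", &pp/1) <> ") " <> pp(ret) <> ")"

  # Last resort: still show something rather than crash error reporting.
  def pp(other), do: inspect(other)
end
```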
9. **Implement Parsing for `(intersection <type1> <type2> ...)` Type Specifiers:**
    * Similar to union types, update `Til.Typer.ExpressionTyper.resolve_type_specifier_node` for intersection type S-expressions.
    * Add interning and subtyping rules for intersection types in `Til.Typer.Interner` and `Til.Typer.SubtypeChecker`.

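A rough sketch of subtyping decomposition for intersections on either side of the check, over a primitive relation passed in as a function. Note the left-hand rule (some member is a subtype) is only a sound approximation, not a complete set-theoretic check:

```elixir
defmodule InterSubtype do
  # s <: (intersection t1 t2 ...) iff s <: every ti.
  def subtype?(s, {:inter, ts}, prim), do: Enum.all?(ts, &subtype?(s, &1, prim))

  # (intersection s1 s2 ...) <: t if some si <: t (sufficient, not necessary).
  def subtype?({:inter, ss}, t, prim), do: Enum.any?(ss, &subtype?(&1, t, prim))

  # Everything else is delegated to the primitive relation.
  def subtype?(s, t, prim), do: prim.(s, t)
end
```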
10. **Implement Simple Type Aliases (e.g., `deftype`):**
    * Define syntax for non-generic type aliases (e.g., `(deftype PositiveInteger (refinement integer ...))` or `(deftype UserMap (map atom any))`).
    * Add parser support.
    * Typer:
        * Store these alias definitions.
        * Modify `Til.Typer.ExpressionTyper.resolve_type_specifier_node` to recognize and expand these aliases when they are used in type annotations.

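Alias storage and expansion can be sketched as below, over the same hypothetical list-based specifiers as earlier. Non-recursive aliases are assumed so that expansion terminates:

```elixir
defmodule Aliases do
  # (deftype UserMap (map atom any)) stores the raw specifier under the name.
  def define(env, [:deftype, name, spec]), do: Map.put(env, name, spec)

  # A bare symbol is either an alias (expand it) or a primitive (keep it).
  def resolve(env, name) when is_atom(name) do
    case Map.fetch(env, name) do
      {:ok, spec} -> resolve(env, spec)
      :error -> name
    end
  end

  # Compound specifiers: keep the head operator, expand the operands.
  def resolve(env, [head | rest]), do: [head | Enum.map(rest, &resolve(env, &1))]
end
```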