Compare commits

...

42 Commits

Author SHA1 Message Date
Kacper Marzecki
dc34827847 deps 2025-07-15 00:59:30 +02:00
Kacper Marzecki
9a89757bcd fix test
spec = {:mu, :X, {:union, [{:type_var, :X}, :integer]}}
assert_equivalent_specs(spec, :integer)
2025-07-15 00:59:22 +02:00
Kacper Marzecki
9c6b3998c2 fixed one test 3 to go 2025-07-14 23:04:45 +02:00
Kacper Marzecki
b5a731edad one less test failing 2025-07-14 22:30:53 +02:00
Kacper Marzecki
2854c4559c added more FAILING tests 2025-07-12 13:08:16 +02:00
Kacper Marzecki
3ea8d80801 checkpoint, some test pass 2025-07-12 13:08:07 +02:00
Kacper Marzecki
c51c35bfe4 debug 2025-07-12 12:54:02 +02:00
Kacper Marzecki
214b4b06ac cleanup comments 2025-07-12 12:29:31 +02:00
Kacper Marzecki
61e95be622 checkpoint 2025-07-12 12:23:14 +02:00
Kacper Marzecki
e5485995ed checkpoint 2025-07-12 01:04:06 +02:00
Kacper Marzecki
c2c7438d32 switch to mix project for better tests 2025-07-11 21:45:31 +02:00
Kacper Marzecki
976f8250e3 asdasdasdasd 2025-07-11 21:00:14 +02:00
Kacper Marzecki
d7f1a7a141 checkpoint dangerzone 1 2025-06-30 00:34:51 +02:00
Kacper Marzecki
90b063eb30 WTD AM I DOING, REVERT TO THE PREVIOUS COMMIT IF SMTH GOES WRONG 2025-06-29 23:47:12 +02:00
Kacper Marzecki
5972da6ecc checkpoint idk 2025-06-29 20:26:26 +02:00
Kacper Marzecki
324735710b checkpoint sleep 2025-06-24 01:06:40 +02:00
Kacper Marzecki
439747daf0 dirty hack fix 2025-06-24 00:43:41 +02:00
Kacper Marzecki
552ef6ce3b checkpoint better rec types 2025-06-23 23:50:30 +02:00
Kacper Marzecki
07d92aed97 cleanup 2025-06-23 19:26:04 +02:00
Kacper Marzecki
6c243439a9 trace 2025-06-23 17:34:51 +02:00
Kacper Marzecki
a42c981fa0 checkpoint but debug is still kinda fucked 2025-06-21 15:16:16 +02:00
Kacper Marzecki
8a6f84238e kinda works 2025-06-20 17:39:23 +02:00
Kacper Marzecki
bb7187b0c7 some progress, still fucked 2025-06-20 16:43:19 +02:00
Kacper Marzecki
3c7edc67da fuckd 2025-06-20 16:19:02 +02:00
Kacper Marzecki
8d8b3607fc checkpoint, still fucked debugging 2025-06-20 15:52:18 +02:00
Kacper Marzecki
a78fe0541a wip debug 2025-06-20 10:28:53 +02:00
Kacper Marzecki
f1243084c7 checkpoint docs 2025-06-19 18:42:34 +02:00
Kacper Marzecki
cb8dddc326 chat 2025-06-19 13:17:18 +02:00
Kacper Marzecki
646afaad64 fixpoint discussion 2025-06-19 02:35:42 +02:00
Kacper Marzecki
658332ace1 some progress with recursive checks 2025-06-19 02:34:34 +02:00
Kacper Marzecki
bcddae26cb checkpoint before next attempt at recursive types - noice debugging here 2025-06-19 01:49:08 +02:00
Kacper Marzecki
ff557cb221 checkpoint 2025-06-19 01:03:09 +02:00
Kacper Marzecki
176a4c6a04 maybe?? revert if not recursive works 2025-06-18 21:14:22 +02:00
Kacper Marzecki
7fa469518d consistency fixed 2025-06-18 16:13:29 +02:00
Kacper Marzecki
5736c30fa2 fix consistency checker 2025-06-18 16:01:50 +02:00
Kacper Marzecki
36f63ed355 checkpoint before fixing consistency tests 2025-06-18 15:56:12 +02:00
Kacper Marzecki
130d369734 checkpoint 2025-06-18 15:44:49 +02:00
Kacper Marzecki
a5e5127bcd reconstructor & normalization 2025-06-18 15:10:58 +02:00
Kacper Marzecki
9f6cd2814a checkpoint 2025-06-18 15:00:36 +02:00
Kacper Marzecki
0880bb338f checkpoint new implementation 2025-06-18 14:29:40 +02:00
Kacper Marzecki
967eb93aa8 checkpoint 2025-06-17 19:48:50 +02:00
Kacper Marzecki
8b6171283d checkpoint 2025-06-17 12:56:04 +02:00
38 changed files with 11081 additions and 4474 deletions


@ -57438,3 +57438,141 @@ defmodule TupleTestHelpers do
> Tokens: 25k sent, 457 received. Cost: $0.04 message, $0.08 session.
> Applied edit to test.exs
#### /clear
#### /ask I've got a problem with debugging this thing. In some test cases I need to access a detailed trace of whats going on. Most of the time I don't care, because they pass. We need to add detailed logs which i can enable only for a select test case via a Process switch. Lets design it
Okay, let's design a conditional logging mechanism for detailed tracing.
**Design Proposal: Conditional Debug Tracing**
We'll introduce a tracing facility within the existing `Tdd.Debug` module. This facility will be controlled by a flag in the process dictionary, allowing tests to enable detailed logging on demand.
**1. Control Mechanism (in `Tdd.Debug` module):**
* **Process Flag:** A key, for example, `@trace_flag :tdd_debug_trace_enabled`, will be used with `Process.put/2` and `Process.get/2`.
* **Control Functions:**
* `enable_trace/0`: Sets the process flag to `true`.
```elixir
# In Tdd.Debug
@trace_flag :tdd_debug_trace_enabled
def enable_trace, do: Process.put(@trace_flag, true)
```
* `disable_trace/0`: Sets the process flag to `false` (or removes it).
```elixir
# In Tdd.Debug
def disable_trace, do: Process.put(@trace_flag, false)
```
* `trace_enabled?/0`: Checks the state of the flag. Note that it must be public: the macros below expand in the caller's module and invoke it as `Tdd.Debug.trace_enabled?()`, so a `defp` would not be reachable from there.
```elixir
# In Tdd.Debug
def trace_enabled?, do: Process.get(@trace_flag, false) == true
```
**2. Logging Macros (in `Tdd.Debug` module):**
To ensure that potentially expensive operations (like data inspection or string formatting) are only performed when tracing is active, the macros will accept zero-arity functions that produce the content to be logged.
* `dlog_msg(message_producer_fun)`: For logging simple string messages.
* `message_producer_fun`: A `fn -> "message string" end`.
* *Behavior*: If tracing is enabled, it executes `message_producer_fun.()` and prints the result using `IO.puts/1`.
```elixir
# In Tdd.Debug
defmacro dlog_msg(message_producer_fun) do
quote do
if Tdd.Debug.trace_enabled?() do
IO.puts(unquote(message_producer_fun).())
end
end
end
```
* `dlog(label, data_producer_fun)`: For inspecting data with a label.
* `label`: An atom or string for `IO.inspect/2`.
* `data_producer_fun`: A `fn -> data_to_inspect end`.
* *Behavior*: If tracing is enabled, it executes `data_producer_fun.()` and inspects the result using `IO.inspect(result, label: label)`.
```elixir
# In Tdd.Debug
defmacro dlog(label, data_producer_fun) do
quote do
if Tdd.Debug.trace_enabled?() do
IO.inspect(unquote(data_producer_fun).(), label: unquote(label))
end
end
end
```
* `dlog_action(action_producer_fun)`: For executing an action (like calling `Tdd.Debug.print/1`) only when tracing is enabled.
* `action_producer_fun`: A `fn -> perform_io_action() end`.
* *Behavior*: If tracing is enabled, it executes `action_producer_fun.()`.
```elixir
# In Tdd.Debug
defmacro dlog_action(action_producer_fun) do
  quote do
    if Tdd.Debug.trace_enabled?() do
      unquote(action_producer_fun).()
    end
    :ok # Consistent return value whether or not tracing is enabled
  end
end
```
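As a quick sanity check of the laziness claim, here is a minimal standalone sketch of the same idea using plain functions with a producer callback (not the macros above, so it can run as a script; `LazyLogSketch` is an illustrative name, not part of the codebase):

```elixir
# Minimal sketch of lazy logging gated by a process-dictionary flag.
# The producer fun is only invoked when the flag is set, so expensive
# formatting is skipped entirely when tracing is off.
defmodule LazyLogSketch do
  @trace_flag :tdd_debug_trace_enabled

  def enable_trace, do: Process.put(@trace_flag, true)
  def disable_trace, do: Process.put(@trace_flag, false)
  def trace_enabled?, do: Process.get(@trace_flag, false) == true

  # Returns {:logged, msg} or :skipped so the effect is observable.
  def dlog_msg(message_producer_fun) do
    if trace_enabled?() do
      msg = message_producer_fun.()
      IO.puts(msg)
      {:logged, msg}
    else
      :skipped
    end
  end
end

# With tracing off, the (potentially expensive) producer is never called:
:skipped = LazyLogSketch.dlog_msg(fn -> raise "should not run" end)

LazyLogSketch.enable_trace()
{:logged, "hello"} = LazyLogSketch.dlog_msg(fn -> "hello" end)
LazyLogSketch.disable_trace()
```

The raising producer in the first call demonstrates why the macros take zero-arity functions rather than already-built strings.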
**3. Usage Example:**
* **In a specific test case:**
```elixir
defmodule MyProblematicTest do
use ExUnit.Case
alias Tdd.Debug
setup do
Debug.enable_trace()
on_exit(fn -> Debug.disable_trace() end) # Ensure cleanup
:ok
end
test "scenario requiring detailed trace" do
# ... test code that calls into TDD modules ...
assert true # placeholder
end
end
```
* **In core logic (e.g., `Tdd.Compiler.loop_until_stable/3`):**
The existing `IO.puts` and `IO.inspect` calls can be converted.
```elixir
# In Tdd.Compiler, after aliasing or requiring Tdd.Debug
# alias Tdd.Debug
defp loop_until_stable(prev_id, step_function, iteration \\ 0) do
Debug.dlog_msg(fn -> "\n--- Fixed-Point Iteration: #{iteration} ---" end)
Debug.dlog("prev_id", fn -> prev_id end)
Debug.dlog_action(fn -> Tdd.Debug.print(prev_id) end) # For detailed graph view
raw_next_id = step_function.(prev_id)
Debug.dlog("raw_next_id (after step_function)", fn -> raw_next_id end)
Debug.dlog_action(fn -> Tdd.Debug.print(raw_next_id) end)
next_id = Algo.simplify(raw_next_id)
Debug.dlog("next_id (after simplify)", fn -> next_id end)
Debug.dlog_action(fn -> Tdd.Debug.print(next_id) end)
if next_id == prev_id do
Debug.dlog_msg(fn -> "--- Fixed-Point Reached! ---" end)
next_id
else
# ... existing safety break and recursive call ...
if iteration > 2 do # Example existing condition
# ...
end
loop_until_stable(next_id, step_function, iteration + 1)
end
end
```
This design provides a flexible way to get detailed traces for specific scenarios without affecting the performance or output of regular test runs. The use of functions ensures lazy evaluation of potentially costly debug message formatting.
> Tokens: 32k sent, 1.3k received. Cost: $0.05 message, $0.12 session.


@ -4185,3 +4185,9 @@
# 2025-06-16 11:51:48.065221
+n
# 2025-06-19 10:22:37.019564
+/clear
# 2025-06-19 10:25:14.825350
+/ask I've got a problem with debugging this thing. In some test cases I need to access a detailed trace of whats going on. Most of the time I don't care, because they pass. We need to add detailed logs which i can enable only for a select test case via a Process switch. Lets design it

554
debug.exs Normal file

@ -0,0 +1,554 @@
defmodule Tdd.Debug.TracerData do
@moduledoc """
Provides functions to sanitize Elixir terms for tracing and JSON serialization.
"""
@max_sanitize_depth 5
def sanitize_values(list) when is_list(list) do
Enum.map(list, &sanitize_value/1)
end
def sanitize_value(val) do
do_sanitize(val, @max_sanitize_depth)
end
defp do_sanitize(val, _depth_limit) when is_integer(val) or is_float(val) or is_atom(val) or is_boolean(val), do: val
defp do_sanitize(val, _depth_limit) when is_binary(val), do: val # Assume strings are fine
defp do_sanitize(nil, _depth_limit), do: nil
defp do_sanitize(val, _depth_limit) when is_pid(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_port(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_reference(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_function(val) do
try do
arity = Function.info(val, :arity) |> elem(1)
case Function.info(val, :name) do
{:name, name} when is_atom(name) and name != :"" -> "<Function #{Atom.to_string(name)}/#{arity}>"
_ -> "<Function/#{arity}>"
end
catch
_, _ -> "<Function>"
end
end
defp do_sanitize(val, depth_limit) when is_list(val) do
if depth_limit <= 0 do
"[...trimmed_list...]"
else
Enum.map(val, &do_sanitize(&1, depth_limit - 1))
end
end
defp do_sanitize(val, depth_limit) when is_map(val) do
if depth_limit <= 0 do
"%{...trimmed_map...}"
else
if Map.has_key?(val, :__struct__) do
module = val.__struct__
# Sanitize only exposed fields, not internal :__meta__ or the :__struct__ key itself
data_to_sanitize = val |> Map.delete(:__struct__) |> Map.delete(:__meta__)
sanitized_fields =
data_to_sanitize
|> Enum.map(fn {k, v} -> {k, do_sanitize(v, depth_limit - 1)} end)
|> Enum.into(%{})
%{type: :struct, module: module, fields: sanitized_fields}
else # Regular map
val
|> Enum.map(fn {k, v} -> {do_sanitize(k, depth_limit - 1), do_sanitize(v, depth_limit - 1)} end)
|> Enum.into(%{})
end
end
end
defp do_sanitize(val, depth_limit) when is_tuple(val) do
if depth_limit <= 0 do
"{...trimmed_tuple...}"
else
val |> Tuple.to_list() |> Enum.map(&do_sanitize(&1, depth_limit - 1)) |> List.to_tuple()
end
end
# Fallback for other types (e.g., DateTime, Date, Time, NaiveDateTime)
defp do_sanitize(val, _depth_limit) do
# For common structs that have `to_string` or are Calendar types
cond do
is_struct(val, DateTime) or is_struct(val, Date) or is_struct(val, Time) or is_struct(val, NaiveDateTime) ->
  # inspect/1 yields sigil forms like ~U[...]; use to_iso8601/1 instead if
  # downstream JSON consumers need strict ISO8601 strings.
  inspect(val)
true ->
# Generic struct or unknown type
if is_struct(val) do
"Struct<#{inspect(val.__struct__)}>" # A simpler representation
else
inspect(val) # Fallback, converts to string
end
end
end
def sanitize_error(exception_instance, stacktrace) do
%{
type: :error,
class: Atom.to_string(exception_instance.__struct__),
message: Exception.message(exception_instance),
# Sanitize stacktrace to keep it manageable
stacktrace: Enum.map(stacktrace, &Exception.format_stacktrace_entry/1) |> Enum.take(15)
}
end
end
defmodule Tdd.Debug do
@moduledoc """
Provides macros to wrap `def` and `defp` for function call/return tracing.
Logs arguments and return values using `IO.inspect`.
Builds an in-memory call tree data structure per traced process.
"""
# --- Agent for Tracing State ---
@agent_name Tdd.Debug.StateAgent
def init_agent_if_needed do
case Process.whereis(@agent_name) do
nil -> Agent.start_link(fn -> MapSet.new() end, name: @agent_name)
_pid -> :ok
end
:ok
end
def add_traced_pid(pid) when is_pid(pid) do
init_agent_if_needed()
Agent.update(@agent_name, &MapSet.put(&1, pid))
end
def remove_traced_pid(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil -> :ok
agent_pid -> Agent.cast(agent_pid, fn state -> MapSet.delete(state, pid) end)
end
end
def is_pid_traced?(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil ->
false
agent_pid ->
try do
  Agent.get(agent_pid, &MapSet.member?(&1, pid), :infinity)
rescue
  ArgumentError -> false # Agent died between whereis and get
catch
  # Exits are not exceptions, so they cannot be rescued; catch them instead.
  :exit, _ -> false
end
end
end
# --- Tracing Data Storage ---
@doc """
Initializes the tracing data structures in the process dictionary.
"""
def init_trace_storage do
Process.put(:tdd_debug_call_stack, [])
Process.put(:tdd_debug_session_roots, [])
:ok
end
@doc """
Retrieves the collected trace data (a list of root call tree nodes)
for the current process and clears it from the process dictionary.
Children within each node and the root list itself are reversed to be in chronological order.
"""
def get_and_clear_trace_data do
data = Process.get(:tdd_debug_session_roots, [])
# Reverse children recursively and then reverse the list of roots
processed_data = Enum.map(data, &reverse_children_recursively/1) |> Enum.reverse()
Process.delete(:tdd_debug_session_roots)
Process.delete(:tdd_debug_call_stack)
processed_data
end
defp reverse_children_recursively(node) do
if !node.children or node.children == [] do
node
else
reversed_and_processed_children =
Enum.map(node.children, &reverse_children_recursively/1)
|> Enum.reverse() # Children were added prepending, so reverse for chronological
%{node | children: reversed_and_processed_children}
end
end
# --- Tracing Control Functions ---
@doc "Enables function call tracing for the current process."
def enable_tracing do
  pid_to_trace = self()
  add_traced_pid(pid_to_trace)
  init_trace_storage() # Initialize storage for call trees
  # Monitor from inside the watcher process: Process.monitor/1 delivers the
  # :DOWN message to the *calling* process, so the monitor must be set up by
  # the watcher itself, not by the traced process.
  spawn(fn ->
    ref = Process.monitor(pid_to_trace)
    receive do
      {:DOWN, ^ref, :process, ^pid_to_trace, _reason} ->
        remove_traced_pid(pid_to_trace)
    after
      3_600_000 -> # 1 hour safety timeout
        remove_traced_pid(pid_to_trace)
    end
  end)
  :ok
end
@doc "Disables function call tracing for the current process."
def disable_tracing do
remove_traced_pid(self())
# Note: Does not clear trace data, get_and_clear_trace_data() does that.
:ok
end
@doc """
Runs the given 0-arity function with tracing enabled.
Returns a tuple: `{{:ok, result} | {:error, {exception, stacktrace}}, trace_data}`.
Trace data is a list of call tree root nodes.
"""
def run(fun) when is_function(fun, 0) do
enable_tracing()
outcome =
try do
{:ok, fun.()}
rescue
exception -> {:error, {exception, __STACKTRACE__}}
end
trace_data = get_and_clear_trace_data()
disable_tracing()
# trace_data
# |> IO.inspect()
binary = JSON.encode!(trace_data)
File.write("trace.json", binary)
{outcome, trace_data}
end
# --- Process Dictionary for Call Depth (used for printing indent) ---
defp get_depth, do: Process.get(:tdd_debug_depth, 0)
def increment_depth do
new_depth = get_depth() + 1
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
def decrement_depth do
new_depth = max(0, get_depth() - 1)
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
# --- Core Macro Logic ---
defmacro __using__(_opts) do
quote do
import Kernel, except: [def: 1, def: 2, defp: 1, defp: 2]
import Tdd.Debug # Import this module's public functions/macros
require Tdd.Debug # Ensure this module is compiled for macros
end
end
@doc false
defmacro def(call, clauses \\ Keyword.new()) do
generate_traced_function(:def, call, clauses, __CALLER__)
end
@doc false
defmacro defp(call, clauses \\ Keyword.new()) do
generate_traced_function(:defp, call, clauses, __CALLER__)
end
defp is_simple_variable_ast?(ast_node) do
case ast_node do
{var_name, _meta, context_module} when is_atom(var_name) and (context_module == nil or is_atom(context_module)) ->
var_name != :_ # Ensure it's not the underscore atom itself
_ ->
false
end
end
defp generate_traced_function(type, call_ast, clauses, caller_env) do
require Macro
{function_name_ast, meta_call, original_args_patterns_ast_nullable} = call_ast
original_args_patterns_ast_list = original_args_patterns_ast_nullable || []
original_body_ast =
(if Keyword.keyword?(clauses) do
Keyword.get(clauses, :do, clauses)
else
clauses
end) || quote(do: nil)
# --- Argument Handling for Logging (Existing Logic) ---
mapped_and_generated_vars_tuples =
Enum.map(Enum.with_index(original_args_patterns_ast_list), fn {original_pattern_ast, index} ->
td_arg_var = Macro.var(String.to_atom("__td_arg_#{index}__"), nil)
{final_pattern_for_head, rhs_for_td_arg_assignment} =
case original_pattern_ast do
{:_, _, _} = underscore_ast -> {underscore_ast, quote(do: :__td_ignored_argument__)}
{:=, _meta_assign, [lhs_of_assign, _rhs_of_assign]} = assignment_pattern_ast ->
if is_simple_variable_ast?(lhs_of_assign) do
{assignment_pattern_ast, lhs_of_assign}
else
captured_val_var = Macro.unique_var(String.to_atom("tdc_assign_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(assignment_pattern_ast) end
{new_head_pattern, captured_val_var}
end
{:\\, _meta_default, [pattern_before_default, _default_value_ast]} = default_arg_pattern_ast ->
cond do
is_simple_variable_ast?(pattern_before_default) ->
{default_arg_pattern_ast, pattern_before_default}
match?({:=, _, [lhs_inner_assign, _]}, pattern_before_default) and is_simple_variable_ast?(pattern_before_default |> elem(2) |> Enum.at(0)) ->
{:=, _, [lhs_inner_assign, _]} = pattern_before_default
{default_arg_pattern_ast, lhs_inner_assign}
true ->
captured_val_var = Macro.unique_var(String.to_atom("tdc_def_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(default_arg_pattern_ast) end
{new_head_pattern, captured_val_var}
end
ast_node ->
if is_simple_variable_ast?(ast_node) do
{ast_node, ast_node}
else
captured_val_var = Macro.unique_var(String.to_atom("tdc_pat_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(ast_node) end
{new_head_pattern, captured_val_var}
end
end
assignment_ast = quote do unquote(td_arg_var) = unquote(rhs_for_td_arg_assignment) end
{final_pattern_for_head, assignment_ast, td_arg_var}
end)
{new_args_patterns_for_head_list, assignments_for_logging_vars_ast_list, generated_vars_to_log_asts} =
if mapped_and_generated_vars_tuples == [] do
{[], [], []}
else
# Transpose list of 3-element tuples into 3 lists
list_of_lists = Enum.map(mapped_and_generated_vars_tuples, &Tuple.to_list(&1))
[
Enum.map(list_of_lists, &List.first(&1)),
Enum.map(list_of_lists, &Enum.at(&1, 1)),
Enum.map(list_of_lists, &Enum.at(&1, 2))
]
|> then(fn [a,b,c] -> {a,b,c} end)
end
new_call_ast = {function_name_ast, meta_call, new_args_patterns_for_head_list}
# --- Variables for Runtime Info ---
# These vars will hold string versions of module/function name and arity at runtime
# Hygienic vars to avoid collisions.
module_name_runtime_var = Macro.var(:__td_module_name__, __MODULE__)
printable_fn_name_runtime_var = Macro.var(:__td_printable_fn_name__, __MODULE__)
arity_runtime_var = Macro.var(:__td_arity__, __MODULE__)
# Arity calculation at macro expansion time
arity_value = length(original_args_patterns_ast_list)
traced_body_inner_ast =
quote do
# --- Resolve Module/Function/Arity Info at Runtime ---
unquote(module_name_runtime_var) = Atom.to_string(unquote(caller_env.module))
unquote(arity_runtime_var) = unquote(arity_value)
runtime_resolved_fn_name_val = unquote(function_name_ast) # Resolve var if func name is from a var
unquote(printable_fn_name_runtime_var) =
if is_atom(runtime_resolved_fn_name_val) do
Atom.to_string(runtime_resolved_fn_name_val)
else
Macro.to_string(runtime_resolved_fn_name_val) # For unquote(var) or operators
end
# --- Main Tracing Logic ---
if Tdd.Debug.is_pid_traced?(self()) do
# --- On Entry: Prepare Data & Log ---
unquote_splicing(assignments_for_logging_vars_ast_list) # Assign __td_arg_N__ vars
runtime_arg_values_for_node_and_log = [unquote_splicing(generated_vars_to_log_asts)]
# sanitized_args_for_node = Tdd.Debug.TracerData.sanitize_values(runtime_arg_values_for_node_and_log)
sanitized_args_for_node = inspect(runtime_arg_values_for_node_and_log)
current_call_print_depth = Tdd.Debug.increment_depth() # For print indent & node depth
# Create placeholder node, push to call_stack
new_node_details = %{
id: System.unique_integer([:positive, :monotonic]),
# module: unquote(module_name_runtime_var),
# function: unquote(printable_fn_name_runtime_var),
function: "#{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}",
# arity: unquote(arity_runtime_var),
args: sanitized_args_for_node,
depth: current_call_print_depth,
# timestamp_enter_monotonic: System.monotonic_time(),
children: [], # Will be populated in reverse chronological order
result: :__td_not_returned_yet__ # Placeholder
}
Process.put(:tdd_debug_call_stack, [new_node_details | Process.get(:tdd_debug_call_stack, [])])
# Logging Call (existing logic)
indent = String.duplicate(" ", current_call_print_depth - 1)
IO.puts("#{indent}CALL: #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}")
IO.puts("#{indent} ARGS: #{inspect(runtime_arg_values_for_node_and_log)}")
try do
# --- Execute Original Body ---
result_val = unquote(original_body_ast)
# --- On Normal Exit: Finalize Node & Log ---
[completed_node_placeholder | parent_stack_nodes] = Process.get(:tdd_debug_call_stack)
ts_exit_monotonic = System.monotonic_time()
# duration_ms = System.convert_time_unit(ts_exit_monotonic - completed_node_placeholder.timestamp_enter_monotonic, :native, :millisecond)
sanitized_result_val = Tdd.Debug.TracerData.sanitize_value(result_val)
finalized_node =
completed_node_placeholder
# |> Map.put(:timestamp_exit_monotonic, ts_exit_monotonic)
# |> Map.put(:duration_ms, duration_ms)
|> Map.put(:result, %{type: :ok, value: sanitized_result_val})
_ = Tdd.Debug.decrement_depth() # For print indent
# Update call stack: Add finalized_node to parent's children or session_roots
new_call_stack_after_success =
case parent_stack_nodes do
[parent_on_stack | grand_parent_stack_nodes] ->
updated_parent = %{parent_on_stack | children: [finalized_node | parent_on_stack.children]}
[updated_parent | grand_parent_stack_nodes]
[] -> # This was a root call
Process.put(:tdd_debug_session_roots, [finalized_node | Process.get(:tdd_debug_session_roots, [])])
[] # New call_stack is empty for this branch
end
Process.put(:tdd_debug_call_stack, new_call_stack_after_success)
# Logging Return
IO.puts("#{indent}RETURN from #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}: #{inspect(result_val)}")
result_val # Return actual result
rescue
exception_class_err ->
# --- On Error Exit: Finalize Node & Log ---
error_instance_err = exception_class_err
stacktrace_err = __STACKTRACE__
[erroring_node_placeholder | parent_stack_nodes_err] = Process.get(:tdd_debug_call_stack)
# Note: :timestamp_enter_monotonic is commented out in new_node_details
# above, so the duration bookkeeping is skipped here (mirroring the success
# path) to avoid a KeyError on the error path.
sanitized_error_info = Tdd.Debug.TracerData.sanitize_error(error_instance_err, stacktrace_err)
finalized_error_node =
  erroring_node_placeholder
  |> Map.put(:result, sanitized_error_info)
_ = Tdd.Debug.decrement_depth() # For print indent
# Update call stack: Add finalized_error_node to parent's children or session_roots
new_call_stack_after_error =
case parent_stack_nodes_err do
[parent_on_stack_err | grand_parent_stack_nodes_err] ->
updated_parent_err = %{parent_on_stack_err | children: [finalized_error_node | parent_on_stack_err.children]}
[updated_parent_err | grand_parent_stack_nodes_err]
[] -> # This was a root call that errored
Process.put(:tdd_debug_session_roots, [finalized_error_node | Process.get(:tdd_debug_session_roots, [])])
[] # New call_stack is empty for this branch
end
Process.put(:tdd_debug_call_stack, new_call_stack_after_error)
# Logging Error
IO.puts("#{indent}ERROR in #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}: #{inspect(error_instance_err)}")
reraise error_instance_err, stacktrace_err # Reraise the original error
end
else
# --- Not Traced: Execute Original Body Directly ---
unquote(original_body_ast)
end
end
# --- Final Definition ---
final_definition_ast =
quote location: :keep do
Kernel.unquote(type)(
unquote(new_call_ast),
do: unquote(traced_body_inner_ast)
)
end
final_definition_ast
end
# --- TDD Graph Printing (Kept as it was, assuming it's for a different purpose) ---
@doc "Prints a formatted representation of a TDD graph structure."
def print_tdd_graph(id, store_module \\ Tdd.Store) do
IO.puts("--- TDD Graph (ID: #{id}) ---")
do_print_tdd_node(id, 0, MapSet.new(), store_module)
IO.puts("------------------------")
end
defp do_print_tdd_node(id, indent_level, visited, store_module) do
prefix = String.duplicate(" ", indent_level)
if MapSet.member?(visited, id) do
IO.puts("#{prefix}ID #{id} -> [Seen, recursive link]")
:ok
else
new_visited = MapSet.put(visited, id)
case store_module.get_node(id) do # Assumes store_module.get_node/1 exists
{:ok, :true_terminal} -> IO.puts("#{prefix}ID #{id} -> TRUE")
{:ok, :false_terminal} -> IO.puts("#{prefix}ID #{id} -> FALSE")
{:ok, {var, y_id, n_id, dc_id}} ->
IO.puts("#{prefix}ID #{id}: IF #{inspect(var)}")
IO.puts("#{prefix} ├─ Yes (to ID #{y_id}):")
do_print_tdd_node(y_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} ├─ No (to ID #{n_id}):")
do_print_tdd_node(n_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} └─ DC (to ID #{dc_id}):")
do_print_tdd_node(dc_id, indent_level + 2, new_visited, store_module)
{:error, reason} ->
IO.puts("#{prefix}ID #{id}: ERROR - #{reason}")
end
end
end
@doc "Logs a value with a label, using IO.inspect."
def log(value, label, opts \\ []) do
# You can add a flag here to disable all logging globally
if true do
# Default options for better visibility
default_opts = [width: 120, pretty: true]
final_opts = Keyword.merge(default_opts, opts)
IO.inspect(value, [{:label, "[DEBUG] #{label}"} | final_opts])
end
value # Return the original value to allow piping
end
# Renamed from get_depth/0: a second get_depth/0 clause would never match
# (the traced-call depth helper above always matches first) and the two use
# different process-dictionary keys (:debug_depth vs :tdd_debug_depth).
# @doc is omitted on these helpers since @doc on defp emits a warning.
defp entry_indent do
  depth = Process.get(:debug_depth, 0)
  Process.put(:debug_depth, depth + 1)
  String.duplicate("  ", depth)
end
defp dec_debug_depth do
  depth = Process.get(:debug_depth, 1) |> Kernel.-(1) |> max(0)
  Process.put(:debug_depth, depth)
end
@doc "Logs a message with indentation for tracing recursion."
def log_entry(label) do
  prefix = entry_indent()
  IO.puts("#{prefix}[DEBUG] >> #{label}")
end
@doc "Logs a return value with indentation."
def log_exit(value, label) do
  dec_debug_depth()
  prefix = String.duplicate("  ", Process.get(:debug_depth, 0))
  IO.inspect(value, label: prefix <> "[DEBUG] << #{label}")
  value # Return the value to allow piping
end
end
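The call-tree bookkeeping above (children prepended in O(1) during tracing, reversed once when the trace is collected) can be sketched in isolation; `TreeOrderSketch` and the simplified node shape are illustrative, not the tracer's actual API:

```elixir
# Standalone sketch of the children-reversal pass used by Tdd.Debug.
# During tracing, each finished child is prepended to its parent's list,
# so children end up in reverse chronological order until this pass runs.
defmodule TreeOrderSketch do
  def reverse_children_recursively(%{children: []} = node), do: node

  def reverse_children_recursively(node) do
    %{node
      | children:
          node.children
          |> Enum.map(&reverse_children_recursively/1)
          |> Enum.reverse()}
  end
end

# Calls happened in order a, b, c but were prepended: [c, b, a].
root = %{name: :root, children: [
  %{name: :c, children: []},
  %{name: :b, children: []},
  %{name: :a, children: []}
]}

# After the pass, children are chronological again.
%{children: [%{name: :a}, %{name: :b}, %{name: :c}]} =
  TreeOrderSketch.reverse_children_recursively(root)
```

Prepending plus a single reversal keeps the hot path O(1) per call, which matters when every traced function invocation touches the stack.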

10
debug.md Normal file

@ -0,0 +1,10 @@
I have a compiler I'm writing in Elixir. I need to trace execution of logic to pinpoint bugs.
I have many tests I want to debug individually, but I can run only the full test suite.
I want to create a couple of macros/functions that'd enable me to debug my code.
The scenario I imagine:
before the test I want to debug, I write `Tdd.Debug.enable`, and then:
as the code is being executed, it prints out a tree of called functions, their arguments and return values, effectively building a stacktrace as it's being executed.
First, let's design how we'd print this information and if/what we should store in memory to be able to render it in a shape sufficient for good debugging.
Then let's design an Elixir module that compiler modules would `use`,
e.g. a Tdd.Debug module that'd have an override for Elixir's `def`.

175
fixpoint.md Normal file
View File

@ -0,0 +1,175 @@
You are absolutely right. My apologies. I was so focused on the coinductive base case that I missed the more subtle issue your "FIX" comment was hinting at. The change I proposed (`Store.true_node_id()`) is a common approach, but your codebase has a different, equally valid coinductive mechanism that I overlooked.
Let's analyze your version of `do_simplify` and the remaining failures. Your implementation is actually very clever.
### Deconstructing Your `do_simplify`
Your version has two key features that differ from a standard implementation:
1. **Coinductive Base Case:** `if MapSet.member?(context, current_state), do: Store.false_node_id()`. This sets the base case for a recursive loop to `false` (or `none`). This is the standard for **Greatest Fixed-Point (GFP)** semantics. It's used for proving properties on potentially infinite structures (like streams) where you must show that no "bad" prefix exists. An empty loop means the property holds vacuously.
2. **Implication Logic:**
```elixir
assumptions_imply_true ->
do_simplify(y, sorted_assumptions, new_context)
```
This is the critical part. When the current set of assumptions *already implies* that the variable `var` must be true, you correctly follow the `yes` path **without adding `{var, true}` to the assumption set**. This prevents the assumption set from growing with redundant information, which is essential for the `context` check (`MapSet.member?`) to detect a true loop.
### The Real Root Cause
The problem is the mismatch between the type of recursion we have and the type of coinductive proof we're using.
* **`list_of(E)`:** This is defined with a union (`[] | cons(...)`). This is a **Least Fixed-Point (LFP)** type. It describes finite structures built from the ground up. To prove `x` is in `list_of(E)`, you need to show it can be constructed in a finite number of steps. A coinductive proof for this should have a base case of **TRUE**.
* **Our `do_simplify`:** It uses a GFP strategy (`base case = FALSE`). This is why it's failing.
**Why did changing the base case to `TRUE` not work in my previous suggestion?**
Because your `is_subtype?` check is `A & !B == none`. The `!` negation flips the logic. `!list_of(E)` is no longer a simple LFP type. It becomes a complex predicate involving both intersections and unions. When `simplify` tries to unroll `!list_of(E)`, it's working with a GFP-like structure. My previous diagnosis was correct in spirit but missed how negation interacts with your proof strategy.
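The effect of the two base cases can be made concrete with a toy loop. This sketch mirrors the `do_simplify` convention described above (loop under `:gfp` collapses to false/`none`, under `:lfp` to true); `FixpointSketch` is an illustrative name, not the codebase's API:

```elixir
# Toy recursive equation X = X (a pure loop). "Solving" it means unrolling
# until a state repeats, then substituting the strategy's base value --
# exactly what do_simplify does when MapSet.member?(context, current_state).
defmodule FixpointSketch do
  def unroll(:x, strategy, seen) do
    if MapSet.member?(seen, :x) do
      case strategy do
        :gfp -> false # greatest fixed-point: loop collapses to none
        :lfp -> true  # least fixed-point: loop collapses to everything
      end
    else
      # X is defined purely in terms of itself, so unrolling recurses.
      unroll(:x, strategy, MapSet.put(seen, :x))
    end
  end
end

false = FixpointSketch.unroll(:x, :gfp, MapSet.new())
true  = FixpointSketch.unroll(:x, :lfp, MapSet.new())
```

Same loop, opposite verdicts: this is why a single hard-coded base case cannot serve both the positive (`list_of`) and negated (`!list_of`) occurrences of a recursive type.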
**Let's analyze the failures with this new understanding:**
1. **`FAIL: list_of(supertype) </: list_of(subtype)` gets `true`**
* We check `list_of(int) & !list_of(1) == none`.
* The algorithm tries to prove this intersection is empty. It unrolls the recursion.
* It enters a loop while checking the `tail` against the `tail`.
* The loop `{tdd_id, assumptions}` is detected.
* `do_simplify` returns `Store.false_node_id()`.
* This `false` means the intersection is empty, so `is_subtype?` returns `true`. This is incorrect. The intersection should be non-empty.
2. **`FAIL: cons(atom, []) <: list_of(atom)` gets `false`**
* We check `cons(atom,[]) & !list_of(atom) == none`.
* The algorithm correctly follows the path for `cons(atom, [])`.
* It checks this against the `!list_of(atom)` TDD.
* Because `!list_of(atom)` is a complex, negated recursive type, our simple GFP solver isn't powerful enough to fully reduce the graph to `none`. It terminates with a non-`none` graph.
* `is_subtype?` returns `false`. This is incorrect.
### The Correct and Final Solution
The problem is that a single `simplify` function with a fixed coinductive strategy (`GFP` or `LFP`) cannot handle both positive and negative recursive types correctly. The standard solution in advanced model checkers and type systems is to have two separate functions, often called `Check` (for GFP) and `Unfold` (for LFP).
However, we can simulate this with a single, elegant change. We can parameterize `do_simplify` to tell it which fixed-point semantic to use.
Let's modify `Algo.simplify` and `Tdd.Compiler`'s subtyping check.
**Step 1: Parameterize `do_simplify`**
In `lib/tdd/algo.ex`:
```elixir
defmodule Tdd.Algo do
# ...
# The public simplify function is unchanged. It defaults to GFP (:gfp).
@spec simplify(non_neg_integer(), map()) :: non_neg_integer()
def simplify(tdd_id, assumptions \\ %{}) do
simplify(tdd_id, assumptions, :gfp) # Default to Greatest Fixed-Point
end
# NEW: Public function with an explicit fixed-point strategy
@spec simplify(non_neg_integer(), map(), :gfp | :lfp) :: non_neg_integer()
def simplify(tdd_id, assumptions, fp_strategy) do
sorted_assumptions = Enum.sort(assumptions)
# The cache key MUST include the strategy.
cache_key = {:simplify, tdd_id, sorted_assumptions, fp_strategy}
case Store.get_op_cache(cache_key) do
{:ok, result_id} ->
result_id
:not_found ->
result_id = do_simplify(tdd_id, sorted_assumptions, MapSet.new(), fp_strategy)
Store.put_op_cache(cache_key, result_id)
result_id
end
end
# The private helper now takes the fixed-point strategy.
defp do_simplify(tdd_id, sorted_assumptions, context, fp_strategy) do
current_state = {tdd_id, sorted_assumptions} # Context is strategy-independent
if MapSet.member?(context, current_state) do
# --- THIS IS THE FIX ---
# Choose the base case based on the requested strategy.
case fp_strategy do
:gfp -> Store.false_node_id() # Greatest Fixed-Point (intersection-like)
:lfp -> Store.true_node_id() # Least Fixed-Point (union-like)
end
else
new_context = MapSet.put(context, current_state)
assumptions = Map.new(sorted_assumptions)
# The rest of the function passes the strategy down in recursive calls.
if Engine.check(assumptions) == :contradiction do
Store.false_node_id()
else
case Store.get_node(tdd_id) do
{:ok, :true_terminal} -> Store.true_node_id()
{:ok, :false_terminal} -> Store.false_node_id()
{:ok, {var, y, n, d}} ->
case Map.get(assumptions, var) do
true ->
do_simplify(y, sorted_assumptions, new_context, fp_strategy)
false ->
do_simplify(n, sorted_assumptions, new_context, fp_strategy)
:dc ->
do_simplify(d, sorted_assumptions, new_context, fp_strategy)
nil ->
assumptions_imply_true = # ... (logic is the same)
assumptions_imply_false = # ... (logic is the same)
cond do
# ...
assumptions_imply_true ->
do_simplify(y, sorted_assumptions, new_context, fp_strategy)
assumptions_imply_false ->
do_simplify(n, sorted_assumptions, new_context, fp_strategy)
true ->
s_y = do_simplify(y, Enum.sort(Map.put(assumptions, var, true)), new_context, fp_strategy)
s_n = do_simplify(n, Enum.sort(Map.put(assumptions, var, false)), new_context, fp_strategy)
s_d = do_simplify(d, Enum.sort(Map.put(assumptions, var, :dc)), new_context, fp_strategy)
Store.find_or_create_node(var, s_y, s_n, s_d)
end
end
end
end
end
end
end
```
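The strategy belongs in the cache key because the same `{tdd_id, assumptions}` pair can legitimately simplify to different nodes under the two strategies. A strategy-blind memo table would leak one strategy's answer into the other; a minimal sketch with hypothetical node IDs and terminals:

```elixir
# With the strategy in the key, the two result spaces stay separate.
cache = %{}
cache = Map.put(cache, {:simplify, 42, [], :gfp}, :false_node)
cache = Map.put(cache, {:simplify, 42, [], :lfp}, :true_node)

Map.fetch!(cache, {:simplify, 42, [], :gfp})  # => :false_node
Map.fetch!(cache, {:simplify, 42, [], :lfp})  # => :true_node

# A key of just {:simplify, 42, []} would have returned whichever
# strategy happened to run first — silently wrong for the other one.
```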
**Step 2: Update the Subtyping Check**
Now we update the test helper `do_is_subtype` to use the correct strategy. The check `A <: B` is equivalent to `A & !B == none`. Since `A` (the candidate subtype) is an LFP type, proving that this intersection is empty requires unfolding the recursion with LFP semantics.
In `TddCompilerRecursiveTests` and `CompilerAlgoTests`:
```elixir
# In the test modules
defp do_is_subtype(spec1, spec2) do
id1 = Compiler.spec_to_id(spec1)
id2 = Compiler.spec_to_id(spec2)
# A <: B <=> A & ~B == none
# This is equivalent to asking: Is there any element in A that is NOT in B?
intersect_id = Algo.apply(:intersect, &op_intersect_terminals/2, id1, Algo.negate(id2))
# --- THIS IS THE FIX ---
# To prove that the intersection is empty, we must simplify it.
# Since `spec1` (the type we are checking) is an LFP type, we should
# simplify the intersection using LFP semantics. This will correctly
# "unfold" the type and check for membership in the negated type.
final_id = Algo.simplify(intersect_id, %{}, :lfp)
final_id == Store.false_node_id()
end
```
By explicitly telling `simplify` to use `:lfp` semantics for the subtyping check, you are instructing it to use the `Store.true_node_id()` base case, which correctly handles the "productive" recursion of `list_of` types. This will resolve both remaining test failures.
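The set-theoretic identity behind the helper — `A <: B` iff `A & !B == none` — can be checked on finite sets, where complement is computable against a known universe. A minimal sketch, independent of the TDD encoding:

```elixir
universe = MapSet.new(1..10)
a = MapSet.new([1, 2, 3])
b = MapSet.new(1..5)

# x ⊆ y  <=>  x ∩ ¬y = ∅, with ¬y taken relative to the universe.
subset_via_empty_intersection? = fn x, y ->
  not_y = MapSet.difference(universe, y)
  MapSet.size(MapSet.intersection(x, not_y)) == 0
end

subset_via_empty_intersection?.(a, b)  # => true,  and MapSet.subset?(a, b) agrees
subset_via_empty_intersection?.(b, a)  # => false, e.g. 4 is in b but not in a
```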

lib/debug.ex Normal file

@ -0,0 +1,852 @@
defmodule Tdd.Debug.TracerData do
@moduledoc """
Provides functions to sanitize Elixir terms for tracing and JSON serialization.
"""
@max_sanitize_depth 5
def sanitize_values(list) when is_list(list) do
Enum.map(list, &sanitize_value/1)
end
def sanitize_value(val) do
do_sanitize(val, @max_sanitize_depth)
end
defp do_sanitize(val, _depth_limit)
when is_integer(val) or is_float(val) or is_atom(val) or is_boolean(val),
do: val
# Assume strings are fine
defp do_sanitize(val, _depth_limit) when is_binary(val), do: val
defp do_sanitize(nil, _depth_limit), do: nil
defp do_sanitize(val, _depth_limit) when is_pid(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_port(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_reference(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_function(val) do
try do
arity = Function.info(val, :arity) |> elem(1)
case Function.info(val, :name) do
{:name, name} when is_atom(name) and name != :"" ->
"<Function #{Atom.to_string(name)}/#{arity}>"
_ ->
"<Function/#{arity}>"
end
catch
_, _ -> "<Function>"
end
end
defp do_sanitize(val, depth_limit) when is_list(val) do
if depth_limit <= 0 do
"[...trimmed_list...]"
else
Enum.map(val, &do_sanitize(&1, depth_limit - 1))
end
end
defp do_sanitize(val, depth_limit) when is_map(val) do
if depth_limit <= 0 do
"%{...trimmed_map...}"
else
# Regular map
if Map.has_key?(val, :__struct__) do
module = val.__struct__
# Sanitize only exposed fields, not internal :__meta__ or the :__struct__ key itself
data_to_sanitize = val |> Map.delete(:__struct__) |> Map.delete(:__meta__)
sanitized_fields =
data_to_sanitize
|> Enum.map(fn {k, v} -> {k, do_sanitize(v, depth_limit - 1)} end)
|> Enum.into(%{})
%{type: :struct, module: module, fields: sanitized_fields}
else
val
|> Enum.map(fn {k, v} ->
{do_sanitize(k, depth_limit - 1), do_sanitize(v, depth_limit - 1)}
end)
|> Enum.into(%{})
end
end
end
defp do_sanitize(val, depth_limit) when is_tuple(val) do
if depth_limit <= 0 do
"{...trimmed_tuple...}"
else
val |> Tuple.to_list() |> Enum.map(&do_sanitize(&1, depth_limit - 1)) |> List.to_tuple()
end
end
# Fallback for other types (e.g., DateTime, Date, Time, NaiveDateTime)
defp do_sanitize(val, _depth_limit) do
# For common structs that have `to_string` or are Calendar types
cond do
is_struct(val, DateTime) or is_struct(val, Date) or is_struct(val, Time) or
is_struct(val, NaiveDateTime) ->
# Jason can handle these if they are ISO8601 strings
inspect(val)
true ->
# Generic struct or unknown type
if is_struct(val) do
# A simpler representation
"Struct<#{inspect(val.__struct__)}>"
else
# Fallback, converts to string
inspect(val)
end
end
end
def sanitize_error(exception_instance, stacktrace) do
%{
type: :error,
class: Atom.to_string(exception_instance.__struct__),
message: Exception.message(exception_instance),
# Sanitize stacktrace to keep it manageable
stacktrace: Enum.map(stacktrace, &Exception.format_stacktrace_entry/1) |> Enum.take(15)
}
end
end
defmodule Tdd.Debug do
@moduledoc """
Provides macros to wrap `def` and `defp` for function call/return tracing.
Logs arguments and return values using `IO.inspect`.
Builds an in-memory call tree data structure per traced process.
"""
# --- Agent for Tracing State ---
@agent_name Tdd.Debug.StateAgent
def init_agent_if_needed do
case Process.whereis(@agent_name) do
nil -> Agent.start_link(fn -> MapSet.new() end, name: @agent_name)
_pid -> :ok
end
:ok
end
def add_traced_pid(pid) when is_pid(pid) do
init_agent_if_needed()
Agent.update(@agent_name, &MapSet.put(&1, pid))
end
def remove_traced_pid(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil -> :ok
agent_pid -> Agent.cast(agent_pid, fn state -> MapSet.delete(state, pid) end)
end
end
def is_pid_traced?(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil ->
false
agent_pid ->
try do
Agent.get(agent_pid, &MapSet.member?(&1, pid), :infinity)
rescue
# Agent might have died between whereis/1 and the call
_ -> false
catch
# `rescue` does not trap exits; a dead Agent surfaces as an exit signal
:exit, _ -> false
end
end
end
# --- Tracing Data Storage ---
@doc """
Initializes the tracing data structures in the process dictionary.
"""
def init_trace_storage do
Process.put(:tdd_debug_call_stack, [])
Process.put(:tdd_debug_session_roots, [])
:ok
end
@doc """
Retrieves the collected trace data (a list of root call tree nodes)
for the current process and clears it from the process dictionary.
Children within each node and the root list itself are reversed to be in chronological order.
"""
def get_and_clear_trace_data do
data = Process.get(:tdd_debug_session_roots, [])
# Reverse children recursively and then reverse the list of roots
processed_data = Enum.map(data, &reverse_children_recursively/1) |> Enum.reverse()
Process.delete(:tdd_debug_session_roots)
Process.delete(:tdd_debug_call_stack)
processed_data
end
defp reverse_children_recursively(node) do
if node.children in [nil, []] do
node
else
reversed_and_processed_children =
Enum.map(node.children, &reverse_children_recursively/1)
# Children were added prepending, so reverse for chronological
|> Enum.reverse()
%{node | children: reversed_and_processed_children}
end
end
# --- Tracing Control Functions ---
@doc "Enables function call tracing for the current process."
def enable_tracing do
pid_to_trace = self()
add_traced_pid(pid_to_trace)
# Initialize storage for call trees
init_trace_storage()
Process.spawn(
fn ->
# Monitor from inside the watcher process so the :DOWN message is
# delivered to this receive block rather than to the traced process.
ref = Process.monitor(pid_to_trace)
receive do
{:DOWN, ^ref, :process, ^pid_to_trace, _reason} ->
remove_traced_pid(pid_to_trace)
after
# 1 hour safety timeout
3_600_000 ->
remove_traced_pid(pid_to_trace)
end
end,
[]
)
:ok
end
@doc "Disables function call tracing for the current process."
def disable_tracing do
remove_traced_pid(self())
# Note: Does not clear trace data, get_and_clear_trace_data() does that.
:ok
end
@doc """
Runs the given 0-arity function with tracing enabled.
Returns a tuple: `{{:ok, result} | {:error, {exception, stacktrace}}, trace_data}`.
Trace data is a list of call tree root nodes.
"""
def run(fun) when is_function(fun, 0) do
enable_tracing()
outcome =
try do
{:ok, fun.()}
rescue
exception -> {:error, {exception, __STACKTRACE__}}
end
trace_data = get_and_clear_trace_data()
disable_tracing()
binary = JSON.encode!(trace_data)
File.write("trace.json", binary)
{outcome, trace_data}
end
# --- Process Dictionary for Call Depth (used for printing indent) ---
defp get_depth, do: Process.get(:tdd_debug_depth, 0)
def increment_depth do
new_depth = get_depth() + 1
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
def decrement_depth do
new_depth = max(0, get_depth() - 1)
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
# --- Core Macro Logic ---
defmacro __using__(_opts) do
quote do
import Kernel, except: [def: 1, def: 2, defp: 1, defp: 2]
# Import this module's public functions/macros
import Tdd.Debug
# Ensure this module is compiled for macros
require Tdd.Debug
end
end
@doc false
defmacro def(call, clauses \\ Keyword.new()) do
generate_traced_function(:def, call, clauses, __CALLER__)
end
@doc false
defmacro defp(call, clauses \\ Keyword.new()) do
generate_traced_function(:defp, call, clauses, __CALLER__)
end
defp is_simple_variable_ast?(ast_node) do
case ast_node do
{var_name, _meta, context_module}
when is_atom(var_name) and (context_module == nil or is_atom(context_module)) ->
# Ensure it's not the underscore atom itself
var_name != :_
_ ->
false
end
end
defp generate_traced_function(type, call_ast, clauses, caller_env) do
require Macro
{function_name_ast, meta_call, original_args_patterns_ast_nullable} = call_ast
original_args_patterns_ast_list = original_args_patterns_ast_nullable || []
original_body_ast =
if Keyword.keyword?(clauses) do
Keyword.get(clauses, :do, clauses)
else
clauses
end || quote(do: nil)
# --- Argument Handling for Logging (Existing Logic) ---
mapped_and_generated_vars_tuples =
Enum.map(Enum.with_index(original_args_patterns_ast_list), fn {original_pattern_ast, index} ->
td_arg_var = Macro.var(String.to_atom("__td_arg_#{index}__"), nil)
{final_pattern_for_head, rhs_for_td_arg_assignment} =
case original_pattern_ast do
{:_, _, _} = underscore_ast ->
{underscore_ast, quote(do: :__td_ignored_argument__)}
{:=, _meta_assign, [lhs_of_assign, _rhs_of_assign]} = assignment_pattern_ast ->
if is_simple_variable_ast?(lhs_of_assign) do
{assignment_pattern_ast, lhs_of_assign}
else
captured_val_var = Macro.unique_var(String.to_atom("tdc_assign_#{index}"), Elixir)
new_head_pattern =
quote do
unquote(captured_val_var) = unquote(assignment_pattern_ast)
end
{new_head_pattern, captured_val_var}
end
{:\\, _meta_default, [pattern_before_default, _default_value_ast]} =
default_arg_pattern_ast ->
cond do
is_simple_variable_ast?(pattern_before_default) ->
{default_arg_pattern_ast, pattern_before_default}
match?({:=, _, [lhs_inner_assign, _]}, pattern_before_default) and
is_simple_variable_ast?(pattern_before_default |> elem(2) |> Enum.at(0)) ->
{:=, _, [lhs_inner_assign, _]} = pattern_before_default
{default_arg_pattern_ast, lhs_inner_assign}
true ->
captured_val_var = Macro.unique_var(String.to_atom("tdc_def_#{index}"), Elixir)
new_head_pattern =
quote do
unquote(captured_val_var) = unquote(default_arg_pattern_ast)
end
{new_head_pattern, captured_val_var}
end
ast_node ->
if is_simple_variable_ast?(ast_node) do
{ast_node, ast_node}
else
captured_val_var = Macro.unique_var(String.to_atom("tdc_pat_#{index}"), Elixir)
new_head_pattern =
quote do
unquote(captured_val_var) = unquote(ast_node)
end
{new_head_pattern, captured_val_var}
end
end
assignment_ast =
quote do
unquote(td_arg_var) = unquote(rhs_for_td_arg_assignment)
end
{final_pattern_for_head, assignment_ast, td_arg_var}
end)
{new_args_patterns_for_head_list, assignments_for_logging_vars_ast_list,
generated_vars_to_log_asts} =
if mapped_and_generated_vars_tuples == [] do
{[], [], []}
else
# Transpose list of 3-element tuples into 3 lists
list_of_lists = Enum.map(mapped_and_generated_vars_tuples, &Tuple.to_list(&1))
[
Enum.map(list_of_lists, &List.first(&1)),
Enum.map(list_of_lists, &Enum.at(&1, 1)),
Enum.map(list_of_lists, &Enum.at(&1, 2))
]
|> then(fn [a, b, c] -> {a, b, c} end)
end
new_call_ast = {function_name_ast, meta_call, new_args_patterns_for_head_list}
# --- Variables for Runtime Info ---
# These vars will hold string versions of module/function name and arity at runtime
# Hygienic vars to avoid collisions.
module_name_runtime_var = Macro.var(:__td_module_name__, __MODULE__)
printable_fn_name_runtime_var = Macro.var(:__td_printable_fn_name__, __MODULE__)
arity_runtime_var = Macro.var(:__td_arity__, __MODULE__)
# Arity calculation at macro expansion time
arity_value = length(original_args_patterns_ast_list)
traced_body_inner_ast =
quote do
# --- Resolve Module/Function/Arity Info at Runtime ---
unquote(module_name_runtime_var) = Atom.to_string(unquote(caller_env.module))
unquote(arity_runtime_var) = unquote(arity_value)
# Resolve var if func name is from a var
runtime_resolved_fn_name_val = unquote(function_name_ast)
unquote(printable_fn_name_runtime_var) =
if is_atom(runtime_resolved_fn_name_val) do
Atom.to_string(runtime_resolved_fn_name_val)
else
# For unquote(var) or operators
Macro.to_string(runtime_resolved_fn_name_val)
end
# --- Main Tracing Logic ---
if Tdd.Debug.is_pid_traced?(self()) do
# --- On Entry: Prepare Data & Log ---
# Assign __td_arg_N__ vars
unquote_splicing(assignments_for_logging_vars_ast_list)
runtime_arg_values_for_node_and_log = [unquote_splicing(generated_vars_to_log_asts)]
# sanitized_args_for_node = Tdd.Debug.TracerData.sanitize_values(runtime_arg_values_for_node_and_log)
sanitized_args_for_node = inspect(runtime_arg_values_for_node_and_log)
# For print indent & node depth
current_call_print_depth = Tdd.Debug.increment_depth()
# Create placeholder node, push to call_stack
new_node_details = %{
id: System.unique_integer([:positive, :monotonic]),
# module: unquote(module_name_runtime_var),
# function: unquote(printable_fn_name_runtime_var),
function:
"#{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}",
# arity: unquote(arity_runtime_var),
args: sanitized_args_for_node,
depth: current_call_print_depth,
# Needed by the rescue branch below to compute duration_ms on error.
timestamp_enter_monotonic: System.monotonic_time(),
# Will be populated in reverse chronological order
children: [],
# Placeholder
result: :__td_not_returned_yet__
}
Process.put(:tdd_debug_call_stack, [
new_node_details | Process.get(:tdd_debug_call_stack, [])
])
# Logging Call (existing logic)
indent = String.duplicate(" ", current_call_print_depth - 1)
IO.puts(
"#{indent}CALL: #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}"
)
IO.puts("#{indent} ARGS: #{inspect(runtime_arg_values_for_node_and_log)}")
try do
# --- Execute Original Body ---
result_val = unquote(original_body_ast)
# --- On Normal Exit: Finalize Node & Log ---
[completed_node_placeholder | parent_stack_nodes] = Process.get(:tdd_debug_call_stack)
_ts_exit_monotonic = System.monotonic_time()
# duration_ms = System.convert_time_unit(ts_exit_monotonic - completed_node_placeholder.timestamp_enter_monotonic, :native, :millisecond)
sanitized_result_val = Tdd.Debug.TracerData.sanitize_value(result_val)
finalized_node =
completed_node_placeholder
# |> Map.put(:timestamp_exit_monotonic, ts_exit_monotonic)
# |> Map.put(:duration_ms, duration_ms)
|> Map.put(:result, %{type: :ok, value: sanitized_result_val})
# For print indent
_ = Tdd.Debug.decrement_depth()
# Update call stack: Add finalized_node to parent's children or session_roots
new_call_stack_after_success =
case parent_stack_nodes do
[parent_on_stack | grand_parent_stack_nodes] ->
updated_parent = %{
parent_on_stack
| children: [finalized_node | parent_on_stack.children]
}
[updated_parent | grand_parent_stack_nodes]
# This was a root call
[] ->
Process.put(:tdd_debug_session_roots, [
finalized_node | Process.get(:tdd_debug_session_roots, [])
])
# New call_stack is empty for this branch
[]
end
Process.put(:tdd_debug_call_stack, new_call_stack_after_success)
# Logging Return
IO.puts(
"#{indent}RETURN from #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}: #{inspect(result_val)}"
)
# Return actual result
result_val
rescue
exception_class_err ->
# --- On Error Exit: Finalize Node & Log ---
error_instance_err = exception_class_err
stacktrace_err = __STACKTRACE__
[erroring_node_placeholder | parent_stack_nodes_err] =
Process.get(:tdd_debug_call_stack)
ts_exit_monotonic_err = System.monotonic_time()
duration_ms_err =
System.convert_time_unit(
ts_exit_monotonic_err - erroring_node_placeholder.timestamp_enter_monotonic,
:native,
:millisecond
)
sanitized_error_info =
Tdd.Debug.TracerData.sanitize_error(error_instance_err, stacktrace_err)
finalized_error_node =
erroring_node_placeholder
|> Map.put(:timestamp_exit_monotonic, ts_exit_monotonic_err)
|> Map.put(:duration_ms, duration_ms_err)
|> Map.put(:result, sanitized_error_info)
# For print indent
_ = Tdd.Debug.decrement_depth()
# Update call stack: Add finalized_error_node to parent's children or session_roots
new_call_stack_after_error =
case parent_stack_nodes_err do
[parent_on_stack_err | grand_parent_stack_nodes_err] ->
updated_parent_err = %{
parent_on_stack_err
| children: [finalized_error_node | parent_on_stack_err.children]
}
[updated_parent_err | grand_parent_stack_nodes_err]
# This was a root call that errored
[] ->
Process.put(:tdd_debug_session_roots, [
finalized_error_node | Process.get(:tdd_debug_session_roots, [])
])
# New call_stack is empty for this branch
[]
end
Process.put(:tdd_debug_call_stack, new_call_stack_after_error)
# Logging Error
IO.puts(
"#{indent}ERROR in #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}: #{inspect(error_instance_err)}"
)
# Reraise the original error
reraise error_instance_err, stacktrace_err
end
else
# --- Not Traced: Execute Original Body Directly ---
unquote(original_body_ast)
end
end
# --- Final Definition ---
final_definition_ast =
quote location: :keep do
Kernel.unquote(type)(
unquote(new_call_ast),
do: unquote(traced_body_inner_ast)
)
end
final_definition_ast
end
# --- TDD Graph Printing (Kept as it was, assuming it's for a different purpose) ---
@doc "Prints a formatted representation of a TDD graph structure."
def print_tdd_graph(id, label \\ "") do
IO.puts("---#{label} TDD Graph (ID: #{id}) ---")
do_print_tdd_node(id, 0, MapSet.new(), Tdd.Store)
IO.puts("------------------------")
end
defp do_print_tdd_node(id, indent_level, visited, store_module) do
prefix = String.duplicate(" ", indent_level)
if MapSet.member?(visited, id) do
IO.puts("#{prefix}ID #{id} -> [Seen, recursive link]")
:ok
else
new_visited = MapSet.put(visited, id)
# Assumes store_module.get_node/1 exists
case store_module.get_node(id) do
{:ok, :true_terminal} ->
IO.puts("#{prefix}ID #{id} -> TRUE")
{:ok, :false_terminal} ->
IO.puts("#{prefix}ID #{id} -> FALSE")
{:ok, {var, y_id, n_id, dc_id}} ->
IO.puts("#{prefix}ID #{id}: IF #{inspect(var)}")
case var do
{_, :c_head, nested_id, _} when nested_id != nil ->
print_tdd_graph(nested_id, "Var in #{id}")
_ ->
""
end
IO.puts("#{prefix} ├─ Yes (to ID #{y_id}):")
do_print_tdd_node(y_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} ├─ No (to ID #{n_id}):")
do_print_tdd_node(n_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} └─ DC (to ID #{dc_id}):")
do_print_tdd_node(dc_id, indent_level + 2, new_visited, store_module)
{:error, reason} ->
IO.puts("#{prefix}ID #{id}: ERROR - #{reason}")
end
end
end
@doc "Logs a value with a label, using IO.inspect."
def log(value, label, opts \\ []) do
# You can add a flag here to disable all logging globally
if true do
# Default options for better visibility
default_opts = [width: 120, pretty: true]
final_opts = Keyword.merge(default_opts, opts)
IO.inspect(value, [{:label, "[DEBUG] #{label}"} | final_opts])
end
# Return the original value to allow piping
value
end
@doc "Gets and increments a depth counter for tracing recursion."
defp get_depth() do
depth = Process.get(:debug_depth, 0)
Process.put(:debug_depth, depth + 1)
String.duplicate(" ", depth)
end
@doc "Decrements the depth counter."
defp dec_depth() do
depth = Process.get(:debug_depth, 1) |> Kernel.-(1) |> max(0)
Process.put(:debug_depth, depth)
end
@doc "Logs a message with indentation for tracing recursion."
def log_entry(label) do
prefix = get_depth()
IO.inspect(prefix, label: "PREFIX")
IO.puts("#{prefix}[DEBUG] >> #{label}")
end
@doc "Logs a return value with indentation."
def log_exit(value, label) do
dec_depth()
prefix = String.duplicate(" ", Process.get(:debug_depth, 0))
IO.inspect(value, label: prefix <> "[DEBUG] << #{label}")
# Return the value
value
end
@doc """
Builds a map representation of the TDD starting from a root ID.
This is the main entry point. It initializes the process with an empty set
of visited nodes.
"""
def build_graph_map(root_id, label) do
# The recursive build returns {map_data, visited_set}; we only need the map_data.
do_build_node_map(root_id, MapSet.new(), Tdd.Store)
|> elem(0)
|> IO.inspect(label: label)
end
def to_json(kv_list) do
kv_list
# Jason.encode!(kv_list, pretty: true)
end
# This private helper does the actual recursive work.
# It returns a tuple: {node_map, visited_set} so the set of visited
# nodes can be tracked through the recursion.
defp do_build_node_map(0, visited, _), do: {false, visited}
defp do_build_node_map(1, visited, _), do: {true, visited}
defp do_build_node_map(id, visited, store_module) do
if MapSet.member?(visited, id) do
# A cycle or a previously processed node. Return a reference.
{[id: id, ref: "recursive_or_seen"], visited}
else
# Add the current ID to the visited set before any recursion.
new_visited = MapSet.put(visited, id)
case store_module.get_node(id) do
{:ok, :true_terminal} ->
{[id: id, value: true], new_visited}
{:ok, :false_terminal} ->
{[id: id, value: false], new_visited}
{:ok, {var, y_id, n_id, dc_id}} ->
# First, process the variable. This is important because the variable
# itself might contain a sub-TDD that adds nodes to our visited set.
{var_map, visited_after_var} = format_variable(var, new_visited, store_module)
# Now, recursively build the children, threading the visited set through each call.
{yes_map, visited_after_yes} = do_build_node_map(y_id, visited_after_var, store_module)
{no_map, visited_after_no} = do_build_node_map(n_id, visited_after_yes, store_module)
{dc_map, final_visited} = do_build_node_map(dc_id, visited_after_no, store_module)
node = [
id: id,
# The (potentially expanded) variable map
variable: var_map,
yes: yes_map,
no: no_map,
dont_care: dc_map
]
{node, final_visited}
{:error, reason} ->
{[id: id, type: "error", reason: reason], new_visited}
end
end
|> case do
{node, v} ->
n = Enum.sort_by(node, fn {k, _} -> Enum.find_index([:id, :value], &(&1 == k)) end)
# {Jason.OrderedObject.new(n), v}
{n, v}
end
end
# This helper now takes the visited set and store_module to handle nested TDDs.
# It returns a tuple: {variable_map, updated_visited_set}.
defp format_variable(var, visited, store_module) do
case var do
# --- Variables with nested TDDs ---
{4, :b_element, index, sub_tdd_id} ->
# Recursively build the map for the sub-TDD.
{sub_tdd_map, new_visited} = do_build_node_map(sub_tdd_id, visited, store_module)
var_map = [
type: "tuple_element_predicate",
index: index,
predicate_tdd: sub_tdd_map
]
{var_map, new_visited}
{5, :c_head, sub_tdd_id, nil} ->
{sub_tdd_map, new_visited} = do_build_node_map(sub_tdd_id, visited, store_module)
var_map = [
type: "list_head_predicate",
predicate_tdd: sub_tdd_map
]
{var_map, new_visited}
{5, :d_tail, sub_tdd_id, nil} ->
{sub_tdd_map, new_visited} = do_build_node_map(sub_tdd_id, visited, store_module)
var_map = [
type: "list_tail_predicate",
predicate_tdd: sub_tdd_map
]
{var_map, new_visited}
# --- Simple variables (no nested TDDs) ---
{0, :is_atom, _, _} ->
{[type: "is_atom"], visited}
{0, :is_integer, _, _} ->
{[type: "is_integer"], visited}
{0, :is_list, _, _} ->
{[type: "is_list"], visited}
{0, :is_tuple, _, _} ->
{[type: "is_tuple"], visited}
{1, :value, atom_val, _} ->
{[type: "atom_equals", value: atom_val], visited}
{2, :alt, n, _} ->
{[type: "integer_less_than", value: n], visited}
{2, :beq, n, _} ->
{[type: "integer_equals", value: n], visited}
{2, :cgt, n, _} ->
{[type: "integer_greater_than", value: n], visited}
{4, :a_size, size, _} ->
{[type: "tuple_size_equals", value: size], visited}
{5, :b_is_empty, _, _} ->
{[type: "list_is_empty"], visited}
# --- Fallback for any other format ---
_ ->
{[type: "unknown", value: inspect(var)], visited}
end
|> case do
{node, v} ->
n = Enum.sort_by(node, fn {k, _} -> Enum.find_index([:id, :value], &(&1 == k)) end)
# {Jason.OrderedObject.new(n), v}
{n, v}
end
end
end
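For context, a usage sketch of the tracer defined above. The `Demo` module is hypothetical and this has not been run against the project:

```elixir
# Wrap a module's defs with the tracing macros, then run a traced call.
defmodule Demo do
  use Tdd.Debug

  def add(a, b), do: a + b
  def double_sum(a, b), do: add(a, b) * 2
end

# `run/1` enables tracing for the current pid, executes the fun, and returns
# {outcome, trace_data}; it also writes the call tree to "trace.json".
{{:ok, _result}, trace} = Tdd.Debug.run(fn -> Demo.double_sum(1, 2) end)
# Each root node in `trace` carries :function, :args, :children and :result.
```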

lib/til.ex

File diff suppressed because it is too large
@ -193,6 +193,7 @@ defmodule Til.Parser do
case parse_atom_datum(source, state, parent_id) do
{:ok, node_id, rest, new_state} ->
{:ok, node_id, rest, new_state}
{:error, :not_atom} ->
# Failed to parse as a specific atom (e.g. ":foo").
# It could be a symbol that starts with ':' (e.g. if we allow ":" as a symbol).
@ -200,9 +201,16 @@ defmodule Til.Parser do
case parse_symbol_datum(source, state, parent_id) do
{:ok, node_id, rest, new_state} ->
{:ok, node_id, rest, new_state}
{:error, :not_symbol} ->
# If it started with ':' but wasn't a valid atom and also not a valid symbol
create_error_node_and_advance(source, state, parent_id, 1, "Unknown token starting with ':'")
create_error_node_and_advance(
source,
state,
parent_id,
1,
"Unknown token starting with ':'"
)
end
end
@ -212,18 +220,24 @@ defmodule Til.Parser do
case parse_integer_datum(source, state, parent_id) do
{:ok, node_id, rest, new_state} ->
{:ok, node_id, rest, new_state}
{:error, :not_integer} ->
# Not an integer, try parsing as a symbol
case parse_symbol_datum(source, state, parent_id) do
{:ok, node_id, rest, new_state} ->
{:ok, node_id, rest, new_state}
{:error, :not_symbol} ->
# Not a symbol either. Consume 1 char for the unknown token.
create_error_node_and_advance(source, state, parent_id, 1, "Unknown token")
end
end
end # end inner cond
end # end outer cond
end
# end inner cond
end
# end outer cond
end
# --- Datum Parsing Helpers --- (parse_string_datum, process_string_content)
@ -283,7 +297,7 @@ defmodule Til.Parser do
raw_token = opening_tick <> content_segment <> closing_tick
rest_of_source =
String.slice(source_after_opening_tick, (idx_closing_tick_in_segment + 1)..-1)
String.slice(source_after_opening_tick, (idx_closing_tick_in_segment + 1)..-1//1)
state_at_node_end = advance_pos(initial_state_for_token, raw_token)
@ -339,7 +353,7 @@ defmodule Til.Parser do
|> String.length()
spaces_to_remove = min(current_leading_spaces_count, strip_indent)
String.slice(line, spaces_to_remove..-1)
String.slice(line, spaces_to_remove..-1//1)
end)
all_processed_lines = [first_line | processed_rest_lines]
@ -356,9 +370,10 @@ defmodule Til.Parser do
# The colon itself is part of the atom's raw string.
# The `atom_name_part` is what comes after the colon.
case Regex.run(~r/^:([^\s\(\)\[\]\{\}]+)/, source) do
[raw_atom_str, atom_name_part] -> # raw_atom_str is like ":foo", atom_name_part is "foo"
# raw_atom_str is like ":foo", atom_name_part is "foo"
[raw_atom_str, atom_name_part] ->
# The regex [^...]+ ensures atom_name_part is not empty.
rest_after_atom = String.slice(source, String.length(raw_atom_str)..-1)
rest_after_atom = String.slice(source, String.length(raw_atom_str)..-1//1)
start_offset = state.offset
start_line = state.line
start_col = state.col
@ -387,9 +402,11 @@ defmodule Til.Parser do
line: end_line,
col: end_col
}
{:ok, new_node_id, rest_after_atom, final_state}
_ -> # No match (nil or list that doesn't conform, e.g., just ":" or ": followed by space/delimiter")
# No match (nil or list that doesn't conform, e.g., just ":" or ": followed by space/delimiter")
_ ->
{:error, :not_atom}
end
end
@ -426,7 +443,7 @@ defmodule Til.Parser do
# Regex excludes common delimiters. `m{` is handled before symbol parsing.
case Regex.run(~r/^([^\s\(\)\[\]\{\}]+)/, source) do
[raw_symbol | _] ->
rest_after_symbol = String.slice(source, String.length(raw_symbol)..-1)
rest_after_symbol = String.slice(source, String.length(raw_symbol)..-1//1)
start_offset = state.offset
start_line = state.line
start_col = state.col
@ -492,17 +509,18 @@ defmodule Til.Parser do
defp parse_s_expression(original_source_string, source, state, parent_id) do
# Standard S-expression parsing via parse_collection
result = parse_collection(
original_source_string,
source,
state,
parent_id,
"(",
")",
:s_expression,
"Unclosed S-expression",
"Error parsing element in S-expression. Content might be incomplete."
)
result =
parse_collection(
original_source_string,
source,
state,
parent_id,
"(",
")",
:s_expression,
"Unclosed S-expression",
"Error parsing element in S-expression. Content might be incomplete."
)
# After parsing, check if it's an 'fn' expression
case result do
@ -515,8 +533,7 @@ defmodule Til.Parser do
final_state = %{
state_after_collection
| nodes:
Map.put(state_after_collection.nodes, transformed_node.id, transformed_node)
| nodes: Map.put(state_after_collection.nodes, transformed_node.id, transformed_node)
}
{:ok, transformed_node.id, rest_after_collection, final_state}
@@ -548,7 +565,8 @@ defmodule Til.Parser do
# into a :lambda_expression node.
defp transform_to_lambda_expression(s_expr_node, nodes_map) do
# s_expr_node.children = [fn_symbol_id, params_s_expr_id, body_form1_id, ...]
_fn_symbol_id = Enum.at(s_expr_node.children, 0) # Already checked
# Already checked
_fn_symbol_id = Enum.at(s_expr_node.children, 0)
if length(s_expr_node.children) < 2 do
%{s_expr_node | parsing_error: "Malformed 'fn' expression: missing parameters list."}
@@ -557,7 +575,11 @@ defmodule Til.Parser do
params_s_expr_node = Map.get(nodes_map, params_s_expr_id)
if !(params_s_expr_node && params_s_expr_node.ast_node_type == :s_expression) do
Map.put(s_expr_node, :parsing_error, "Malformed 'fn' expression: parameters list is not an S-expression.")
Map.put(
s_expr_node,
:parsing_error,
"Malformed 'fn' expression: parameters list is not an S-expression."
)
else
# Children of the parameters S-expression, e.g. for (fn ((a integer) (b atom) atom) ...),
# param_s_expr_children_ids would be IDs of [(a integer), (b atom), atom]
@@ -579,33 +601,50 @@ defmodule Til.Parser do
all_arg_specs_valid =
Enum.all?(arg_spec_node_ids, fn arg_id ->
arg_node = Map.get(nodes_map, arg_id)
case arg_node do
%{ast_node_type: :symbol} -> true # e.g. x
%{ast_node_type: :s_expression, children: s_children} -> # e.g. (x integer)
# e.g. x
%{ast_node_type: :symbol} ->
true
# e.g. (x integer)
%{ast_node_type: :s_expression, children: s_children} ->
if length(s_children) == 2 do
param_sym_node = Map.get(nodes_map, hd(s_children))
type_spec_node = Map.get(nodes_map, hd(tl(s_children)))
param_sym_node && param_sym_node.ast_node_type == :symbol &&
type_spec_node && (type_spec_node.ast_node_type == :symbol || type_spec_node.ast_node_type == :s_expression)
type_spec_node &&
(type_spec_node.ast_node_type == :symbol ||
type_spec_node.ast_node_type == :s_expression)
else
false # Not a valid (param_symbol type_spec) structure
# Not a valid (param_symbol type_spec) structure
false
end
_ -> false # Not a symbol or valid S-expression for arg spec
# Not a symbol or valid S-expression for arg spec
_ ->
false
end
end)
# Validate return_type_spec_node_id: must be nil or a valid type specifier node
return_type_spec_valid =
if is_nil(return_type_spec_node_id) do
true # Inferred return type is valid
# Inferred return type is valid
true
else
ret_type_node = Map.get(nodes_map, return_type_spec_node_id)
ret_type_node && (ret_type_node.ast_node_type == :symbol || ret_type_node.ast_node_type == :s_expression)
ret_type_node &&
(ret_type_node.ast_node_type == :symbol ||
ret_type_node.ast_node_type == :s_expression)
end
if all_arg_specs_valid && return_type_spec_valid do
body_node_ids = Enum.drop(s_expr_node.children, 2) # Body starts after 'fn' and params_s_expr
# Body starts after 'fn' and params_s_expr
body_node_ids = Enum.drop(s_expr_node.children, 2)
Map.merge(s_expr_node, %{
:ast_node_type => :lambda_expression,
:params_s_expr_id => params_s_expr_id,
@@ -617,10 +656,17 @@ defmodule Til.Parser do
# Determine more specific error message
error_message =
cond do
!all_arg_specs_valid -> "Malformed 'fn' expression: invalid argument specification(s)."
!return_type_spec_valid -> "Malformed 'fn' expression: invalid return type specification."
true -> "Malformed 'fn' expression." # Generic fallback
!all_arg_specs_valid ->
"Malformed 'fn' expression: invalid argument specification(s)."
!return_type_spec_valid ->
"Malformed 'fn' expression: invalid return type specification."
# Generic fallback
true ->
"Malformed 'fn' expression."
end
Map.put(s_expr_node, :parsing_error, error_message)
end
end
@@ -931,7 +977,7 @@ defmodule Til.Parser do
[ws | _] = whitespace_match
new_offset = o + String.length(ws)
{new_line, new_col} = calculate_new_line_col(ws, l, c)
remaining_source = String.slice(source, String.length(ws)..-1)
remaining_source = String.slice(source, String.length(ws)..-1//1)
{:ok, remaining_source, %{state | offset: new_offset, line: new_line, col: new_col}}
else
if String.length(source) == 0 do
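The recurring `..-1` → `..-1//1` edits in this file track Elixir's stepped ranges: since Elixir 1.12 a range can carry an explicit step (`first..last//step`), and newer Elixir versions deprecate passing a negative-last range without a step to `String.slice/2`. A minimal sketch of the new form:

```elixir
# `2..-1//1` means "from index 2 through the last character, stepping by 1".
suffix = String.slice("hello", 2..-1//1)

# Equivalent without a range, using start index and length:
same = String.slice("hello", 2, String.length("hello") - 2)
```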

File diff suppressed because it is too large


@@ -1,146 +0,0 @@
defmodule Tilly.BDD do
@moduledoc """
Manages the BDD store, including hash-consing of BDD nodes.
The BDD store is expected to be part of a `typing_ctx` map under the key `:bdd_store`.
"""
alias Tilly.BDD.Node
@false_node_id 0
@true_node_id 1
@initial_next_node_id 2
@universal_ops_module :universal_ops
@doc """
Initializes the BDD store within the typing context.
Pre-interns canonical `false` and `true` BDD nodes.
"""
def init_bdd_store(typing_ctx) when is_map(typing_ctx) do
false_structure = Node.mk_false()
true_structure = Node.mk_true()
bdd_store = %{
nodes_by_structure: %{
{false_structure, @universal_ops_module} => @false_node_id,
{true_structure, @universal_ops_module} => @true_node_id
},
structures_by_id: %{
@false_node_id => %{structure: false_structure, ops_module: @universal_ops_module},
@true_node_id => %{structure: true_structure, ops_module: @universal_ops_module}
},
next_node_id: @initial_next_node_id,
ops_cache: %{} # Cache for BDD operations {op_key, id1, id2} -> result_id
}
Map.put(typing_ctx, :bdd_store, bdd_store)
end
@doc """
Gets an existing BDD node ID or interns a new one if it's not already in the store.
Returns a tuple `{new_typing_ctx, node_id}`.
The `typing_ctx` is updated if a new node is interned.
"""
def get_or_intern_node(typing_ctx, logical_structure, ops_module_atom) do
bdd_store = Map.get(typing_ctx, :bdd_store)
unless bdd_store do
raise ArgumentError, "BDD store not initialized in typing_ctx. Call init_bdd_store first."
end
key = {logical_structure, ops_module_atom}
case Map.get(bdd_store.nodes_by_structure, key) do
nil ->
# Node not found, intern it
node_id = bdd_store.next_node_id
new_nodes_by_structure = Map.put(bdd_store.nodes_by_structure, key, node_id)
node_data = %{structure: logical_structure, ops_module: ops_module_atom}
new_structures_by_id = Map.put(bdd_store.structures_by_id, node_id, node_data)
new_next_node_id = node_id + 1
new_bdd_store =
%{
bdd_store
| nodes_by_structure: new_nodes_by_structure,
structures_by_id: new_structures_by_id,
next_node_id: new_next_node_id
}
new_typing_ctx = Map.put(typing_ctx, :bdd_store, new_bdd_store)
{new_typing_ctx, node_id}
existing_node_id ->
# Node found
{typing_ctx, existing_node_id}
end
end
@doc """
Retrieves the node's structure and ops_module from the BDD store.
Returns `%{structure: logical_structure_tuple, ops_module: ops_module_atom}` or `nil` if not found.
"""
def get_node_data(typing_ctx, node_id) do
with %{bdd_store: %{structures_by_id: structures_by_id}} <- typing_ctx,
data when not is_nil(data) <- Map.get(structures_by_id, node_id) do
data
else
_ -> nil
end
end
@doc """
Checks if the given node ID corresponds to the canonical `false` BDD node.
"""
def is_false_node?(typing_ctx, node_id) do
# Optimized check for the predefined ID
if node_id == @false_node_id do
true
else
# Fallback for cases where a node might be structurally false but not have the canonical ID.
# This should ideally not happen with proper interning of Node.mk_false() via get_or_intern_node.
case get_node_data(typing_ctx, node_id) do
%{structure: structure, ops_module: @universal_ops_module} ->
structure == Node.mk_false()
_ ->
false
end
end
end
@doc """
Checks if the given node ID corresponds to the canonical `true` BDD node.
"""
def is_true_node?(typing_ctx, node_id) do
# Optimized check for the predefined ID
if node_id == @true_node_id do
true
else
# Fallback for cases where a node might be structurally true but not have the canonical ID.
case get_node_data(typing_ctx, node_id) do
%{structure: structure, ops_module: @universal_ops_module} ->
structure == Node.mk_true()
_ ->
false
end
end
end
@doc """
Returns the canonical ID for the `false` BDD node.
"""
def false_node_id(), do: @false_node_id
@doc """
Returns the canonical ID for the `true` BDD node.
"""
def true_node_id(), do: @true_node_id
@doc """
Returns the atom used as the `ops_module` for universal nodes like `true` and `false`.
"""
def universal_ops_module(), do: @universal_ops_module
end
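`get_or_intern_node/3` above is a classic hash-consing table: identical structures intern to the same ID, so structural equality checks collapse to integer comparison. A standalone sketch of the idea (module name and store shape are illustrative, not the deleted module's API):

```elixir
defmodule InternSketch do
  # A tiny hash-consing store: %{by_structure: %{structure => id}, next_id: n}.
  # IDs 0 and 1 are reserved, mirroring the canonical false/true nodes above.
  def new, do: %{by_structure: %{}, next_id: 2}

  # Returns {store, id}; interns the structure only if it is not already present.
  def get_or_intern(%{by_structure: by_s, next_id: next} = store, structure) do
    case by_s do
      %{^structure => id} ->
        {store, id}

      _ ->
        new_store = %{store | by_structure: Map.put(by_s, structure, next), next_id: next + 1}
        {new_store, next}
    end
  end
end

store = InternSketch.new()
{store, id1} = InternSketch.get_or_intern(store, {:leaf, :foo})
{_store, id2} = InternSketch.get_or_intern(store, {:leaf, :foo})
```

Interning the same structure twice hands back the same ID without growing the store.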


@@ -1,89 +0,0 @@
defmodule Tilly.BDD.AtomBoolOps do
@moduledoc """
BDD operations module for sets of atoms.
Elements are atoms, and leaf values are booleans.
"""
@doc """
Compares two atoms.
Returns `:lt`, `:eq`, or `:gt`.
"""
def compare_elements(elem1, elem2) when is_atom(elem1) and is_atom(elem2) do
cond do
elem1 < elem2 -> :lt
elem1 > elem2 -> :gt
true -> :eq
end
end
@doc """
Checks if two atoms are equal.
"""
def equal_element?(elem1, elem2) when is_atom(elem1) and is_atom(elem2) do
elem1 == elem2
end
@doc """
Hashes an atom.
"""
def hash_element(elem) when is_atom(elem) do
# erlang.phash2 is suitable for term hashing
:erlang.phash2(elem)
end
@doc """
The leaf value representing an empty set of atoms (false).
"""
def empty_leaf(), do: false
@doc """
The leaf value representing the universal set of atoms (true).
This is used if a BDD simplifies to a state where all atoms of this kind are included.
"""
def any_leaf(), do: true
@doc """
Checks if a leaf value represents an empty set.
"""
def is_empty_leaf?(leaf_val) when is_boolean(leaf_val) do
leaf_val == false
end
@doc """
Computes the union of two leaf values.
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
"""
def union_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
leaf1 or leaf2
end
@doc """
Computes the intersection of two leaf values.
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
"""
def intersection_leaves(_typing_ctx, leaf1, leaf2)
when is_boolean(leaf1) and is_boolean(leaf2) do
leaf1 and leaf2
end
@doc """
Computes the negation of a leaf value.
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
"""
def negation_leaf(_typing_ctx, leaf) when is_boolean(leaf) do
not leaf
end
# def difference_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
# leaf1 and (not leaf2)
# end
@doc """
Tests a leaf value to determine if it represents an empty, full, or other set.
Returns `:empty`, `:full`, or `:other`.
"""
def test_leaf_value(true), do: :full
def test_leaf_value(false), do: :empty
# Add a clause for other types if atoms could have non-boolean leaf values
# def test_leaf_value(_other), do: :other
end
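`compare_elements/2` leans on Erlang's total term order, under which atoms compare by their textual representation. A standalone sketch of the same three-way comparison (an anonymous function here, for illustration only):

```elixir
# Three-way comparison over any terms that support </>, mirroring the cond above.
compare = fn a, b ->
  cond do
    a < b -> :lt
    a > b -> :gt
    true -> :eq
  end
end
```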


@@ -1,87 +0,0 @@
defmodule Tilly.BDD.IntegerBoolOps do
@moduledoc """
BDD Operations module for BDDs where elements are integers and leaves are booleans.
"""
@doc """
Compares two integer elements.
Returns `:lt`, `:eq`, or `:gt`.
"""
def compare_elements(elem1, elem2) when is_integer(elem1) and is_integer(elem2) do
cond do
elem1 < elem2 -> :lt
elem1 > elem2 -> :gt
true -> :eq
end
end
@doc """
Checks if two integer elements are equal.
"""
def equal_element?(elem1, elem2) when is_integer(elem1) and is_integer(elem2) do
elem1 == elem2
end
@doc """
Hashes an integer element.
"""
def hash_element(elem) when is_integer(elem) do
elem
end
@doc """
Returns the leaf value representing emptiness (false).
"""
def empty_leaf(), do: false
@doc """
Returns the leaf value representing universality (true).
"""
def any_leaf(), do: true
@doc """
Checks if the leaf value represents emptiness.
"""
def is_empty_leaf?(leaf_val) when is_boolean(leaf_val) do
leaf_val == false
end
@doc """
Computes the union of two boolean leaf values.
The `_typing_ctx` is ignored for this simple ops module.
"""
def union_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
leaf1 or leaf2
end
@doc """
Computes the intersection of two boolean leaf values.
The `_typing_ctx` is ignored for this simple ops module.
"""
def intersection_leaves(_typing_ctx, leaf1, leaf2)
when is_boolean(leaf1) and is_boolean(leaf2) do
leaf1 and leaf2
end
@doc """
Computes the negation of a boolean leaf value.
The `_typing_ctx` is ignored for this simple ops module.
"""
def negation_leaf(_typing_ctx, leaf) when is_boolean(leaf) do
not leaf
end
# def difference_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
# leaf1 and (not leaf2)
# end
@doc """
Tests a leaf value to determine if it represents an empty, full, or other set.
For boolean leaves with integers, this mirrors AtomBoolOps and StringBoolOps.
Returns `:empty`, `:full`, or `:other`.
"""
def test_leaf_value(true), do: :full
def test_leaf_value(false), do: :empty
# If integer BDDs could have non-boolean leaves that are not empty/full:
# def test_leaf_value(_other_leaf_value), do: :other
end


@@ -1,124 +0,0 @@
defmodule Tilly.BDD.Node do
@moduledoc """
Defines the structure of BDD nodes and provides basic helper functions.
BDD nodes can be one of the following Elixir terms:
- `true`: Represents the universal set BDD.
- `false`: Represents the empty set BDD.
- `{:leaf, leaf_value_id}`: Represents a leaf node.
`leaf_value_id`'s interpretation depends on the specific BDD's `ops_module`.
- `{:split, element_id, positive_child_id, ignore_child_id, negative_child_id}`:
Represents an internal decision node.
`element_id` is the value being split upon.
`positive_child_id`, `ignore_child_id`, `negative_child_id` are IDs of other BDD nodes.
"""
@typedoc "A BDD node representing the universal set."
@type true_node :: true
@typedoc "A BDD node representing the empty set."
@type false_node :: false
@typedoc "A BDD leaf node."
@type leaf_node(leaf_value) :: {:leaf, leaf_value}
@typedoc "A BDD split node."
@type split_node(element, node_id) ::
{:split, element, node_id, node_id, node_id}
@typedoc "Any valid BDD node structure."
@type t(element, leaf_value, node_id) ::
true_node()
| false_node()
| leaf_node(leaf_value)
| split_node(element, node_id)
# --- Smart Constructors (Low-Level) ---
@doc "Creates a true BDD node."
@spec mk_true() :: true_node()
def mk_true, do: true
@doc "Creates a false BDD node."
@spec mk_false() :: false_node()
def mk_false, do: false
@doc "Creates a leaf BDD node."
@spec mk_leaf(leaf_value :: any()) :: leaf_node(any())
def mk_leaf(leaf_value_id), do: {:leaf, leaf_value_id}
@doc "Creates a split BDD node."
@spec mk_split(
element_id :: any(),
positive_child_id :: any(),
ignore_child_id :: any(),
negative_child_id :: any()
) :: split_node(any(), any())
def mk_split(element_id, positive_child_id, ignore_child_id, negative_child_id) do
{:split, element_id, positive_child_id, ignore_child_id, negative_child_id}
end
# --- Predicates ---
@doc "Checks if the node is a true node."
@spec is_true?(node :: t(any(), any(), any())) :: boolean()
def is_true?(true), do: true
def is_true?(_other), do: false
@doc "Checks if the node is a false node."
@spec is_false?(node :: t(any(), any(), any())) :: boolean()
def is_false?(false), do: true
def is_false?(_other), do: false
@doc "Checks if the node is a leaf node."
@spec is_leaf?(node :: t(any(), any(), any())) :: boolean()
def is_leaf?({:leaf, _value}), do: true
def is_leaf?(_other), do: false
@doc "Checks if the node is a split node."
@spec is_split?(node :: t(any(), any(), any())) :: boolean()
def is_split?({:split, _el, _p, _i, _n}), do: true
def is_split?(_other), do: false
# --- Accessors ---
@doc """
Returns the value of a leaf node.
Raises an error if the node is not a leaf node.
"""
@spec value(leaf_node :: leaf_node(any())) :: any()
def value({:leaf, value_id}), do: value_id
def value(other), do: raise(ArgumentError, "Not a leaf node: #{inspect(other)}")
@doc """
Returns the element of a split node.
Raises an error if the node is not a split node.
"""
@spec element(split_node :: split_node(any(), any())) :: any()
def element({:split, element_id, _, _, _}), do: element_id
def element(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")
@doc """
Returns the positive child ID of a split node.
Raises an error if the node is not a split node.
"""
@spec positive_child(split_node :: split_node(any(), any())) :: any()
def positive_child({:split, _, p_child_id, _, _}), do: p_child_id
def positive_child(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")
@doc """
Returns the ignore child ID of a split node.
Raises an error if the node is not a split node.
"""
@spec ignore_child(split_node :: split_node(any(), any())) :: any()
def ignore_child({:split, _, _, i_child_id, _}), do: i_child_id
def ignore_child(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")
@doc """
Returns the negative child ID of a split node.
Raises an error if the node is not a split node.
"""
@spec negative_child(split_node :: split_node(any(), any())) :: any()
def negative_child({:split, _, _, _, n_child_id}), do: n_child_id
def negative_child(other), do: raise(ArgumentError, "Not a split node: #{inspect(other)}")
end
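The node shapes above are plain tuples, so the accessors are just positional pattern matches. A quick sketch (element and child IDs are made-up values):

```elixir
# A split node: {:split, element, positive_child_id, ignore_child_id, negative_child_id}.
node = {:split, :my_element, 1, 0, 0}

# Destructuring recovers each field positionally, as the accessors above do.
{:split, element, pos, ign, neg} = node
```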


@@ -1,347 +0,0 @@
defmodule Tilly.BDD.Ops do
@moduledoc """
Generic BDD algorithms and smart constructors.
These functions operate on BDD node IDs and use an `ops_module`
to dispatch to specific element/leaf operations.
"""
alias Tilly.BDD
alias Tilly.BDD.Node
@doc """
Smart constructor for leaf nodes.
Uses the `ops_module` to test if the `leaf_value` corresponds to
an empty or universal set for that module.
Returns `{new_typing_ctx, node_id}`.
"""
def leaf(typing_ctx, leaf_value, ops_module) do
case apply(ops_module, :test_leaf_value, [leaf_value]) do
:empty ->
{typing_ctx, BDD.false_node_id()}
:full ->
{typing_ctx, BDD.true_node_id()}
:other ->
logical_structure = Node.mk_leaf(leaf_value)
BDD.get_or_intern_node(typing_ctx, logical_structure, ops_module)
end
end
@doc """
Smart constructor for split nodes. Applies simplification rules.
Returns `{new_typing_ctx, node_id}`.
"""
def split(typing_ctx, element, p_id, i_id, n_id, ops_module) do
# Apply simplification rules. Order can be important.
cond do
# If ignore and negative children are False, result is positive child.
BDD.is_false_node?(typing_ctx, i_id) and
BDD.is_false_node?(typing_ctx, n_id) ->
{typing_ctx, p_id}
# If ignore child is True, the whole BDD is True.
BDD.is_true_node?(typing_ctx, i_id) ->
{typing_ctx, BDD.true_node_id()}
# If positive and negative children are the same.
p_id == n_id ->
if p_id == i_id do
# All three children are identical.
{typing_ctx, p_id}
else
# Result is p_id (or n_id) unioned with i_id.
# This creates a potential mutual recursion with union_bdds
# which needs to be handled by the apply_op cache.
union_bdds(typing_ctx, p_id, i_id)
end
# TODO: Add more simplification rules from CDuce bdd.ml `split` as needed.
# e.g. if p=T, i=F, n=T -> True
# e.g. if p=F, i=F, n=T -> not(x) relative to this BDD's element universe (complex)
true ->
# No further simplification rule applied, intern the node.
logical_structure = Node.mk_split(element, p_id, i_id, n_id)
BDD.get_or_intern_node(typing_ctx, logical_structure, ops_module)
end
end
@doc """
Computes the union of two BDDs.
Returns `{new_typing_ctx, result_node_id}`.
"""
def union_bdds(typing_ctx, bdd1_id, bdd2_id) do
apply_op(typing_ctx, :union, bdd1_id, bdd2_id)
end
@doc """
Computes the intersection of two BDDs.
Returns `{new_typing_ctx, result_node_id}`.
"""
def intersection_bdds(typing_ctx, bdd1_id, bdd2_id) do
apply_op(typing_ctx, :intersection, bdd1_id, bdd2_id)
end
@doc """
Computes the negation of a BDD.
Returns `{new_typing_ctx, result_node_id}`.
"""
def negation_bdd(typing_ctx, bdd_id) do
# The second argument to apply_op is nil for unary operations like negation.
apply_op(typing_ctx, :negation, bdd_id, nil)
end
@doc """
Computes the difference of two BDDs (bdd1 - bdd2).
Returns `{new_typing_ctx, result_node_id}`.
Implemented as `bdd1 INTERSECTION (NEGATION bdd2)`.
"""
def difference_bdd(typing_ctx, bdd1_id, bdd2_id) do
{ctx, neg_bdd2_id} = negation_bdd(typing_ctx, bdd2_id)
intersection_bdds(ctx, bdd1_id, neg_bdd2_id)
end
# Internal function to handle actual BDD operations, bypassing cache for direct calls.
defp do_union_bdds(typing_ctx, bdd1_id, bdd2_id) do
# Ensure canonical order for commutative operations if not handled by apply_op key
# For simplicity, apply_op will handle canonical key generation.
# 1. Handle terminal cases
cond do
bdd1_id == bdd2_id -> {typing_ctx, bdd1_id}
BDD.is_true_node?(typing_ctx, bdd1_id) -> {typing_ctx, BDD.true_node_id()}
BDD.is_true_node?(typing_ctx, bdd2_id) -> {typing_ctx, BDD.true_node_id()}
BDD.is_false_node?(typing_ctx, bdd1_id) -> {typing_ctx, bdd2_id}
BDD.is_false_node?(typing_ctx, bdd2_id) -> {typing_ctx, bdd1_id}
true -> perform_union(typing_ctx, bdd1_id, bdd2_id)
end
end
defp perform_union(typing_ctx, bdd1_id, bdd2_id) do
%{structure: s1, ops_module: ops_m1} = BDD.get_node_data(typing_ctx, bdd1_id)
%{structure: s2, ops_module: ops_m2} = BDD.get_node_data(typing_ctx, bdd2_id)
# For now, assume ops_modules must match for simplicity.
# Production systems might need more complex logic or type errors here.
if ops_m1 != ops_m2 do
raise ArgumentError,
"Cannot union BDDs with different ops_modules: #{inspect(ops_m1)} and #{inspect(ops_m2)}"
end
ops_m = ops_m1
case {s1, s2} do
# Both are leaves
{{:leaf, v1}, {:leaf, v2}} ->
new_leaf_val = apply(ops_m, :union_leaves, [typing_ctx, v1, v2])
leaf(typing_ctx, new_leaf_val, ops_m)
# s1 is split, s2 is leaf
{{:split, x1, p1_id, i1_id, n1_id}, {:leaf, _v2}} ->
# CDuce: split x1 p1 (i1 ++ b) n1
{ctx, new_i1_id} = union_bdds(typing_ctx, i1_id, bdd2_id)
split(ctx, x1, p1_id, new_i1_id, n1_id, ops_m)
# s1 is leaf, s2 is split
{{:leaf, _v1}, {:split, x2, p2_id, i2_id, n2_id}} ->
# CDuce: split x2 p2 (i2 ++ a) n2 (symmetric to above)
{ctx, new_i2_id} = union_bdds(typing_ctx, i2_id, bdd1_id)
split(ctx, x2, p2_id, new_i2_id, n2_id, ops_m)
# Both are splits
{{:split, x1, p1_id, i1_id, n1_id}, {:split, x2, p2_id, i2_id, n2_id}} ->
# Compare elements using the ops_module
comp_result = apply(ops_m, :compare_elements, [x1, x2])
cond do
comp_result == :eq ->
# Elements are equal, merge children
{ctx0, new_p_id} = union_bdds(typing_ctx, p1_id, p2_id)
{ctx1, new_i_id} = union_bdds(ctx0, i1_id, i2_id)
{ctx2, new_n_id} = union_bdds(ctx1, n1_id, n2_id)
split(ctx2, x1, new_p_id, new_i_id, new_n_id, ops_m)
comp_result == :lt ->
# x1 < x2
# CDuce: split x1 p1 (i1 ++ b) n1
{ctx, new_i1_id} = union_bdds(typing_ctx, i1_id, bdd2_id)
split(ctx, x1, p1_id, new_i1_id, n1_id, ops_m)
comp_result == :gt ->
# x1 > x2
# CDuce: split x2 p2 (i2 ++ a) n2
{ctx, new_i2_id} = union_bdds(typing_ctx, i2_id, bdd1_id)
split(ctx, x2, p2_id, new_i2_id, n2_id, ops_m)
end
end
end
defp do_intersection_bdds(typing_ctx, bdd1_id, bdd2_id) do
# Canonical order handled by apply_op key generation.
# Fast path for disjoint singleton BDDs
case {BDD.get_node_data(typing_ctx, bdd1_id), BDD.get_node_data(typing_ctx, bdd2_id)} do
{%{structure: {:split, x1, t, f, f}, ops_module: m},
%{structure: {:split, x2, t, f, f}, ops_module: m}}
when x1 != x2 ->
{typing_ctx, BDD.false_node_id()}
_ ->
# 1. Handle terminal cases
cond do
bdd1_id == bdd2_id -> {typing_ctx, bdd1_id}
BDD.is_false_node?(typing_ctx, bdd1_id) -> {typing_ctx, BDD.false_node_id()}
BDD.is_false_node?(typing_ctx, bdd2_id) -> {typing_ctx, BDD.false_node_id()}
BDD.is_true_node?(typing_ctx, bdd1_id) -> {typing_ctx, bdd2_id}
BDD.is_true_node?(typing_ctx, bdd2_id) -> {typing_ctx, bdd1_id}
true -> perform_intersection(typing_ctx, bdd1_id, bdd2_id)
end
end
end
defp perform_intersection(typing_ctx, bdd1_id, bdd2_id) do
%{structure: s1, ops_module: ops_m1} = BDD.get_node_data(typing_ctx, bdd1_id)
%{structure: s2, ops_module: ops_m2} = BDD.get_node_data(typing_ctx, bdd2_id)
if ops_m1 != ops_m2 do
raise ArgumentError,
"Cannot intersect BDDs with different ops_modules: #{inspect(ops_m1)} and #{inspect(ops_m2)}"
end
ops_m = ops_m1
case {s1, s2} do
# Both are leaves
{{:leaf, v1}, {:leaf, v2}} ->
new_leaf_val = apply(ops_m, :intersection_leaves, [typing_ctx, v1, v2])
leaf(typing_ctx, new_leaf_val, ops_m)
# s1 is split, s2 is leaf
{{:split, x1, p1_id, i1_id, n1_id}, {:leaf, _v2}} ->
{ctx0, new_p1_id} = intersection_bdds(typing_ctx, p1_id, bdd2_id)
{ctx1, new_i1_id} = intersection_bdds(ctx0, i1_id, bdd2_id)
{ctx2, new_n1_id} = intersection_bdds(ctx1, n1_id, bdd2_id)
split(ctx2, x1, new_p1_id, new_i1_id, new_n1_id, ops_m)
# s1 is leaf, s2 is split
{{:leaf, _v1}, {:split, x2, p2_id, i2_id, n2_id}} ->
{ctx0, new_p2_id} = intersection_bdds(typing_ctx, bdd1_id, p2_id)
{ctx1, new_i2_id} = intersection_bdds(ctx0, bdd1_id, i2_id)
{ctx2, new_n2_id} = intersection_bdds(ctx1, bdd1_id, n2_id)
split(ctx2, x2, new_p2_id, new_i2_id, new_n2_id, ops_m)
# Both are splits
{{:split, x1, p1_id, i1_id, n1_id}, {:split, x2, p2_id, i2_id, n2_id}} ->
comp_result = apply(ops_m, :compare_elements, [x1, x2])
cond do
comp_result == :eq ->
# CDuce: split x1 ((p1**(p2++i2))++(p2**i1)) (i1**i2) ((n1**(n2++i2))++(n2**i1))
{ctx0, p2_u_i2} = union_bdds(typing_ctx, p2_id, i2_id)
{ctx1, n2_u_i2} = union_bdds(ctx0, n2_id, i2_id)
{ctx2, p1_i_p2ui2} = intersection_bdds(ctx1, p1_id, p2_u_i2)
{ctx3, p2_i_i1} = intersection_bdds(ctx2, p2_id, i1_id)
{ctx4, new_p_id} = union_bdds(ctx3, p1_i_p2ui2, p2_i_i1)
{ctx5, new_i_id} = intersection_bdds(ctx4, i1_id, i2_id)
{ctx6, n1_i_n2ui2} = intersection_bdds(ctx5, n1_id, n2_u_i2)
{ctx7, n2_i_i1} = intersection_bdds(ctx6, n2_id, i1_id)
{ctx8, new_n_id} = union_bdds(ctx7, n1_i_n2ui2, n2_i_i1)
split(ctx8, x1, new_p_id, new_i_id, new_n_id, ops_m)
# x1 < x2
comp_result == :lt ->
# CDuce: split x1 (p1 ** b) (i1 ** b) (n1 ** b) where b is bdd2
{ctx0, new_p1_id} = intersection_bdds(typing_ctx, p1_id, bdd2_id)
{ctx1, new_i1_id} = intersection_bdds(ctx0, i1_id, bdd2_id)
{ctx2, new_n1_id} = intersection_bdds(ctx1, n1_id, bdd2_id)
split(ctx2, x1, new_p1_id, new_i1_id, new_n1_id, ops_m)
# x1 > x2
comp_result == :gt ->
# CDuce: split x2 (a ** p2) (a ** i2) (a ** n2) where a is bdd1
{ctx0, new_p2_id} = intersection_bdds(typing_ctx, bdd1_id, p2_id)
{ctx1, new_i2_id} = intersection_bdds(ctx0, bdd1_id, i2_id)
{ctx2, new_n2_id} = intersection_bdds(ctx1, bdd1_id, n2_id)
split(ctx2, x2, new_p2_id, new_i2_id, new_n2_id, ops_m)
end
end
end
defp do_negation_bdd(typing_ctx, bdd_id) do
# 1. Handle terminal cases
cond do
BDD.is_true_node?(typing_ctx, bdd_id) -> {typing_ctx, BDD.false_node_id()}
BDD.is_false_node?(typing_ctx, bdd_id) -> {typing_ctx, BDD.true_node_id()}
true -> perform_negation(typing_ctx, bdd_id)
end
end
defp perform_negation(typing_ctx, bdd_id) do
%{structure: s, ops_module: ops_m} = BDD.get_node_data(typing_ctx, bdd_id)
case s do
# Leaf
{:leaf, v} ->
neg_leaf_val = apply(ops_m, :negation_leaf, [typing_ctx, v])
leaf(typing_ctx, neg_leaf_val, ops_m)
# Split
{:split, x, p_id, i_id, n_id} ->
# CDuce: ~~i ** split x (~~p) (~~(p++n)) (~~n)
{ctx0, neg_i_id} = negation_bdd(typing_ctx, i_id)
{ctx1, neg_p_id} = negation_bdd(ctx0, p_id)
{ctx2, p_u_n_id} = union_bdds(ctx1, p_id, n_id)
{ctx3, neg_p_u_n_id} = negation_bdd(ctx2, p_u_n_id)
{ctx4, neg_n_id} = negation_bdd(ctx3, n_id)
{ctx5, split_part_id} = split(ctx4, x, neg_p_id, neg_p_u_n_id, neg_n_id, ops_m)
intersection_bdds(ctx5, neg_i_id, split_part_id)
end
end
# --- Caching Wrapper for BDD Operations ---
defp apply_op(typing_ctx, op_key, bdd1_id, bdd2_id) do
cache_key = make_cache_key(op_key, bdd1_id, bdd2_id)
bdd_store = Map.get(typing_ctx, :bdd_store)
case Map.get(bdd_store.ops_cache, cache_key) do
nil ->
# Not in cache, compute it
{new_typing_ctx, result_id} =
case op_key do
:union -> do_union_bdds(typing_ctx, bdd1_id, bdd2_id)
:intersection -> do_intersection_bdds(typing_ctx, bdd1_id, bdd2_id)
# bdd2_id is nil here
:negation -> do_negation_bdd(typing_ctx, bdd1_id)
_ -> raise "Unsupported op_key: #{op_key}"
end
# Store in cache
# IMPORTANT: Use new_typing_ctx (from the operation) to get the potentially updated bdd_store
current_bdd_store_after_op = Map.get(new_typing_ctx, :bdd_store)
new_ops_cache = Map.put(current_bdd_store_after_op.ops_cache, cache_key, result_id)
final_bdd_store_with_cache = %{current_bdd_store_after_op | ops_cache: new_ops_cache}
# And put this updated bdd_store back into new_typing_ctx
final_typing_ctx_with_cache =
Map.put(new_typing_ctx, :bdd_store, final_bdd_store_with_cache)
{final_typing_ctx_with_cache, result_id}
cached_result_id ->
{typing_ctx, cached_result_id}
end
end
defp make_cache_key(:negation, bdd_id, nil), do: {:negation, bdd_id}
defp make_cache_key(op_key, id1, id2) when op_key in [:union, :intersection] do
# Canonical order for commutative binary operations
if id1 <= id2, do: {op_key, id1, id2}, else: {op_key, id2, id1}
end
defp make_cache_key(op_key, id1, id2), do: {op_key, id1, id2}
end
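`difference_bdd/3` computes `a − b` as `a ∩ ¬b`. Over finite sets with a known universe the same identity is easy to check directly; a sketch with `MapSet` (the universe `1..5` is an assumption for illustration):

```elixir
universe = MapSet.new(1..5)
a = MapSet.new([1, 2, 3])
b = MapSet.new([2, 3, 4])

# Negation is complement relative to the universe.
neg_b = MapSet.difference(universe, b)

# a ∩ ¬b should equal the direct difference a − b.
via_identity = MapSet.intersection(a, neg_b)
direct = MapSet.difference(a, b)
```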


@@ -1,87 +0,0 @@
defmodule Tilly.BDD.StringBoolOps do
@moduledoc """
BDD operations module for sets of strings.
Elements are strings, and leaf values are booleans.
"""
@doc """
Compares two strings.
Returns `:lt`, `:eq`, or `:gt`.
"""
def compare_elements(elem1, elem2) when is_binary(elem1) and is_binary(elem2) do
cond do
elem1 < elem2 -> :lt
elem1 > elem2 -> :gt
true -> :eq
end
end
@doc """
Checks if two strings are equal.
"""
def equal_element?(elem1, elem2) when is_binary(elem1) and is_binary(elem2) do
elem1 == elem2
end
@doc """
Hashes a string.
"""
def hash_element(elem) when is_binary(elem) do
# erlang.phash2 is suitable for term hashing
:erlang.phash2(elem)
end
@doc """
The leaf value representing an empty set of strings (false).
"""
def empty_leaf(), do: false
@doc """
The leaf value representing the universal set of strings (true).
"""
def any_leaf(), do: true
@doc """
Checks if a leaf value represents an empty set.
"""
def is_empty_leaf?(leaf_val) when is_boolean(leaf_val) do
leaf_val == false
end
@doc """
Computes the union of two leaf values.
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
"""
def union_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
leaf1 or leaf2
end
@doc """
Computes the intersection of two leaf values.
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
"""
def intersection_leaves(_typing_ctx, leaf1, leaf2)
when is_boolean(leaf1) and is_boolean(leaf2) do
leaf1 and leaf2
end
@doc """
Computes the negation of a leaf value.
`typing_ctx` is included for interface consistency, but not used for boolean leaves.
"""
def negation_leaf(_typing_ctx, leaf) when is_boolean(leaf) do
not leaf
end
# def difference_leaves(_typing_ctx, leaf1, leaf2) when is_boolean(leaf1) and is_boolean(leaf2) do
# leaf1 and (not leaf2)
# end
@doc """
Tests a leaf value to determine if it represents an empty, full, or other set.
Returns `:empty`, `:full`, or `:other`.
"""
def test_leaf_value(true), do: :full
def test_leaf_value(false), do: :empty
# def test_leaf_value(_other), do: :other
end


@@ -1,57 +0,0 @@
defmodule Tilly.Type do
@moduledoc """
Defines the structure of a Type Descriptor (`Descr`) and provides
helper functions for creating fundamental type descriptors.
A Type Descriptor is a map representing a type. Each field in the map
corresponds to a basic kind of type component (e.g., atoms, integers, pairs)
and holds a BDD node ID. These BDDs represent the set of values
allowed for that particular component of the type.
"""
alias Tilly.BDD
@doc """
Returns a `Descr` map representing the empty type (Nothing).
All BDD IDs in this `Descr` point to the canonical `false` BDD node.
The `typing_ctx` is passed for consistency but not modified by this function.
"""
def empty_descr(_typing_ctx) do
false_id = BDD.false_node_id()
%{
atoms_bdd_id: false_id,
integers_bdd_id: false_id,
strings_bdd_id: false_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
# Add other kinds as needed, e.g., for abstract types
}
end
@doc """
Returns a `Descr` map representing the universal type (Any).
All BDD IDs in this `Descr` point to the canonical `true` BDD node.
The `typing_ctx` is passed for consistency but not modified by this function.
"""
def any_descr(_typing_ctx) do
true_id = BDD.true_node_id()
%{
atoms_bdd_id: true_id,
integers_bdd_id: true_id,
strings_bdd_id: true_id,
pairs_bdd_id: true_id,
records_bdd_id: true_id,
functions_bdd_id: true_id,
# For 'Any', absence is typically not included unless explicitly modeled.
# If 'Any' should include the possibility of absence, this would be true_id.
# For now, let's assume 'Any' means any *value*, so absence is false.
# This can be refined based on the desired semantics of 'Any'.
# CDuce 'Any' does not include 'Absent'.
absent_marker_bdd_id: BDD.false_node_id()
}
end
end

@@ -1,305 +0,0 @@
defmodule Tilly.Type.Ops do
@moduledoc """
Implements set-theoretic operations on Type Descriptors (`Descr` maps)
and provides helper functions for constructing specific types.
Operations work with interned `Descr` IDs.
"""
alias Tilly.BDD
alias Tilly.Type
alias Tilly.Type.Store
# Defines the fields in a Descr map that hold BDD IDs.
# Order can be relevant if specific iteration order is ever needed, but for field-wise ops it's not.
defp descr_fields do
[
:atoms_bdd_id,
:integers_bdd_id,
:strings_bdd_id,
:pairs_bdd_id,
:records_bdd_id,
:functions_bdd_id,
:absent_marker_bdd_id
]
end
# --- Core Set Operations ---
@doc """
Computes the union of two types represented by their `Descr` IDs.
Returns `{new_typing_ctx, result_descr_id}`.
"""
def union_types(typing_ctx, descr1_id, descr2_id) do
apply_type_op(typing_ctx, :union, descr1_id, descr2_id)
end
@doc """
Computes the intersection of two types represented by their `Descr` IDs.
Returns `{new_typing_ctx, result_descr_id}`.
"""
def intersection_types(typing_ctx, descr1_id, descr2_id) do
apply_type_op(typing_ctx, :intersection, descr1_id, descr2_id)
end
@doc """
Computes the negation of a type represented by its `Descr` ID.
Returns `{new_typing_ctx, result_descr_id}`.
"""
def negation_type(typing_ctx, descr_id) do
apply_type_op(typing_ctx, :negation, descr_id, nil)
end
defp do_union_types(typing_ctx, descr1_id, descr2_id) do
descr1 = Store.get_descr_by_id(typing_ctx, descr1_id)
descr2 = Store.get_descr_by_id(typing_ctx, descr2_id)
{final_ctx, result_fields_map} =
Enum.reduce(descr_fields(), {typing_ctx, %{}}, fn field, {current_ctx, acc_fields} ->
bdd1_id = Map.get(descr1, field)
bdd2_id = Map.get(descr2, field)
{new_ctx, result_bdd_id} = BDD.Ops.union_bdds(current_ctx, bdd1_id, bdd2_id)
{new_ctx, Map.put(acc_fields, field, result_bdd_id)}
end)
Store.get_or_intern_descr(final_ctx, result_fields_map)
end
defp do_intersection_types(typing_ctx, descr1_id, descr2_id) do
descr1 = Store.get_descr_by_id(typing_ctx, descr1_id)
descr2 = Store.get_descr_by_id(typing_ctx, descr2_id)
{final_ctx, result_fields_map} =
Enum.reduce(descr_fields(), {typing_ctx, %{}}, fn field, {current_ctx, acc_fields} ->
bdd1_id = Map.get(descr1, field)
bdd2_id = Map.get(descr2, field)
{new_ctx, result_bdd_id} = BDD.Ops.intersection_bdds(current_ctx, bdd1_id, bdd2_id)
{new_ctx, Map.put(acc_fields, field, result_bdd_id)}
end)
Store.get_or_intern_descr(final_ctx, result_fields_map)
end
defp do_negation_type(typing_ctx, descr_id) do
descr = Store.get_descr_by_id(typing_ctx, descr_id)
{final_ctx, result_fields_map} =
Enum.reduce(descr_fields(), {typing_ctx, %{}}, fn field, {current_ctx, acc_fields} ->
bdd_id = Map.get(descr, field)
{ctx_after_neg, result_bdd_id} =
if field == :absent_marker_bdd_id do
{current_ctx, BDD.false_node_id()}
else
BDD.Ops.negation_bdd(current_ctx, bdd_id)
end
{ctx_after_neg, Map.put(acc_fields, field, result_bdd_id)}
end)
# Re-evaluate context threading if BDD ops significantly alter it beyond caching during reduce.
# The primary context update happens with Store.get_or_intern_descr.
# The reduce passes current_ctx, which accumulates cache updates from BDD ops.
Store.get_or_intern_descr(final_ctx, result_fields_map)
end
# --- Caching Wrapper for Type Operations ---
defp apply_type_op(typing_ctx, op_key, descr1_id, descr2_id) do
cache_key = make_type_op_cache_key(op_key, descr1_id, descr2_id)
type_store = Map.get(typing_ctx, :type_store)
case Map.get(type_store.ops_cache, cache_key) do
nil ->
# Not in cache, compute it
{new_typing_ctx, result_id} =
case op_key do
:union -> do_union_types(typing_ctx, descr1_id, descr2_id)
:intersection -> do_intersection_types(typing_ctx, descr1_id, descr2_id)
:negation -> do_negation_type(typing_ctx, descr1_id) # descr2_id is nil here
_ -> raise "Unsupported type op_key: #{op_key}"
end
# Store in cache (important: use new_typing_ctx to get potentially updated type_store)
current_type_store_after_op = Map.get(new_typing_ctx, :type_store)
new_ops_cache = Map.put(current_type_store_after_op.ops_cache, cache_key, result_id)
final_type_store_with_cache = %{current_type_store_after_op | ops_cache: new_ops_cache}
# And put this updated type_store back into new_typing_ctx
final_typing_ctx_with_cache = Map.put(new_typing_ctx, :type_store, final_type_store_with_cache)
{final_typing_ctx_with_cache, result_id}
cached_result_id ->
{typing_ctx, cached_result_id}
end
end
defp make_type_op_cache_key(:negation, descr_id, nil), do: {:negation, descr_id}
defp make_type_op_cache_key(op_key, id1, id2) when op_key in [:union, :intersection] do
if id1 <= id2, do: {op_key, id1, id2}, else: {op_key, id2, id1}
end
defp make_type_op_cache_key(op_key, id1, id2), do: {op_key, id1, id2}
# --- Utility Functions ---
@doc """
Checks if a type represented by its `Descr` ID is the empty type (Nothing).
Does not modify `typing_ctx`.
"""
def is_empty_type?(typing_ctx, descr_id) do
descr_map = Store.get_descr_by_id(typing_ctx, descr_id)
Enum.all?(descr_fields(), fn field ->
bdd_id = Map.get(descr_map, field)
BDD.is_false_node?(typing_ctx, bdd_id)
end)
end
# --- Construction Helper Functions ---
@doc """
Gets the `Descr` ID for the canonical 'Nothing' type.
"""
def get_type_nothing(typing_ctx) do
empty_descr_map = Type.empty_descr(typing_ctx)
Store.get_or_intern_descr(typing_ctx, empty_descr_map)
end
@doc """
Gets the `Descr` ID for the canonical 'Any' type.
"""
def get_type_any(typing_ctx) do
any_descr_map = Type.any_descr(typing_ctx)
Store.get_or_intern_descr(typing_ctx, any_descr_map)
end
@doc """
Creates a type `Descr` ID representing a single atom literal.
"""
def create_atom_literal_type(typing_ctx, atom_value) when is_atom(atom_value) do
false_id = BDD.false_node_id()
true_id = BDD.true_node_id()
# Create a BDD for the single atom: Split(atom_value, True, False, False)
# The ops_module Tilly.BDD.AtomBoolOps is crucial here.
{ctx1, atom_bdd_id} =
BDD.Ops.split(typing_ctx, atom_value, true_id, false_id, false_id, Tilly.BDD.AtomBoolOps)
descr_map = %{
atoms_bdd_id: atom_bdd_id,
integers_bdd_id: false_id,
strings_bdd_id: false_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
}
Store.get_or_intern_descr(ctx1, descr_map)
end
@doc """
Creates a type `Descr` ID representing a single integer literal.
"""
def create_integer_literal_type(typing_ctx, integer_value) when is_integer(integer_value) do
false_id = BDD.false_node_id()
true_id = BDD.true_node_id()
{ctx1, integer_bdd_id} =
BDD.Ops.split(typing_ctx, integer_value, true_id, false_id, false_id, Tilly.BDD.IntegerBoolOps)
descr_map = %{
atoms_bdd_id: false_id,
integers_bdd_id: integer_bdd_id,
strings_bdd_id: false_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
}
Store.get_or_intern_descr(ctx1, descr_map)
end
@doc """
Creates a type `Descr` ID representing a single string literal.
"""
def create_string_literal_type(typing_ctx, string_value) when is_binary(string_value) do
false_id = BDD.false_node_id()
true_id = BDD.true_node_id()
{ctx1, string_bdd_id} =
BDD.Ops.split(typing_ctx, string_value, true_id, false_id, false_id, Tilly.BDD.StringBoolOps)
descr_map = %{
atoms_bdd_id: false_id,
integers_bdd_id: false_id,
strings_bdd_id: string_bdd_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
}
Store.get_or_intern_descr(ctx1, descr_map)
end
@doc """
Gets the `Descr` ID for the type representing all atoms.
"""
def get_primitive_type_any_atom(typing_ctx) do
false_id = BDD.false_node_id()
true_id = BDD.true_node_id() # This BDD must be interned with :atom_bool_ops if it's not universal
# For a BDD representing "all atoms", its structure is simply True,
# but it must be associated with :atom_bool_ops.
# BDD.true_node_id() is universal. If we need a specific "true for atoms",
# we'd intern it: BDD.get_or_intern_node(ctx, Node.mk_true(), :atom_bool_ops)
# However, BDD.Ops functions fetch ops_module from operands.
# Universal true/false should work correctly.
descr_map = %{
atoms_bdd_id: true_id,
integers_bdd_id: false_id,
strings_bdd_id: false_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
}
Store.get_or_intern_descr(typing_ctx, descr_map)
end
@doc """
Gets the `Descr` ID for the type representing all integers.
"""
def get_primitive_type_any_integer(typing_ctx) do
false_id = BDD.false_node_id()
true_id = BDD.true_node_id()
descr_map = %{
atoms_bdd_id: false_id,
integers_bdd_id: true_id,
strings_bdd_id: false_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
}
Store.get_or_intern_descr(typing_ctx, descr_map)
end
@doc """
Gets the `Descr` ID for the type representing all strings.
"""
def get_primitive_type_any_string(typing_ctx) do
false_id = BDD.false_node_id()
true_id = BDD.true_node_id()
descr_map = %{
atoms_bdd_id: false_id,
integers_bdd_id: false_id,
strings_bdd_id: true_id,
pairs_bdd_id: false_id,
records_bdd_id: false_id,
functions_bdd_id: false_id,
absent_marker_bdd_id: false_id
}
Store.get_or_intern_descr(typing_ctx, descr_map)
end
end
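The order-normalized cache key for commutative operations can be tested standalone. This sketch mirrors `make_type_op_cache_key` under an illustrative module name: union and intersection get one cache entry regardless of operand order, while negation keys on a single ID:

```elixir
defmodule CacheKeySketch do
  # Negation is unary: the second operand slot is always nil.
  def key(:negation, descr_id, nil), do: {:negation, descr_id}

  # Commutative ops: sort the IDs so (a, b) and (b, a) produce the same key.
  def key(op, id1, id2) when op in [:union, :intersection] do
    if id1 <= id2, do: {op, id1, id2}, else: {op, id2, id1}
  end

  # Catch-all for non-commutative binary ops.
  def key(op, id1, id2), do: {op, id1, id2}
end
```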

@@ -1,79 +0,0 @@
defmodule Tilly.Type.Store do
@moduledoc """
Manages the interning (hash-consing) of Type Descriptor maps (`Descr` maps).
Ensures that for any unique `Descr` map, there is one canonical integer ID.
The type store is expected to be part of a `typing_ctx` map under the key `:type_store`.
"""
@initial_next_descr_id 0
@doc """
Initializes the type store within the typing context.
"""
def init_type_store(typing_ctx) when is_map(typing_ctx) do
type_store = %{
descrs_by_structure: %{},
structures_by_id: %{},
next_descr_id: @initial_next_descr_id,
ops_cache: %{} # Cache for type operations {op_key, descr_id1, descr_id2} -> result_descr_id
}
Map.put(typing_ctx, :type_store, type_store)
end
@doc """
Gets an existing Type Descriptor ID or interns a new one if it's not already in the store.
All BDD IDs within the `descr_map` must already be canonical integer IDs.
Returns a tuple `{new_typing_ctx, descr_id}`.
The `typing_ctx` is updated if a new `Descr` is interned.
"""
def get_or_intern_descr(typing_ctx, descr_map) do
type_store = Map.get(typing_ctx, :type_store)
unless type_store do
raise ArgumentError, "Type store not initialized in typing_ctx. Call init_type_store first."
end
# The descr_map itself is the key for interning.
# Assumes BDD IDs within descr_map are already canonical.
case Map.get(type_store.descrs_by_structure, descr_map) do
nil ->
# Descr not found, intern it
descr_id = type_store.next_descr_id
new_descrs_by_structure = Map.put(type_store.descrs_by_structure, descr_map, descr_id)
new_structures_by_id = Map.put(type_store.structures_by_id, descr_id, descr_map)
new_next_descr_id = descr_id + 1
new_type_store =
%{
type_store
| descrs_by_structure: new_descrs_by_structure,
structures_by_id: new_structures_by_id,
next_descr_id: new_next_descr_id
}
new_typing_ctx = Map.put(typing_ctx, :type_store, new_type_store)
{new_typing_ctx, descr_id}
existing_descr_id ->
# Descr found
{typing_ctx, existing_descr_id}
end
end
@doc """
Retrieves the `Descr` map from the type store given its ID.
Returns the `Descr` map or `nil` if not found.
"""
def get_descr_by_id(typing_ctx, descr_id) do
with %{type_store: %{structures_by_id: structures_by_id}} <- typing_ctx,
descr when not is_nil(descr) <- Map.get(structures_by_id, descr_id) do
descr
else
_ -> nil
end
end
end
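The hash-consing behavior of `get_or_intern_descr` can be sketched standalone (store field names here are illustrative, not the module's own): structurally equal `Descr` maps always intern to the same integer ID, and a new structure gets the next fresh ID:

```elixir
defmodule InternSketch do
  # A minimal interning store: map from structure to ID and back.
  def new, do: %{by_structure: %{}, by_id: %{}, next_id: 0}

  def get_or_intern(store, descr_map) do
    case Map.fetch(store.by_structure, descr_map) do
      {:ok, id} ->
        # Already interned: return the canonical ID unchanged.
        {store, id}

      :error ->
        # New structure: assign the next ID and record both directions.
        id = store.next_id

        store = %{
          store
          | by_structure: Map.put(store.by_structure, descr_map, id),
            by_id: Map.put(store.by_id, id, descr_map),
            next_id: id + 1
        }

        {store, id}
    end
  end
end
```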

@@ -26,6 +26,8 @@ defmodule Til.MixProject do
[
# {:dep_from_hexpm, "~> 0.3.0"},
# {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"}
{:jason, "~> 1.4"},
{:ex_unit_summary, "~> 0.1.0", only: [:dev, :test]}
]
end
end

mix.lock Normal file

@@ -0,0 +1,4 @@
%{
"ex_unit_summary": {:hex, :ex_unit_summary, "0.1.0", "7b0352afc5e6a933c805df0a539b66b392ac12ba74d8b208db7d83f77cb57049", [:mix], [], "hexpm", "8c87d0deade3657102902251d2ec60b5b94560004ce0e2c2fa5b466232716bd6"},
"jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
}

new.exs Normal file

File diff suppressed because it is too large.

output.md Normal file

@@ -0,0 +1,243 @@
--- Running Tdd.TypeSpec.normalize/1 Tests ---
--- Section: Base & Simple Types ---
[PASS] Normalizing :any is idempotent
[PASS] Normalizing :none is idempotent
[PASS] Normalizing :atom is idempotent
[PASS] Normalizing a literal is idempotent
--- Section: Double Negation ---
[PASS] ¬(¬atom) simplifies to atom
[PASS] A single negation is preserved
[PASS] ¬(¬(¬atom)) simplifies to ¬atom
--- Section: Union Normalization ---
[PASS] Flattens nested unions
[PASS] Sorts members of a union
[PASS] Removes duplicates in a union
[PASS] Simplifies a union with :none (A | none -> A)
[PASS] Simplifies a union with :any (A | any -> any)
[PASS] An empty union simplifies to :none
[PASS] A union containing only :none simplifies to :none
[PASS] A union of a single element simplifies to the element itself
--- Section: Intersection Normalization ---
[PASS] Flattens nested intersections
[PASS] Sorts members of an intersection
[PASS] Removes duplicates in an intersection
[PASS] Simplifies an intersection with :any (A & any -> A)
[PASS] Simplifies an intersection with :none (A & none -> none)
[PASS] An empty intersection simplifies to :any
[PASS] An intersection of a single element simplifies to the element itself
--- Section: Recursive Normalization ---
[PASS] Recursively normalizes elements in a tuple
[PASS] Recursively normalizes head and tail in a cons
[PASS] Recursively normalizes element in list_of
[PASS] Recursively normalizes sub-spec in negation
--- Section: Complex Nested Cases ---
[PASS] Handles complex nested simplifications correctly
✅ All TypeSpec tests passed!
--- Running Tdd.Store Tests ---
--- Section: Initialization and Terminals ---
[PASS] true_node_id returns 1
[PASS] false_node_id returns 0
[PASS] get_node for ID 1 returns true_terminal
[PASS] get_node for ID 0 returns false_terminal
[PASS] get_node for unknown ID returns not_found
--- Section: Node Creation and Structural Sharing ---
[PASS] First created node gets ID 2
[PASS] get_node for ID 2 returns the correct tuple
[PASS] Second created node gets ID 3
[PASS] Attempting to create an existing node returns the same ID (Structural Sharing)
[PASS] Next new node gets the correct ID (4)
--- Section: Basic Reduction Rule ---
[PASS] A node with identical children reduces to the child's ID
--- Section: Caching ---
[PASS] Cache is initially empty for a key
[PASS] Cache returns the stored value after put
[PASS] Cache can be updated
✅ All Tdd.Store tests passed!
--- Running Tdd.Variable Tests ---
--- Section: Variable Structure ---
[PASS] v_is_atom returns correct tuple
[PASS] v_atom_eq returns correct tuple
[PASS] v_int_lt returns correct tuple
[PASS] v_tuple_size_eq returns correct tuple
[PASS] v_tuple_elem_pred nests a variable correctly
[PASS] v_list_is_empty returns correct tuple
[PASS] v_list_head_pred nests a variable correctly
--- Section: Global Ordering (Based on Elixir Term Comparison) ---
[PASS] Primary type var < Atom property var
[PASS] Integer :lt var < Integer :eq var
[PASS] Integer :eq var < Integer :gt var
[PASS] Integer :eq(5) var < Integer :eq(10) var
[PASS] Tuple elem(0) var < Tuple elem(1) var
[PASS] Tuple elem(0, atom) var < Tuple elem(0, int) var
Variable.v_list_is_empty(): {5, :b_is_empty, nil, nil}
[PASS] List :b_is_empty var < List :c_head var
[PASS] List :c_head var < List :tail var
✅ All Tdd.Variable tests passed!
--- Running Tdd.Algo & Tdd.Consistency.Engine Tests ---
--- Section: Algo.negate ---
[PASS] negate(true) is false
[PASS] negate(false) is true
[PASS] negate(negate(t_atom)) is t_atom
--- Section: Algo.apply (raw structural operations) ---
[PASS] Structure of 'atom | int' is correct
[PASS] :foo & :bar (raw) is not the false node
--- Section: Algo.simplify (with Consistency.Engine) ---
[PASS] Simplifying under contradictory assumptions (atom & int) results in false
[PASS] Simplifying 'integer' given 'value==:foo' results in false
[PASS] Simplifying 'atom & int' results in false
[PASS] Simplifying 'atom | int' given 'is_atom==true' results in true
[PASS] Simplifying 'atom | int' given 'is_atom==false' results in 'integer'
✅ All Tdd.Algo tests passed!
--- Running Tdd.Consistency.Engine Tests ---
--- Section: Basic & Implication Tests ---
[PASS] An empty assumption map is consistent
[PASS] A single valid assumption is consistent
[PASS] An implied contradiction is caught by expander
[PASS] An implied contradiction is caught by expander
[PASS] Implication creates a consistent set
--- Section: Primary Type Exclusivity ---
[PASS] Two primary types cannot both be true
[PASS] Two primary types implied to be true is a contradiction
[PASS] One primary type true and another false is consistent
--- Section: Atom Consistency ---
[PASS] An atom cannot equal two different values
[PASS] An atom can equal one value
--- Section: List Flat Consistency ---
[PASS] A list cannot be empty and have a head property
[PASS] A non-empty list can have a head property
[PASS] A non-empty list is implied by head property
--- Section: Integer Consistency ---
[PASS] int == 5 is consistent
[PASS] int == 5 AND int == 10 is a contradiction
[PASS] int < 10 AND int > 20 is a contradiction
[PASS] int > 5 AND int < 4 is a contradiction
[PASS] int > 5 AND int < 7 is consistent
[PASS] int == 5 AND int < 3 is a contradiction
[PASS] int == 5 AND int > 10 is a contradiction
[PASS] int == 5 AND int > 3 is consistent
✅ All Consistency.Engine tests passed!
--- Running Tdd.TypeReconstructor Tests ---
--- Section: Basic Flat Reconstructions ---
[PASS] is_atom=true -> atom
[PASS] is_atom=false -> ¬atom
[PASS] is_atom=true AND value==:foo -> :foo
[PASS] is_atom=true AND value!=:foo -> atom & ¬:foo
[PASS] is_integer=true AND int==5 -> 5
[PASS] is_list=true AND is_empty=true -> []
--- Section: Combined Flat Reconstructions ---
[PASS] int > 10 AND int < 20
--- Section: Recursive Reconstructions ---
[PASS] head is an atom
✅ All TypeReconstructor tests passed!
--- Running Compiler & Algo Integration Tests ---
--- Section: Basic Equivalences ---
[PASS] atom & any == atom
[PASS] atom | none == atom
[PASS] atom & int == none
[PASS] ¬(¬atom) == atom
[PASS] atom | atom == atom
--- Section: Basic Subtyping ---
[PASS] :foo <: atom
[PASS] atom <: :foo
[PASS] :foo <: integer
[PASS] int==5 <: integer
[PASS] none <: atom
[PASS] atom <: any
--- Section: Integer Range Logic ---
[PASS] range(7..8) <: range(5..10)
[PASS] range(5..10) <: range(7..8)
[PASS] range(5..10) <: range(15..20)
[PASS] range(5..10) & range(7..8) == range(7..8)
[PASS] range(5..10) & range(0..100) == range(5..10)
[PASS] range(5..10) | range(7..8) == range(5..10)
--- Section: Contradictions & Simplifications ---
[PASS] atom & integer
[PASS] :foo & :bar
[PASS] atom & (int==5)
[PASS] range(5..10) & range(15..20)
[PASS] integer & ¬integer
--- Section: Subtype Reduction Logic ---
[PASS] (:foo | :bar | atom) simplifies to atom
[PASS] (range(5..10) | integer) simplifies to integer
[PASS] (:foo & atom) simplifies to :foo
[PASS] (range(5..10) & integer) simplifies to range(5..10)
--- Section: Logical Laws ---
[PASS] De Morgan's (¬(A|B) == ¬A & ¬B) holds
[PASS] De Morgan's (¬(A&B) == ¬A | ¬B) holds
[PASS] Distributive Law (A & (B|C)) holds
✅ All Compiler & Algo Integration tests passed!
--- Running Tdd.Compiler Recursive Type Tests ---
--- Section: :cons ---
[PASS] :cons is a subtype of :list
[PASS] :cons is not a subtype of the empty list
[PASS] cons(integer, list) is a subtype of cons(any, any)
[PASS] cons(any, any) is not a subtype of cons(integer, list)
--- Section: :tuple ---
[PASS] {:tuple, [atom, int]} is a subtype of :tuple
[PASS] {:tuple, [atom, int]} is not a subtype of :list
[PASS] a tuple of size 2 is not a subtype of a tuple of size 3
[PASS] subtype check works element-wise (specific <: general)
[PASS] subtype check works element-wise (general </: specific)
[PASS] subtype check works element-wise (specific </: unrelated)
--- Section: :list_of ---
[PASS] list_of(E) is a subtype of list
[PASS] empty list is a subtype of any list_of(E)
[PASS] list_of(subtype) is a subtype of list_of(supertype)
[FAIL] list_of(supertype) is not a subtype of list_of(subtype)
Expected: false
Got: true
[PASS] a list with a wrong element type is not a subtype of list_of(E)
[FAIL] a list with a correct element type is a subtype of list_of(E)
Expected: true
Got: false
--- Section: Equivalence ---
[PASS] the recursive definition holds: list_of(E) == [] | cons(E, list_of(E))
[PASS] intersection of two list_of types works correctly

refactor.md Normal file

@@ -0,0 +1,284 @@
## Motivation for Architectural Evolution of the TDD-based Type System
**To:** Project Stakeholders & Technical Team
**From:** System Architect
**Date:** October 26, 2023
**Subject:** A Principled Path Forward for Stability, Extensibility, and Polymorphism
### 1. Introduction & Current Success
The existing Ternary Decision Diagram (TDD) implementation has been remarkably successful in proving the core value of a set-theoretic approach to types. By representing types as canonical graphs, we have achieved powerful and efficient operations for union, intersection, negation, and subtyping. The system's ability to automatically simplify complex type expressions like `(atom | tuple) & (tuple | :foo)` into a canonical form `(tuple | :foo)` is a testament to the power of this model.
However, as we scale the system to represent more complex, real-world data structures—specifically recursive types like `list(X)`—a foundational architectural issue has been identified. This issue currently compromises the system's correctness and severely limits its future potential.
This document motivates a series of architectural changes designed to address this issue, creating a robust foundation for future growth, including the long-term goal of supporting polymorphism.
### 2. The Core Challenge: The Unstable Variable Problem
The correctness of an Ordered TDD relies on a **fixed, global, total order of all predicate variables**. Our current implementation for recursive types like `list_of(X)` violates this principle.
- **The Flaw:** We currently construct predicate variables by embedding the integer TDD ID of the element type `X`. For instance, `list_of(atom)` might use the variable `{5, :all_elements, 4}` if `type_atom()` resolves to TDD ID `4`.
- **The Consequence:** TDD IDs are artifacts of construction order; they are not stable. If `type_integer()` is created before `type_atom()`, their IDs will swap, and the global variable order will change. This means that **semantically identical types can produce structurally different TDDs**, breaking the canonical representation that is the central promise of our system. This is not a theoretical concern; it is a critical bug that invalidates equivalence checks and subtyping operations under different run conditions.
This instability is a symptom of a deeper issue: we are conflating a type's **logical description** with its **compiled, set-theoretic representation**.
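A toy illustration of the instability (names hypothetical, outside the real store): any interner that hands out IDs in construction order assigns different IDs to the same types when construction order changes, so a variable embedding those IDs differs structurally between runs:

```elixir
defmodule OrderDemo do
  # IDs are handed out purely in call order — they are not stable identifiers.
  def intern_in_order(types) do
    types
    |> Enum.with_index()
    |> Map.new()
  end
end

run_a = OrderDemo.intern_in_order([:atom, :integer])
run_b = OrderDemo.intern_in_order([:integer, :atom])
# run_a[:atom] is 0, run_b[:atom] is 1: a TDD variable like
# {5, :all_elements, run_a[:atom]} would not equal its run_b counterpart.
```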
### 3. The Proposed Solution: A Separation of Concerns
To solve this, we will introduce a principled separation between the "what" and the "how" of our types.
1. **`TypeSpec`: The Logical "What"**
We will introduce a new, stable, declarative data structure called a `TypeSpec`. A `TypeSpec` is a structural description of a type (e.g., `{:list_of, :atom}`, `{:union, [:integer, :atom]}`). This becomes the **new public API and canonical language** for defining types. It is human-readable, stable, and completely independent of the TDD implementation.
2. **TDD ID: The Compiled "How"**
The TDD graph and its ID will be treated as a **compiled artifact**. It is the powerful, efficient, set-theoretic representation of a concrete `TypeSpec`. The `Tdd` module will evolve into a low-level "backend" or "compiler engine."
This separation yields immediate and long-term benefits:
- **Guaranteed Correctness & Stability:** By using the stable `TypeSpec` within our TDD variables (e.g., `{5, :all_elements, {:base, :atom}}`), we restore a fixed global variable order. This guarantees that our TDDs are once again truly canonical, ensuring the reliability of all type operations.
- **Clean Architecture & API:** The public-facing API will be vastly improved, operating on clear, declarative `TypeSpec`s instead of opaque integer IDs. The internal complexity of the TDD machinery is encapsulated behind a new `Compiler` module, improving maintainability.
- **Unlocking Future Capabilities:** This architecture is not just a bug fix; it is an enabling refactor. A system that "compiles" a `TypeSpec` to a TDD is naturally extensible to polymorphism. The `TypeSpec` for `list(A)` can be represented as `{:list_of, {:type_var, :A}}`. A higher-level type inference engine can then operate on these specs, substituting variables and using our TDD compiler for concrete subtype checks. This provides a clear, principled path to our goal of supporting polymorphic functions and data structures.
### 4. Phased Implementation Plan
We will roll out these changes in three deliberate phases to manage complexity and deliver value incrementally.
- **Phase 1: Stabilize the Core.** Introduce `TypeSpec` and the `Compiler` to fix the unstable variable problem. This is the highest priority, as it ensures the correctness of our existing feature set.
- **Phase 2: Refine the Public API.** Build a new, high-level `TypeSystem` module that operates exclusively on `TypeSpec`s, providing a clean and intuitive interface for users of our library.
- **Phase 3: Introduce Polymorphism.** With the stable foundation in place, implement the unification and substitution logic required to handle `TypeSpec`s containing type variables, enabling the checking of polymorphic function schemas.
### 5. Conclusion
The current TDD system is a powerful proof of concept that is on the verge of becoming a robust, general-purpose tool. By addressing the foundational "unstable variable" problem through a principled separation of a type's specification from its implementation, we not only fix a critical bug but also pave a clear and logical path toward our most ambitious goals. This architectural evolution is a necessary investment in the long-term stability, correctness, and extensibility of our type system.
### Phase 1: Fix the Core Instability & Introduce `TypeSpec`
This phase is non-negotiable. It makes your current system sound and stable.
**1. Introduce the `TypeSpec` Data Structure:**
This will be the new "source of truth" for defining types. It's a stable, structural representation. This becomes the primary vocabulary for your public API.
```elixir
# In a new file, e.g., typespec.ex
defmodule Tdd.TypeSpec do
  @typedoc """
  A stable, structural representation of a type.
  This is the primary way users and the higher-level system will describe types.
  """
  @type t ::
          # Base Types
          :any
          | :none
          | :atom
          | :integer
          | :tuple # any tuple
          | :list # any list
          # Literal Types (canonical form of a value)
          | {:literal, term()}
          # Set-theoretic Combinators
          | {:union, [t()]}
          | {:intersect, [t()]}
          | {:negation, t()}
          # Parameterized Structural Types
          | {:tuple, [t()]} # A tuple with specific element types
          | {:cons, t(), t()} # A non-empty list [H|T]
          | {:list_of, t()} # A list of any length where elements are of a type
          # Integer Range (example of specific properties)
          | {:integer_range, integer() | :neg_inf, integer() | :pos_inf}
          # Polymorphic Variable (for Phase 3)
          | {:type_var, atom()}
end
```
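The `normalize/1` function referenced by the compiler is not shown; here is a hedged standalone sketch of the rules the test log above exercises (flattening, deduplication, sorting, identity and absorbing elements, double negation). The module name and exact rule set are assumptions, and structural recursion into tuples/cons is omitted for brevity:

```elixir
defmodule NormalizeSketch do
  # Union: flatten nested unions, dedupe, sort; drop :none; :any absorbs;
  # an empty union is :none; a singleton collapses to its element.
  def normalize({:union, specs}) do
    members =
      specs
      |> Enum.map(&normalize/1)
      |> Enum.flat_map(&flatten(:union, &1))
      |> Enum.uniq()
      |> Enum.sort()
      |> Enum.reject(&(&1 == :none))

    cond do
      :any in members -> :any
      members == [] -> :none
      match?([_], members) -> hd(members)
      true -> {:union, members}
    end
  end

  # Intersection: dual of union; drop :any; :none absorbs; empty is :any.
  def normalize({:intersect, specs}) do
    members =
      specs
      |> Enum.map(&normalize/1)
      |> Enum.flat_map(&flatten(:intersect, &1))
      |> Enum.uniq()
      |> Enum.sort()
      |> Enum.reject(&(&1 == :any))

    cond do
      :none in members -> :none
      members == [] -> :any
      match?([_], members) -> hd(members)
      true -> {:intersect, members}
    end
  end

  # Double negation elimination; single negations are preserved.
  def normalize({:negation, {:negation, spec}}), do: normalize(spec)
  def normalize({:negation, spec}), do: {:negation, normalize(spec)}

  # Base types and literals are already normal.
  def normalize(other), do: other

  # Pull members out of a same-tagged combinator one level deep.
  defp flatten(tag, {tag, inner}), do: inner
  defp flatten(_tag, spec), do: [spec]
end
```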
**2. Create a "Compiler": `Tdd.Compiler.spec_to_id/1`**
This is the new heart of your system. It's a memoized function that takes a `TypeSpec` and returns a canonical TDD ID. Your existing `Tdd` module becomes the "backend" for this compiler.
```elixir
# In a new file, e.g., compiler.ex
defmodule Tdd.Compiler do
  # This module manages the cache from spec -> id
  # Process.put(:spec_to_id_cache, %{})
  def spec_to_id(spec) do
    # 1. Normalize the spec (e.g., sort unions) to improve cache hits.
    normalized_spec = Tdd.TypeSpec.normalize(spec)

    # 2. Check cache for `normalized_spec`. If found, return ID.
    # 3. If not found, compile it by calling private helpers.
    id =
      case normalized_spec do
        :atom ->
          Tdd.type_atom()

        :integer ->
          Tdd.type_integer()

        {:literal, val} ->
          build_literal_tdd(val)

        {:union, specs} ->
          ids = Enum.map(specs, &spec_to_id/1)
          Enum.reduce(ids, Tdd.type_none(), &Tdd.sum/2)

        # ... and so on ...
        {:list_of, element_spec} ->
          build_list_of_tdd(element_spec)
      end

    # 4. Cache and return the new ID.
    id
  end

  # The key fix is here:
  defp build_list_of_tdd(element_spec) do
    # A) Compile the element spec to get its ID for semantic checks.
    element_id = spec_to_id(element_spec)

    # B) Create the TDD variable using the STABLE element_spec.
    #    This is the critical change.
    stable_var = Tdd.v_list_all_elements_are(element_spec)

    # C) Use Tdd module to build the node, passing both the stable
    #    variable and the compiled ID for the consistency checker.
    #    (This requires a small change in the Tdd module.)
    Tdd.type_list_of_internal(stable_var, element_id)
  end
end
```
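Steps 2 and 4 above are left as comments. A minimal sketch of that memoization, assuming the process-dictionary cache hinted at in the module comment (module and function names are hypothetical):

```elixir
defmodule MemoSketch do
  @cache_key :spec_to_id_cache

  # Look up `key` in a process-local cache; compute and store on a miss.
  def memoized(key, compute_fun) do
    cache = Process.get(@cache_key, %{})

    case Map.fetch(cache, key) do
      {:ok, id} ->
        id

      :error ->
        id = compute_fun.()
        Process.put(@cache_key, Map.put(cache, key, id))
        id
    end
  end
end
```

A caller would wrap the expensive compilation in a closure, e.g. `MemoSketch.memoized(normalized_spec, fn -> compile(normalized_spec) end)`; the closure runs only on the first call for a given spec.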
**3. Modify the `Tdd` Module to be a "Low-Level Backend":**
The public API of `Tdd` shrinks. Its main users are now `Tdd.Compiler` and `Tdd.TypeSystem` (Phase 2).
* **Change Variable Constructors:** `v_list_all_elements_are` now takes a `TypeSpec`.
```elixir
# In Tdd module
# The variable now contains the stable TypeSpec, not a volatile ID.
def v_list_all_elements_are(element_spec), do: {5, :all_elements, element_spec}
```
* **Update `check_assumptions_consistency`:** This function now needs access to the compiler to handle the new variable format.
```elixir
# In Tdd module, inside the consistency checker
# ... when it encounters a `v_list_all_elements_are` assumption ...
{{5, :all_elements, element_spec}, true} ->
  # It needs to know the TDD ID for this spec to do subtyping checks.
  ambient_type_for_head = Tdd.Compiler.spec_to_id(element_spec)
  # ... rest of the logic proceeds as before ...
```
This creates a circular dependency (`Compiler` -> `Tdd` -> `Compiler`), which signals deep coupling. It is acceptable for now, but it suggests this logic might eventually belong in the compiler itself.
* **Refactor Type Constructors:** Your old `type_list_of`, `type_tuple`, etc., are either removed or renamed to `_internal` and used by the new `Compiler`.
---
### Phase 2: Build the High-Level Type System API
This is the new public-facing interface. It operates on `TypeSpec`s and uses the `Compiler` and `Tdd` modules as its engine.
```elixir
# In a new file, e.g., type_system.ex
defmodule TypeSystem do
  alias Tdd.TypeSpec, as: Spec

  # --- Public Type Constructors (using TypeSpecs) ---
  def type_atom, do: :atom
  def type_integer, do: :integer
  def type_union(specs), do: {:union, specs}
  def type_list_of(spec), do: {:list_of, spec}
  # ... and so on ...

  # --- Public Operations ---

  @doc """
  Checks if spec1 is a subtype of spec2.
  """
  def is_subtype(spec1, spec2) do
    # 1. Compile both specs to their canonical TDD IDs.
    id1 = Tdd.Compiler.spec_to_id(spec1)
    id2 = Tdd.Compiler.spec_to_id(spec2)

    # 2. Use the fast, low-level TDD check.
    Tdd.is_subtype(id1, id2)
  end

  def intersect(spec1, spec2) do
    # This is a choice:
    # Option A (easy): Just return {:intersect, [spec1, spec2]} and let the
    #   compiler handle it lazily when `spec_to_id` is called.
    # Option B (better): Do some immediate simplifications here before
    #   compiling, e.g., intersect(:atom, :integer) -> :none
    {:intersect, [spec1, spec2]}
  end
end
```
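Option B from `intersect/2` can be sketched as a small eager pre-pass that catches the cheap wins and falls back to the lazy `{:intersect, ...}` spec otherwise. The module name and the `disjoint_primitives?/2` helper below are hypothetical illustrations, not part of the codebase:

```elixir
# Sketch of Option B: apply the obvious algebraic simplifications eagerly,
# and defer everything else to the compiler via a lazy {:intersect, ...}.
defmodule TypeSystem.Simplify do
  @primitives [:atom, :integer, :list, :tuple]

  def intersect(spec, spec), do: spec          # A & A -> A
  def intersect(:any, spec), do: spec          # :any is the identity
  def intersect(spec, :any), do: spec
  def intersect(:none, _spec), do: :none       # :none is absorbing
  def intersect(_spec, :none), do: :none

  def intersect(s1, s2) do
    if disjoint_primitives?(s1, s2) do
      :none                                    # e.g. intersect(:atom, :integer) -> :none
    else
      {:intersect, [s1, s2]}                   # defer to spec_to_id/1
    end
  end

  # Two distinct primary types never overlap.
  defp disjoint_primitives?(s1, s2),
    do: s1 in @primitives and s2 in @primitives and s1 != s2
end
```

Anything this pre-pass misses is still handled correctly, just more slowly, when the compiler canonicalizes the lazy spec.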
**At the end of Phase 2, you have a sound, stable, and extensible set-theoretic type checker.** The "god function" problem still exists in `check_assumptions_consistency`, but the system is no longer fundamentally broken.
---
### Phase 3: Introduce Polymorphism
Now you leverage the `TypeSpec` architecture.
**1. Define Polymorphic Function Schemas:**
These are data structures, living outside the TDD world.
```elixir
# A function schema
@type fun_schema :: {:function,
forall: [:atom], # list of quantified type variables
args: [Spec.t()], # argument types (can use quantified vars)
return: Spec.t() # return type (can use quantified vars)
}
# Example: Enum.map/2
# forall A, B. (list(A), (A -> B)) -> list(B)
map_schema = {:function,
forall: [:A, :B],
args: [
{:list_of, {:type_var, :A}},
{:function, forall: [], args: [{:type_var, :A}], return: {:type_var, :B}}
],
return: {:list_of, {:type_var, :B}}
}
```
**2. Implement a Unification/Substitution Engine:**
This is a new, separate module that operates *only on `TypeSpec`s and schemas*.
```elixir
defmodule Type.Unify do
# Takes a spec with variables and a concrete spec, returns a substitution map
# or :error.
# unify({:list_of, {:type_var, :A}}, {:list_of, :integer}) -> {:ok, %{A: :integer}}
def unify(polymorphic_spec, concrete_spec) do
# ... recursive pattern matching logic ...
end
# Applies a substitution map to a spec.
# substitute({:list_of, {:type_var, :A}}, %{A: :integer}) -> {:list_of, :integer}
def substitute(spec, substitution_map) do
# ... recursive substitution logic ...
end
end
```
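A minimal sketch of such an engine, under the assumption that specs are atoms, tagged tuples, and lists as in the examples above (no occurs check, no subtype-aware unification; the module name is hypothetical):

```elixir
# Minimal structural unification over TypeSpec-shaped terms.
# Assumes specs are atoms, tagged tuples, or lists of specs.
defmodule Type.UnifySketch do
  def unify(poly, concrete), do: do_unify(poly, concrete, %{})

  defp do_unify({:type_var, v}, concrete, subst) do
    case Map.fetch(subst, v) do
      {:ok, ^concrete} -> {:ok, subst}   # consistent with an earlier binding
      {:ok, _other} -> :error            # conflicting binding
      :error -> {:ok, Map.put(subst, v, concrete)}
    end
  end

  defp do_unify(same, same, subst), do: {:ok, subst}

  defp do_unify(p, c, subst)
       when is_tuple(p) and is_tuple(c) and tuple_size(p) == tuple_size(c),
       do: do_unify(Tuple.to_list(p), Tuple.to_list(c), subst)

  defp do_unify([p | ps], [c | cs], subst) do
    with {:ok, subst} <- do_unify(p, c, subst), do: do_unify(ps, cs, subst)
  end

  defp do_unify(_, _, _), do: :error

  # Applies a substitution map to a spec.
  def substitute({:type_var, v} = spec, subst), do: Map.get(subst, v, spec)

  def substitute(spec, subst) when is_tuple(spec),
    do: spec |> Tuple.to_list() |> Enum.map(&substitute(&1, subst)) |> List.to_tuple()

  def substitute(spec, subst) when is_list(spec),
    do: Enum.map(spec, &substitute(&1, subst))

  def substitute(spec, _subst), do: spec
end
```

For the real engine, unification of argument positions should also consult `is_subtype` rather than demand exact structural equality, but the recursion skeleton stays the same.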
**3. Build the Type Checker for Function Calls:**
This is the final piece that ties everything together.
```elixir
# In TypeSystem module
def check_function_call({:function, schema}, concrete_arg_specs) do
  # 1. Unify the schema's arg specs with the concrete arg specs.
  #    This produces a substitution map, e.g., %{A: :integer, B: :atom}.
  #    (Note: the schema is a tagged tuple, so we destructure it; a bare
  #    `function_schema.args` would fail on a tuple.)
  {:ok, substitution_map} = Unify.unify(schema[:args], concrete_arg_specs)
  # 2. Checking that the concrete args are valid subtypes happens as part
  #    of unification itself.
  # 3. Infer the concrete return type by substituting into the schema's return spec.
  concrete_return_spec = Unify.substitute(schema[:return], substitution_map)
  {:ok, concrete_return_spec}
end
```
This phased approach solves your immediate problem correctly, provides a clean public API, and builds the exact foundation needed to add a polymorphic layer on top without requiring another major architectural upheaval.

scratch.exs (new file)

@ -0,0 +1,308 @@
defmodule Tdd.Debug do
@moduledoc """
Provides macros to wrap `def` and `defp` for simple function call/return tracing.
Logs arguments as a list and return values using `IO.inspect`.
"""
# --- Agent for Tracing State ---
@agent_name Tdd.Debug.StateAgent
def init_agent_if_needed do
case Process.whereis(@agent_name) do
nil -> Agent.start_link(fn -> MapSet.new() end, name: @agent_name)
_pid -> :ok
end
:ok
end
def add_traced_pid(pid) when is_pid(pid) do
init_agent_if_needed()
Agent.update(@agent_name, &MapSet.put(&1, pid))
end
def remove_traced_pid(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil -> :ok
agent_pid -> Agent.cast(agent_pid, fn state -> MapSet.delete(state, pid) end)
end
end
def is_pid_traced?(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil ->
false
agent_pid ->
try do
  Agent.get(agent_pid, &MapSet.member?(&1, pid), :infinity)
rescue
  ArgumentError -> false
catch
  # Agent calls exit (rather than raise) when the agent is down,
  # so an exit must be caught; `rescue` only handles exceptions.
  :exit, _reason -> false
end
end
end
# --- Tracing Control Functions ---
@doc "Enables function call tracing for the current process."
def enable_tracing do
pid_to_trace = self()
add_traced_pid(pid_to_trace)
# The monitor must be owned by the process that will receive the :DOWN
# message, so it is created inside the cleanup process we spawn here
# (a monitor set up by the caller would deliver :DOWN to the caller).
spawn(fn ->
  ref = Process.monitor(pid_to_trace)

  receive do
    {:DOWN, ^ref, :process, ^pid_to_trace, _reason} ->
      remove_traced_pid(pid_to_trace)
  after
    # 1 hour safety timeout
    3_600_000 ->
      remove_traced_pid(pid_to_trace)
  end
end)
:ok
end
@doc "Disables function call tracing for the current process."
def disable_tracing do
remove_traced_pid(self())
:ok
end
@doc "Runs the given 0-arity function with tracing enabled, then disables it."
def run(fun) when is_function(fun, 0) do
enable_tracing()
try do
fun.()
after
disable_tracing()
end
end
# --- Process Dictionary for Call Depth ---
defp get_depth, do: Process.get(:tdd_debug_depth, 0)
def increment_depth do
new_depth = get_depth() + 1
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
def decrement_depth do
new_depth = max(0, get_depth() - 1)
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
# --- Core Macro Logic ---
defmacro __using__(_opts) do
quote do
import Kernel, except: [def: 1, def: 2, defp: 1, defp: 2]
import Tdd.Debug
require Tdd.Debug
end
end
@doc false
defmacro def(call, clauses \\ Keyword.new()) do
generate_traced_function(:def, call, clauses, __CALLER__)
end
@doc false
defmacro defp(call, clauses \\ Keyword.new()) do
generate_traced_function(:defp, call, clauses, __CALLER__)
end
defp is_simple_variable_ast?(ast_node) do
case ast_node do
{var_name, _meta, _context} when is_atom(var_name) ->
var_name != :_
_ -> false
end
end
defp generate_traced_function(type, call_ast, clauses, caller_env) do
require Macro # Good practice
{function_name_ast, meta_call, original_args_patterns_ast_nullable} = call_ast
original_args_patterns_ast_list = original_args_patterns_ast_nullable || []
original_body_ast =
(if Keyword.keyword?(clauses) do
Keyword.get(clauses, :do, clauses)
else
clauses
end) || quote(do: nil)
mapped_and_generated_vars_tuples =
Enum.map(Enum.with_index(original_args_patterns_ast_list), fn {original_pattern_ast, index} ->
# __td_arg_N__ is for logging, make it hygienic with `nil` context (or __MODULE__)
td_arg_var = Macro.var(String.to_atom("__td_arg_#{index}__"), nil)
{final_pattern_for_head, rhs_for_td_arg_assignment} =
case original_pattern_ast do
# 1. Ignored variable: `_`
# AST: {:_, meta, context_module_or_nil}
{:_, _, _} = underscore_ast ->
{underscore_ast, quote(do: :__td_ignored_argument__)}
# 2. Assignment pattern: `var = pattern` or `var = _`
# AST: {:=, meta, [lhs, rhs_of_assign]}
{:=, _meta_assign, [lhs_of_assign, _rhs_of_assign]} = assignment_pattern_ast ->
if is_simple_variable_ast?(lhs_of_assign) do
{assignment_pattern_ast, lhs_of_assign} # Head uses `var = pattern`, log `var`
else
# LHS is complex (e.g., `%{key: v} = pattern`), capture the whole value.
captured_val_var = Macro.unique_var(String.to_atom("tdc_assign_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(assignment_pattern_ast) end
{new_head_pattern, captured_val_var}
end
# 3. Default argument: `pattern_before_default \\ default_value`
# AST: {:\\, meta, [pattern_before_default, default_value_ast]}
{:\\, _meta_default, [pattern_before_default, _default_value_ast]} = default_arg_pattern_ast ->
cond do
# 3a. `var \\ default`
is_simple_variable_ast?(pattern_before_default) ->
{default_arg_pattern_ast, pattern_before_default}
# 3b. `(var = inner_pattern) \\ default`
match?({:=, _, [lhs_inner_assign, _]}, pattern_before_default) and
is_simple_variable_ast?(pattern_before_default |> elem(2) |> Enum.at(0)) ->
{:=, _, [lhs_inner_assign, _]} = pattern_before_default
# `lhs_inner_assign` is the var on the left of `=`
{default_arg_pattern_ast, lhs_inner_assign}
# 3c. `(complex_pattern) \\ default` or `(_ = inner_pattern) \\ default` etc.
true ->
captured_val_var = Macro.unique_var(String.to_atom("tdc_def_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(default_arg_pattern_ast) end
{new_head_pattern, captured_val_var}
end
# 4. Simple variable `var` (checked using our helper)
# or other complex patterns/literals not caught above.
ast_node ->
if is_simple_variable_ast?(ast_node) do
{ast_node, ast_node} # Head uses `var`, log `var`
else
# It's a complex pattern (e.g., `%{a:x}`, `[h|t]`) or a literal not assignable to.
captured_val_var = Macro.unique_var(String.to_atom("tdc_pat_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(ast_node) end
{new_head_pattern, captured_val_var}
end
end
assignment_ast = quote do unquote(td_arg_var) = unquote(rhs_for_td_arg_assignment) end
{final_pattern_for_head, assignment_ast, td_arg_var}
end)
# Unzip the 3-tuples into three parallel lists (an empty input yields
# three empty lists; Enum.unzip/1 only handles 2-tuples).
{new_args_patterns_for_head_list, assignments_for_logging_vars_ast_list, generated_vars_to_log_asts} =
  Enum.reduce(mapped_and_generated_vars_tuples, {[], [], []}, fn {head, assign, var}, {hs, as, vs} ->
    {hs ++ [head], as ++ [assign], vs ++ [var]}
  end)
new_call_ast = {function_name_ast, meta_call, new_args_patterns_for_head_list}
traced_body_inner_ast =
quote do
unquote_splicing(assignments_for_logging_vars_ast_list)
if Tdd.Debug.is_pid_traced?(self()) do
current_print_depth = Tdd.Debug.increment_depth()
indent = String.duplicate(" ", current_print_depth - 1)
runtime_arg_values = [unquote_splicing(generated_vars_to_log_asts)]
actual_module_name_str = Atom.to_string(unquote(caller_env.module))
# The function_name_ast is resolved at macro expansion time.
# If it's `def foo(...)`, `unquote(function_name_ast)` becomes `:foo`.
# If `def unquote(name_var)(...)`, it resolves `name_var`.
resolved_fn_name = unquote(function_name_ast)
printable_function_name_str =
if is_atom(resolved_fn_name) do
Atom.to_string(resolved_fn_name)
else
Macro.to_string(resolved_fn_name) # For complex names / operators if AST passed
end
IO.puts(
"#{indent}CALL: #{actual_module_name_str}.#{printable_function_name_str}"
)
IO.puts(
"#{indent} ARGS: #{inspect(runtime_arg_values)}"
)
try do
result = unquote(original_body_ast)
_ = Tdd.Debug.decrement_depth()
IO.puts(
"#{indent}RETURN from #{actual_module_name_str}.#{printable_function_name_str}: #{inspect(result)}"
)
result
rescue
  error_instance ->
    stacktrace = __STACKTRACE__
_ = Tdd.Debug.decrement_depth()
IO.puts(
"#{indent}ERROR in #{actual_module_name_str}.#{printable_function_name_str}: #{inspect(error_instance)}"
)
reraise error_instance, stacktrace
end
else
unquote(original_body_ast)
end
end
final_definition_ast =
quote location: :keep do
Kernel.unquote(type)(
unquote(new_call_ast),
do: unquote(traced_body_inner_ast)
)
end
final_definition_ast
end
# --- TDD Graph Printing (Kept as it was, not directly related to call tracing simplification) ---
@doc "Prints a formatted representation of a TDD graph structure."
def print_tdd_graph(id, store_module \\ Tdd.Store) do
IO.puts("--- TDD Graph (ID: #{id}) ---")
do_print_tdd_node(id, 0, MapSet.new(), store_module)
IO.puts("------------------------")
end
defp do_print_tdd_node(id, indent_level, visited, store_module) do
prefix = String.duplicate(" ", indent_level)
if MapSet.member?(visited, id) do
IO.puts("#{prefix}ID #{id} -> [Seen, recursive link]")
:ok
else
new_visited = MapSet.put(visited, id)
case store_module.get_node(id) do # Assumes store_module.get_node/1 exists
{:ok, :true_terminal} -> IO.puts("#{prefix}ID #{id} -> TRUE")
{:ok, :false_terminal} -> IO.puts("#{prefix}ID #{id} -> FALSE")
{:ok, {var, y_id, n_id, dc_id}} ->
IO.puts("#{prefix}ID #{id}: IF #{inspect(var)}")
IO.puts("#{prefix} ├─ Yes (to ID #{y_id}):")
do_print_tdd_node(y_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} ├─ No (to ID #{n_id}):")
do_print_tdd_node(n_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} └─ DC (to ID #{dc_id}):")
do_print_tdd_node(dc_id, indent_level + 2, new_visited, store_module)
{:error, reason} ->
IO.puts("#{prefix}ID #{id}: ERROR - #{reason}")
end
end
end
end
defmodule Asd do
use Tdd.Debug
def kekistan(dupsko, {:sommething, _}) do
IO.inspect("inside ")
dupsko + 1
end
end
Process.sleep(100)
Tdd.Debug.enable_tracing()
# Asd.kekistan(1, 2)
Asd.kekistan(1, {:sommething, 2})


@ -151,7 +151,8 @@ defmodule Til.TestHelpers do
{original_key, deep_strip_id(original_value, nodes_map)}
is_list(original_value) ->
{original_key, deep_strip_id(original_value, nodes_map)} # Handles lists of type defs
# Handles lists of type defs
{original_key, deep_strip_id(original_value, nodes_map)}
true ->
{original_key, original_value}
@ -165,7 +166,8 @@ defmodule Til.TestHelpers do
# Recursively call on elements for lists of type definitions
Enum.map(type_definition, &deep_strip_id(&1, nodes_map))
true -> # Literals, atoms, numbers, nil, etc. (leaf nodes in the type structure)
# Literals, atoms, numbers, nil, etc. (leaf nodes in the type structure)
true ->
type_definition
end
end


@ -1 +1,6 @@
ExUnit.start()
ExUnitSummary.start(:normal, %ExUnitSummary.Config{filter_results: :failed, print_delay: 100})
# Add ExUnitSummary.Formatter to list of ExUnit's formatters.
ExUnit.configure(formatters: [ExUnit.CLIFormatter, ExUnitSummary.Formatter])


@ -1,8 +1,617 @@
defmodule TilTest do
use ExUnit.Case
doctest Til
defmodule TddSystemTest do
# Most tests mutate Tdd.Store, so they cannot run concurrently.
use ExUnit.Case, async: false
test "greets the world" do
assert Til.hello() == :world
alias Tdd.TypeSpec
alias Tdd.Store
alias Tdd.Variable
alias Tdd.Compiler
alias Tdd.Consistency.Engine
alias Tdd.Algo
alias Tdd.Debug
# Helper to mimic the old test structure and provide better failure messages
# for spec comparisons.
defp assert_spec_normalized(expected, input_spec) do
result = TypeSpec.normalize(input_spec)
# The normalization process should produce a canonical, sorted form.
assert expected == result, """
Input Spec:
#{inspect(input_spec, pretty: true)}
Expected Normalized:
#{inspect(expected, pretty: true)}
Actual Normalized:
#{inspect(result, pretty: true)}
"""
end
# Helper to check for equivalence by comparing TDD IDs.
defmacro assert_equivalent_specs(spec1, spec2) do
quote do
id1 = Compiler.spec_to_id(unquote(spec1))
id2 = Compiler.spec_to_id(unquote(spec2))
# Debug.build_graph_map(id1, "assert_equivalent_specs id1")
# Debug.build_graph_map(id2, "assert_equivalent_specs id2")
Process.get(:tdd_node_by_id) |> IO.inspect(label: ":tdd_node_by_id")
Process.get(:tdd_nodes) |> IO.inspect(label: ":tdd_nodes")
IO.inspect("equivalent specs? #{id1} #{id2}")
assert id1 == id2
end
end
# Helper to check for subtyping using the TDD compiler.
defmacro assert_subtype(spec1, spec2) do
quote do
IO.inspect("assert is_subtype #{inspect(unquote(spec1))}, #{inspect(unquote(spec2))}")
is_subtype? = Compiler.is_subtype(unquote(spec1), unquote(spec2))
Process.get(:tdd_node_by_id) |> IO.inspect(label: ":tdd_node_by_id")
Process.get(:tdd_nodes) |> IO.inspect(label: ":tdd_nodes")
assert is_subtype?
end
end
defmacro refute_subtype(spec1, spec2) do
quote do
refute Compiler.is_subtype(unquote(spec1), unquote(spec2))
end
end
# Setup block that initializes the Tdd.Store before each test.
# This ensures that node IDs and caches are clean for every test case.
setup do
Tdd.Store.init()
:ok
end
# ---
# Tdd.Store Tests
# These tests validate the lowest-level state management of the TDD system.
# The Store is responsible for creating and storing the nodes of the decision diagram graph.
# ---
describe "Tdd.Store: Core state management for the TDD graph" do
@doc """
Tests that the store initializes with the correct, reserved IDs for the
terminal nodes representing TRUE (:any) and FALSE (:none).
"""
test "initialization and terminals" do
assert Store.true_node_id() == 1
assert Store.false_node_id() == 0
assert Store.get_node(1) == {:ok, :true_terminal}
assert Store.get_node(0) == {:ok, :false_terminal}
assert Store.get_node(99) == {:error, :not_found}
end
@doc """
Tests the core functionality of creating nodes. It verifies that new nodes receive
incrementing IDs and that requesting an identical node reuses the existing one
(structural sharing), which is fundamental to the efficiency of TDDs.
"""
test "node creation and structural sharing" do
var_a = {:is_atom}
var_b = {:is_integer}
true_id = Store.true_node_id()
false_id = Store.false_node_id()
# First created node gets ID 2 (after 0 and 1 are taken by terminals)
id1 = Store.find_or_create_node(var_a, true_id, false_id, false_id)
assert id1 == 2
assert Store.get_node(id1) == {:ok, {var_a, true_id, false_id, false_id}}
# Second, different node gets the next ID
id2 = Store.find_or_create_node(var_b, id1, false_id, false_id)
assert id2 == 3
# Creating the first node again returns the same ID, not a new one
id1_again = Store.find_or_create_node(var_a, true_id, false_id, false_id)
assert id1_again == id1
# Next new node gets the correct subsequent ID, proving no ID was wasted
id3 = Store.find_or_create_node(var_b, true_id, false_id, false_id)
assert id3 == 4
end
@doc """
Tests a key reduction rule: if a node's 'yes', 'no', and 'don't care' branches
all point to the same child node, the parent node is redundant and should be
replaced by the child node itself.
"""
test "node reduction rule for identical children" do
var_a = {:is_atom}
# from previous test logic
id3 = 4
id_redundant = Store.find_or_create_node(var_a, id3, id3, id3)
assert id_redundant == id3
end
@doc """
Tests the memoization cache for operations like 'apply', 'negate', etc.
This ensures that repeated operations with the same inputs do not trigger
redundant computations.
"""
test "operation caching" do
cache_key = {:my_op, 1, 2}
assert Store.get_op_cache(cache_key) == :not_found
Store.put_op_cache(cache_key, :my_result)
assert Store.get_op_cache(cache_key) == {:ok, :my_result}
Store.put_op_cache(cache_key, :new_result)
assert Store.get_op_cache(cache_key) == {:ok, :new_result}
end
end
# ---
# Tdd.TypeSpec.normalize/1 Tests
# These tests focus on ensuring the `normalize` function correctly transforms
# any TypeSpec into its canonical, simplified form.
# ---
describe "Tdd.TypeSpec.normalize/1: Base & Simple Types" do
@doc "Tests that normalizing already-simple specs doesn't change them (idempotency)."
test "normalizing :any is idempotent" do
assert_spec_normalized(:any, :any)
end
test "normalizing :none is idempotent" do
assert_spec_normalized(:none, :none)
end
test "normalizing :atom is idempotent" do
assert_spec_normalized(:atom, :atom)
end
test "normalizing a literal is idempotent" do
assert_spec_normalized({:literal, :foo}, {:literal, :foo})
end
end
describe "Tdd.TypeSpec.normalize/1: Double Negation" do
@doc "Tests the logical simplification that ¬(¬A) is equivalent to A."
test "¬(¬atom) simplifies to atom" do
assert_spec_normalized(:atom, {:negation, {:negation, :atom}})
end
@doc "Tests that a single negation is preserved when it cannot be simplified further."
test "A single negation is preserved" do
assert_spec_normalized({:negation, :integer}, {:negation, :integer})
end
@doc "Tests that an odd number of negations simplifies to a single negation."
test "¬(¬(¬atom)) simplifies to ¬atom" do
assert_spec_normalized({:negation, :atom}, {:negation, {:negation, {:negation, :atom}}})
end
end
describe "Tdd.TypeSpec.normalize/1: Union Normalization" do
@doc """
Tests that unions are canonicalized by flattening nested unions, sorting the members,
and removing duplicates. e.g., `int | (list | atom | int)` becomes `(atom | int | list)`.
"""
test "flattens, sorts, and uniques members" do
input = {:union, [:integer, {:union, [:list, :atom, :integer]}]}
expected = {:union, [:atom, :integer, :list]}
assert_spec_normalized(expected, input)
end
@doc "Tests `A | none` simplifies to `A`, as `:none` is the identity for union."
test "simplifies a union with :none (A | none -> A)" do
assert_spec_normalized(:atom, {:union, [:atom, :none]})
end
@doc "Tests `A | any` simplifies to `any`, as `:any` is the absorbing element for union."
test "simplifies a union with :any (A | any -> any)" do
assert_spec_normalized(:any, {:union, [:atom, :any]})
end
@doc "An empty set of types is logically equivalent to `:none`."
test "an empty union simplifies to :none" do
assert_spec_normalized(:none, {:union, []})
end
@doc "A union containing just one type should simplify to that type itself."
test "a union of a single element simplifies to the element itself" do
assert_spec_normalized(:atom, {:union, [:atom]})
end
end
describe "Tdd.TypeSpec.normalize/1: Intersection Normalization" do
@doc "Tests that intersections are canonicalized like unions (flatten, sort, unique)."
test "flattens, sorts, and uniques members" do
input = {:intersect, [:integer, {:intersect, [:list, :atom, :integer]}]}
expected = {:intersect, [:atom, :integer, :list]}
assert_spec_normalized(expected, input)
end
@doc "Tests `A & any` simplifies to `A`, as `:any` is the identity for intersection."
test "simplifies an intersection with :any (A & any -> A)" do
assert_spec_normalized(:atom, {:intersect, [:atom, :any]})
end
@doc "Tests `A & none` simplifies to `none`, as `:none` is the absorbing element."
test "simplifies an intersection with :none (A & none -> none)" do
assert_spec_normalized(:none, {:intersect, [:atom, :none]})
end
@doc "An intersection of zero types is logically `any` (no constraints)."
test "an empty intersection simplifies to :any" do
assert_spec_normalized(:any, {:intersect, []})
end
@doc "An intersection of one type simplifies to the type itself."
test "an intersection of a single element simplifies to the element itself" do
assert_spec_normalized(:atom, {:intersect, [:atom]})
end
end
describe "Tdd.TypeSpec.normalize/1: Subtype Reduction" do
@doc """
Tests a key simplification: if a union contains a type and its own subtype,
the subtype is redundant and should be removed. E.g., `(1 | integer)` is just `integer`.
Here, `:foo` and `:bar` are subtypes of `:atom`, so the union simplifies to `:atom`.
"""
test "(:foo | :bar | atom) simplifies to atom" do
input = {:union, [{:literal, :foo}, {:literal, :bar}, :atom]}
expected = :atom
assert_spec_normalized(expected, input)
end
end
describe "Tdd.TypeSpec: Advanced Normalization (μ, Λ, Apply)" do
@doc """
Tests alpha-conversion for recursive types. The bound variable name (`:X`)
should be renamed to a canonical name (`:m_var0`) to ensure structural equality
regardless of the name chosen by the user.
"""
test "basic alpha-conversion for μ-variable" do
input = {:mu, :X, {:type_var, :X}}
expected = {:mu, :m_var0, {:type_var, :m_var0}}
assert_spec_normalized(expected, input)
end
@doc """
Tests that the syntactic sugar `{:list_of, T}` is correctly desugared into
its underlying recursive definition: `μT.[] | cons(T, μT)`.
"""
test "list_of(integer) normalizes to a μ-expression with canonical var" do
input = {:list_of, :integer}
expected =
{:mu, :m_var0, {:union, [{:literal, []}, {:cons, :integer, {:type_var, :m_var0}}]}}
assert_spec_normalized(expected, input)
end
@doc """
Tests beta-reduction (function application). Applying the identity function
`(ΛT.T)` to `integer` should result in `integer`.
"""
test "simple application: (ΛT.T) integer -> integer" do
input = {:type_apply, {:type_lambda, [:T], {:type_var, :T}}, [:integer]}
expected = :integer
assert_spec_normalized(expected, input)
end
@doc """
Tests a more complex beta-reduction. Applying a list constructor lambda
to `:atom` should produce the normalized form of `list_of(atom)`.
"""
test "application with structure: (ΛT. list_of(T)) atom -> list_of(atom) (normalized form)" do
input = {:type_apply, {:type_lambda, [:T], {:list_of, {:type_var, :T}}}, [:atom]}
expected = {:mu, :m_var0, {:union, [{:literal, []}, {:cons, :atom, {:type_var, :m_var0}}]}}
assert_spec_normalized(expected, input)
end
end
# ---
# Tdd.Consistency.Engine Tests
# These tests validate the logic that detects contradictions in a set of predicate assumptions.
# ---
describe "Tdd.Consistency.Engine: Logic for detecting contradictions" do
# This setup is local to this describe block, which is fine.
setup do
Tdd.Store.init()
id_atom = Tdd.Compiler.spec_to_id(:atom)
%{id_atom: id_atom}
end
@doc "An empty set of assumptions has no contradictions."
test "an empty assumption map is consistent" do
assert Engine.check(%{}) == :consistent
end
@doc """
Tests that the engine uses predicate traits to find implied contradictions.
`v_atom_eq(:foo)` implies `v_is_atom()` is true, which contradicts the explicit
assumption that `v_is_atom()` is false.
"""
test "an implied contradiction is caught by expander" do
assumptions = %{Variable.v_atom_eq(:foo) => true, Variable.v_is_atom() => false}
assert Engine.check(assumptions) == :contradiction
end
@doc "A term cannot belong to two different primary types like :atom and :integer."
test "two primary types cannot both be true" do
assumptions = %{Variable.v_is_atom() => true, Variable.v_is_integer() => true}
assert Engine.check(assumptions) == :contradiction
end
@doc "A list cannot be empty and simultaneously have properties on its head (which wouldn't exist)."
test "a list cannot be empty and have a head property", %{id_atom: id_atom} do
assumptions = %{
Variable.v_list_is_empty() => true,
Variable.v_list_head_pred(id_atom) => true
}
assert Engine.check(assumptions) == :contradiction
end
@doc "Tests for logical contradictions in integer ranges."
test "int < 10 AND int > 20 is a contradiction" do
assumptions = %{
Variable.v_int_lt(10) => true,
Variable.v_int_gt(20) => true
}
assert Engine.check(assumptions) == :contradiction
end
end
# ---
# Compiler & Algo Integration Tests
# These tests ensure that the high-level public APIs (`is_subtype`, `spec_to_id`)
# work correctly by integrating the compiler and the graph algorithms.
# ---
describe "Tdd.Compiler and Tdd.Algo Integration: High-level API validation" do
@doc "Verifies semantic equivalence of types using TDD IDs. e.g., `atom & any` is the same type as `atom`."
test "basic equivalences" do
assert_equivalent_specs({:intersect, [:atom, :any]}, :atom)
assert_equivalent_specs({:union, [:atom, :none]}, :atom)
assert_equivalent_specs({:intersect, [:atom, :integer]}, :none)
end
@doc "Tests the main `is_subtype` public API for simple, non-recursive types."
test "basic subtyping" do
assert_subtype({:literal, :foo}, :atom)
refute_subtype(:atom, {:literal, :foo})
assert_subtype(:none, :atom)
assert_subtype(:atom, :any)
end
@doc "Tests that impossible type intersections compile to the `:none` (FALSE) node."
test "contradictions" do
assert Compiler.spec_to_id({:intersect, [:atom, :integer]}) == Store.false_node_id()
assert Compiler.spec_to_id({:intersect, [{:literal, :foo}, {:literal, :bar}]}) ==
Store.false_node_id()
end
end
# ---
# Tdd.Compiler Advanced Feature Tests
# These tests target the most complex features: recursive and polymorphic types.
# ---
describe "Tdd.Compiler: Advanced Features (μ, Λ, Apply)" do
test "list subtyping: list(int) <: list(any)" do
int_list = {:list_of, :integer}
any_list = {:list_of, :any}
assert_subtype(:integer, :any)
assert_subtype(int_list, any_list)
assert_subtype({:cons, {:literal, 1}, {:literal, []}}, int_list)
end
test "list subtyping: list(any) is not a subtype of list(integer)" do
int_list = {:list_of, :integer}
any_list = {:list_of, :any}
assert_subtype(:integer, :any)
refute_subtype({:cons, {:literal, :a}, {:literal, []}}, int_list)
refute_subtype(any_list, int_list)
end
@doc "Tests that manually-defined recursive types (like a binary tree) can be compiled and checked correctly."
test "explicit μ-types" do
leaf_node = {:literal, :empty_tree}
tree_spec =
{:mu, :Tree,
{:union,
[
leaf_node,
{:tuple, [:atom, {:type_var, :Tree}, {:type_var, :Tree}]}
]}}
# Test that it compiles to a valid TDD ID
assert is_integer(Compiler.spec_to_id(tree_spec))
# Test that an instance of the tree is correctly identified as a subtype
simple_tree_instance = {:tuple, [{:literal, :a}, leaf_node, leaf_node]}
assert_subtype(simple_tree_instance, tree_spec)
end
@doc """
Tests that a polymorphic type created via lambda application is equivalent
to its manually specialized counterpart. e.g., `(List<T>)(int)` should be the
same as `List<int>`.
"""
test "polymorphism (Λ, Apply)" do
gen_list_lambda = {:type_lambda, [:Tparam], {:list_of, {:type_var, :Tparam}}}
list_of_int_from_apply = {:type_apply, gen_list_lambda, [:integer]}
int_list = {:list_of, :integer}
assert_equivalent_specs(list_of_int_from_apply, int_list)
end
test "negate list type" do
refute_subtype({:negation, {:list_of, :integer}}, :none)
end
end
describe "Logical Equivalences: De Morgan's Laws & Distributivity" do
@doc "Tests ¬(A ∪ B) <=> (¬A ∩ ¬B)"
test "De Morgan's Law for negated unions" do
# ¬(atom | integer)
spec1 = {:negation, {:union, [:atom, :integer]}}
# ¬atom & ¬integer
spec2 = {:intersect, [{:negation, :atom}, {:negation, :integer}]}
assert_equivalent_specs(spec1, spec2)
end
@doc "Tests ¬(A ∩ B) <=> (¬A ∪ ¬B)"
test "De Morgan's Law for negated intersections" do
# ¬(integer & atom)
spec1 = {:negation, {:intersect, [:integer, :atom]}}
# ¬integer | ¬atom
spec2 = {:union, [{:negation, :integer}, {:negation, :atom}]}
assert_equivalent_specs(spec1, spec2)
end
@doc "Tests A ∩ (B ∪ C) <=> (A ∩ B) ∪ (A ∩ C)"
test "Distributive Law: intersection over union" do
# We'll use types where some intersections are meaningful and others are :none.
# Only `integer` appears on both sides below, so every cross term collapses to :none.
# `(integer | tuple) & (atom | integer)`
spec1 =
{:intersect, [{:union, [:integer, :tuple]}, {:union, [:atom, :integer]}]}
# Should be equivalent to:
# (integer & atom) | (integer & integer) | (tuple & atom) | (tuple & integer)
# which simplifies to:
# :none | :integer | :none | :none
# which is just :integer
Debug.print_tdd_graph(Compiler.spec_to_id(spec1))
assert_equivalent_specs(spec1, :integer)
end
@doc "Tests A | (B & C) versus (A | B) & (A | C) -- they are NOT always equivalent"
test "subtyping with distributivity" do
# Let's test `A | (B & C) <: (A | B) & (A | C)`
spec_left = {:union, [:atom, {:intersect, [:integer, {:integer_range, 0, :pos_inf}]}]}
spec_right =
{:intersect,
[{:union, [:atom, :integer]}, {:union, [:atom, {:integer_range, 0, :pos_inf}]}]}
# `spec_left` normalizes to `atom | positive_integer`
# `spec_right` normalizes to `(atom | integer) & (atom | positive_integer)`
# Since `atom | positive_integer` is a subtype of `atom | integer`, the intersection on the right
# simplifies to `atom | positive_integer`. Therefore, they are equivalent in this case.
assert_equivalent_specs(spec_left, spec_right)
spec_sub = {:union, [:integer, {:intersect, [:atom, :tuple]}]}
spec_super = {:intersect, [{:union, [:integer, :atom]}, {:union, [:integer, :tuple]}]}
assert_subtype(spec_sub, spec_super)
end
end
# ---
# Advanced Negation and Contradiction Tests
# ---
describe "Tdd.Compiler: Advanced Negation and Contradiction" do
@doc "Tests that a type and its negation correctly partition the universal set (:any)."
test "A | ¬A <=> :any" do
spec = {:union, [:integer, {:negation, :integer}]}
assert_equivalent_specs(spec, :any)
end
@doc "Tests that a subtype relationship holds against a negated supertype"
test "subtyping with negation: integer <: ¬atom" do
# An integer is never an atom, so it should be a subtype of "not atom".
assert_subtype(:integer, {:negation, :atom})
# The reverse is not true: `not atom` includes lists, integers, etc.
refute_subtype({:negation, :atom}, :integer)
end
@doc "A complex type that should resolve to :none"
test "complex contradiction involving subtype logic" do
# This is `(integer & ¬integer)`. Since every integer is an integer, the intersection is empty.
spec = {:intersect, [:integer, {:negation, :integer}]}
assert_equivalent_specs(spec, :none)
end
@doc "The type 'list of integers that are also atoms' should be impossible"
test "contradiction inside a recursive type structure" do
# list_of(integer & atom) is list_of(:none).
# A list of :none can only be the empty list, as no element can ever exist.
spec = {:list_of, {:intersect, [:integer, :atom]}}
Tdd.Store.init()
Debug.print_tdd_graph(Compiler.spec_to_id(spec))
Debug.print_tdd_graph(Compiler.spec_to_id({:literal, []}))
assert_equivalent_specs(spec, {:literal, []})
end
end
# ---
# Complex Recursive (μ) and Polymorphic (Λ, Apply) Tests
# These tests probe the interaction between recursion, polymorphism, and subtyping.
# ---
describe "Tdd.Compiler: Interplay of μ, Λ, and Subtyping" do
@doc "Tests subtyping between structurally similar, but not identical, recursive types."
test "subtyping of structural recursive types: list_of(1|2) <: list_of(integer)" do
list_of_1_or_2 = {:list_of, {:union, [{:literal, 1}, {:literal, 2}]}}
list_of_integer = {:list_of, :integer}
assert_subtype(list_of_1_or_2, list_of_integer)
refute_subtype(list_of_integer, list_of_1_or_2)
end
@doc "Tests a non-sensical recursion that should simplify away"
test "μ-types that should normalize to a non-recursive form" do
# A type defined as "a cons of an atom and itself, or none" should be equivalent to `list_of(atom)`.
# This tests if our manual mu-calculus definition matches the syntactic sugar.
manual_list_of_atom = {:mu, :L, {:union, [{:literal, []}, {:cons, :atom, {:type_var, :L}}]}}
sugar_list_of_atom = {:list_of, :atom}
assert_equivalent_specs(manual_list_of_atom, sugar_list_of_atom)
end
test "infer termination of recursive types" do
# `X` alone contributes no values, so `μX. (X | integer)` collapses to `integer`.
spec = {:mu, :X, {:union, [{:type_var, :X}, :integer]}}
assert_equivalent_specs(spec, :integer)
end
@doc "Tests polymorphism with multiple type parameters."
test "lambda with multiple parameters: (Λ(A,B). {A,B})(int, atom)" do
# λ(A, B). {A, B}
pair_constructor = {:type_lambda, [:A, :B], {:tuple, [{:type_var, :A}, {:type_var, :B}]}}
# Apply it to integer and atom
applied_spec = {:type_apply, pair_constructor, [:integer, :atom]}
# The result should be the concrete type {integer, atom}
expected_spec = {:tuple, [:integer, :atom]}
assert_equivalent_specs(applied_spec, expected_spec)
end
@doc "Tests that applying a lambda to a complex recursive type works correctly"
test "applying a polymorphic list constructor to a tree type" do
# First, define our tree type from a previous test
leaf_node = {:literal, :empty_tree}
tree_spec =
{:mu, :Tree,
{:union,
[
leaf_node,
{:tuple, [:atom, {:type_var, :Tree}, {:type_var, :Tree}]}
]}}
# Now, apply the generic list lambda to this tree_spec
gen_list_lambda = {:type_lambda, [:T], {:list_of, {:type_var, :T}}}
list_of_trees = {:type_apply, gen_list_lambda, [tree_spec]}
# Check if a concrete instance is a subtype of this new complex type
tree_instance_1 = {:tuple, [{:literal, :a}, leaf_node, leaf_node]}
tree_instance_2 = {:tuple, [{:literal, :b}, leaf_node, leaf_node]}
list_of_trees_instance = {:cons, tree_instance_1, {:cons, tree_instance_2, {:literal, []}}}
assert_subtype(list_of_trees_instance, list_of_trees)
end
# Known to fail at this commit
@doc "An 'ill-formed' recursive type that should evaluate to :none"
test "μ-types that are self-contradictory should be :none" do
# A type defined as `μX. cons(integer, X)`. This describes an infinite list
# of integers with no base case (`[]`). No finite value can ever satisfy this type.
infinite_list = {:mu, :X, {:cons, :integer, {:type_var, :X}}}
assert_equivalent_specs(infinite_list, :none)
end
end
end
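The "infer termination" test above relies on the idea that a union branch consisting only of the bound variable adds no values of its own. A minimal standalone sketch of that collapse (hypothetical `MuSketch` helper, not part of `Tdd.Compiler`, which does this semantically during TDD construction rather than as a syntactic rewrite):

```elixir
defmodule MuSketch do
  # Drop union branches that are just the bound variable itself:
  # `μX. (X | T1 | ... | Tn)` denotes the same set as `μX. (T1 | ... | Tn)`.
  def simplify({:mu, var, {:union, branches}}) do
    case Enum.reject(branches, &(&1 == {:type_var, var})) do
      [single] -> single
      rest -> {:mu, var, {:union, rest}}
    end
  end

  def simplify(other), do: other
end

MuSketch.simplify({:mu, :X, {:union, [{:type_var, :X}, :integer]}})
# => :integer
```

The sketch only illustrates why `μX. (X | integer)` and `:integer` denote the same set; the compiler reaches the same answer through equivalence checking, not rewriting.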

defmodule Tilly.BDD.AtomBoolOpsTest do
use ExUnit.Case, async: true
alias Tilly.BDD.AtomBoolOps
describe "compare_elements/2" do
test "correctly compares atoms" do
assert AtomBoolOps.compare_elements(:apple, :banana) == :lt
assert AtomBoolOps.compare_elements(:banana, :apple) == :gt
assert AtomBoolOps.compare_elements(:cherry, :cherry) == :eq
end
end
describe "equal_element?/2" do
test "correctly checks atom equality" do
assert AtomBoolOps.equal_element?(:apple, :apple) == true
assert AtomBoolOps.equal_element?(:apple, :banana) == false
end
end
describe "hash_element/1" do
test "hashes atoms consistently" do
assert is_integer(AtomBoolOps.hash_element(:foo))
assert AtomBoolOps.hash_element(:foo) == AtomBoolOps.hash_element(:foo)
assert AtomBoolOps.hash_element(:foo) != AtomBoolOps.hash_element(:bar)
end
end
describe "leaf operations" do
test "empty_leaf/0 returns false" do
assert AtomBoolOps.empty_leaf() == false
end
test "any_leaf/0 returns true" do
assert AtomBoolOps.any_leaf() == true
end
test "is_empty_leaf?/1" do
assert AtomBoolOps.is_empty_leaf?(false) == true
assert AtomBoolOps.is_empty_leaf?(true) == false
end
test "union_leaves/3" do
assert AtomBoolOps.union_leaves(%{}, false, false) == false
assert AtomBoolOps.union_leaves(%{}, true, false) == true
assert AtomBoolOps.union_leaves(%{}, false, true) == true
assert AtomBoolOps.union_leaves(%{}, true, true) == true
end
test "intersection_leaves/3" do
assert AtomBoolOps.intersection_leaves(%{}, false, false) == false
assert AtomBoolOps.intersection_leaves(%{}, true, false) == false
assert AtomBoolOps.intersection_leaves(%{}, false, true) == false
assert AtomBoolOps.intersection_leaves(%{}, true, true) == true
end
test "negation_leaf/2" do
assert AtomBoolOps.negation_leaf(%{}, false) == true
assert AtomBoolOps.negation_leaf(%{}, true) == false
end
end
describe "test_leaf_value/1" do
test "returns :empty for false" do
assert AtomBoolOps.test_leaf_value(false) == :empty
end
test "returns :full for true" do
assert AtomBoolOps.test_leaf_value(true) == :full
end
# Conceptual test if atoms had other leaf values
# test "returns :other for other values" do
# assert AtomBoolOps.test_leaf_value(:some_other_leaf_marker) == :other
# end
end
end

defmodule Tilly.BDD.IntegerBoolOpsTest do
use ExUnit.Case, async: true
alias Tilly.BDD.IntegerBoolOps
describe "compare_elements/2" do
test "correctly compares integers" do
assert IntegerBoolOps.compare_elements(1, 2) == :lt
assert IntegerBoolOps.compare_elements(2, 1) == :gt
assert IntegerBoolOps.compare_elements(1, 1) == :eq
end
end
describe "equal_element?/2" do
test "correctly checks equality of integers" do
assert IntegerBoolOps.equal_element?(1, 1) == true
assert IntegerBoolOps.equal_element?(1, 2) == false
end
end
describe "hash_element/1" do
test "returns the integer itself as hash" do
assert IntegerBoolOps.hash_element(123) == 123
assert IntegerBoolOps.hash_element(-5) == -5
end
end
describe "leaf operations" do
test "empty_leaf/0 returns false" do
assert IntegerBoolOps.empty_leaf() == false
end
test "any_leaf/0 returns true" do
assert IntegerBoolOps.any_leaf() == true
end
test "is_empty_leaf?/1" do
assert IntegerBoolOps.is_empty_leaf?(false) == true
assert IntegerBoolOps.is_empty_leaf?(true) == false
end
end
describe "union_leaves/3" do
test "computes boolean OR" do
assert IntegerBoolOps.union_leaves(%{}, true, true) == true
assert IntegerBoolOps.union_leaves(%{}, true, false) == true
assert IntegerBoolOps.union_leaves(%{}, false, true) == true
assert IntegerBoolOps.union_leaves(%{}, false, false) == false
end
end
describe "intersection_leaves/3" do
test "computes boolean AND" do
assert IntegerBoolOps.intersection_leaves(%{}, true, true) == true
assert IntegerBoolOps.intersection_leaves(%{}, true, false) == false
assert IntegerBoolOps.intersection_leaves(%{}, false, true) == false
assert IntegerBoolOps.intersection_leaves(%{}, false, false) == false
end
end
describe "negation_leaf/2" do
test "computes boolean NOT" do
assert IntegerBoolOps.negation_leaf(%{}, true) == false
assert IntegerBoolOps.negation_leaf(%{}, false) == true
end
end
end
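The leaf operations tested above are ordinary boolean algebra, so they obey the usual laws. A quick standalone check (plain booleans, no project code) that or/and/not satisfy De Morgan's law over every leaf combination:

```elixir
# not(a or b) == (not a) and (not b), for all boolean leaves a, b.
for a <- [true, false], b <- [true, false] do
  true = (not (a or b)) == ((not a) and (not b))
end
```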

defmodule Tilly.BDD.NodeTest do
use ExUnit.Case, async: true
alias Tilly.BDD.Node
describe "Smart Constructors" do
test "mk_true/0 returns true" do
assert Node.mk_true() == true
end
test "mk_false/0 returns false" do
assert Node.mk_false() == false
end
test "mk_leaf/1 creates a leaf node" do
assert Node.mk_leaf(:some_value) == {:leaf, :some_value}
assert Node.mk_leaf(123) == {:leaf, 123}
end
test "mk_split/4 creates a split node" do
assert Node.mk_split(:el, :p_id, :i_id, :n_id) == {:split, :el, :p_id, :i_id, :n_id}
end
end
describe "Predicates" do
setup do
%{
true_node: Node.mk_true(),
false_node: Node.mk_false(),
leaf_node: Node.mk_leaf("data"),
split_node: Node.mk_split(1, 2, 3, 4)
}
end
test "is_true?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
assert Node.is_true?(t) == true
assert Node.is_true?(f) == false
assert Node.is_true?(l) == false
assert Node.is_true?(s) == false
end
test "is_false?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
assert Node.is_false?(f) == true
assert Node.is_false?(t) == false
assert Node.is_false?(l) == false
assert Node.is_false?(s) == false
end
test "is_leaf?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
assert Node.is_leaf?(l) == true
assert Node.is_leaf?(t) == false
assert Node.is_leaf?(f) == false
assert Node.is_leaf?(s) == false
end
test "is_split?/1", %{true_node: t, false_node: f, leaf_node: l, split_node: s} do
assert Node.is_split?(s) == true
assert Node.is_split?(t) == false
assert Node.is_split?(f) == false
assert Node.is_split?(l) == false
end
end
describe "Accessors" do
setup do
%{
leaf_node: Node.mk_leaf("leaf_data"),
split_node: Node.mk_split(:elem_id, :pos_child, :ign_child, :neg_child)
}
end
test "value/1 for leaf node", %{leaf_node: l} do
assert Node.value(l) == "leaf_data"
end
test "value/1 raises for non-leaf node" do
assert_raise ArgumentError, ~r/Not a leaf node/, fn -> Node.value(Node.mk_true()) end
assert_raise ArgumentError, ~r/Not a leaf node/, fn ->
Node.value(Node.mk_split(1, 2, 3, 4))
end
end
test "element/1 for split node", %{split_node: s} do
assert Node.element(s) == :elem_id
end
test "element/1 raises for non-split node" do
assert_raise ArgumentError, ~r/Not a split node/, fn -> Node.element(Node.mk_true()) end
assert_raise ArgumentError, ~r/Not a split node/, fn -> Node.element(Node.mk_leaf(1)) end
end
test "positive_child/1 for split node", %{split_node: s} do
assert Node.positive_child(s) == :pos_child
end
test "positive_child/1 raises for non-split node" do
assert_raise ArgumentError, ~r/Not a split node/, fn ->
Node.positive_child(Node.mk_leaf(1))
end
end
test "ignore_child/1 for split node", %{split_node: s} do
assert Node.ignore_child(s) == :ign_child
end
test "ignore_child/1 raises for non-split node" do
assert_raise ArgumentError, ~r/Not a split node/, fn ->
Node.ignore_child(Node.mk_leaf(1))
end
end
test "negative_child/1 for split node", %{split_node: s} do
assert Node.negative_child(s) == :neg_child
end
test "negative_child/1 raises for non-split node" do
assert_raise ArgumentError, ~r/Not a split node/, fn ->
Node.negative_child(Node.mk_leaf(1))
end
end
end
end
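The split-node accessors above are easier to read with a toy interpreter alongside them. A sketch (hypothetical `SplitEval`, assuming the `{:split, element, positive, ignore, negative}` shape with plain boolean terminals, and taking the ignore branch to apply regardless of whether the element holds — consistent with the union rules exercised in the Ops tests):

```elixir
defmodule SplitEval do
  # Boolean terminals decide membership directly.
  def holds?(true, _set), do: true
  def holds?(false, _set), do: false

  # A split node tests its element against the set: follow the positive
  # child when present, the negative child when absent; the ignore child
  # contributes in either case.
  def holds?({:split, elem, p, i, n}, set) do
    branch = if MapSet.member?(set, elem), do: p, else: n
    holds?(branch, set) or holds?(i, set)
  end
end

bdd = {:split, :a, true, false, false}    # "contains :a"
SplitEval.holds?(bdd, MapSet.new([:a]))   # => true
SplitEval.holds?(bdd, MapSet.new([:b]))   # => false
```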

defmodule Tilly.BDD.OpsTest do
use ExUnit.Case, async: true
alias Tilly.BDD
alias Tilly.BDD.Node
alias Tilly.BDD.Ops
alias Tilly.BDD.IntegerBoolOps # Using a concrete ops_module for testing
setup do
typing_ctx = BDD.init_bdd_store(%{})
# Pre-intern some common elements for tests if needed, e.g., integers
# For now, rely on ops to intern elements as they are used.
%{initial_ctx: typing_ctx}
end
describe "leaf/3" do
test "interning an empty leaf value returns predefined false_id", %{initial_ctx: ctx} do
{new_ctx, node_id} = Ops.leaf(ctx, false, IntegerBoolOps)
assert node_id == BDD.false_node_id()
assert new_ctx.bdd_store.ops_cache == ctx.bdd_store.ops_cache # Cache not used for this path
end
test "interning a full leaf value returns predefined true_id", %{initial_ctx: ctx} do
{new_ctx, node_id} = Ops.leaf(ctx, true, IntegerBoolOps)
assert node_id == BDD.true_node_id()
assert new_ctx.bdd_store.ops_cache == ctx.bdd_store.ops_cache
end
@tag :skip
test "interning a new 'other' leaf value returns a new ID", %{initial_ctx: _ctx} do
# Assuming IntegerBoolOps.test_leaf_value/1 would return :other for non-booleans
# For this test, we'd need an ops_module where e.g. an integer is an :other leaf.
# Let's simulate with a mock or by extending IntegerBoolOps if it were not read-only.
# For now, this test is conceptual for boolean leaves.
# If IntegerBoolOps was extended:
# defmodule MockIntegerOps do
# defdelegate compare_elements(e1, e2), to: IntegerBoolOps
# defdelegate equal_element?(e1, e2), to: IntegerBoolOps
# # ... other delegates
# def test_leaf_value(10), do: :other # Treat 10 as a specific leaf
# def test_leaf_value(true), do: :full
# def test_leaf_value(false), do: :empty
# end
# {ctx_after_intern, node_id} = Ops.leaf(ctx, 10, MockIntegerOps)
# assert node_id != BDD.true_node_id() and node_id != BDD.false_node_id()
# assert BDD.get_node_data(ctx_after_intern, node_id).structure == Node.mk_leaf(10)
# Placeholder for more complex leaf types. Test is skipped.
end
end
describe "split/6 basic simplifications" do
test "if i_id is true, returns true_id", %{initial_ctx: ctx} do
{_p_ctx, p_id} = Ops.leaf(ctx, false, IntegerBoolOps) # dummy
{_n_ctx, n_id} = Ops.leaf(ctx, false, IntegerBoolOps) # dummy
true_id = BDD.true_node_id()
{new_ctx, result_id} = Ops.split(ctx, 10, p_id, true_id, n_id, IntegerBoolOps)
assert result_id == true_id
assert new_ctx == ctx # No new nodes or cache entries expected for this rule
end
test "if p_id == n_id and p_id == i_id, returns p_id", %{initial_ctx: ctx} do
{ctx, p_id} = BDD.get_or_intern_node(ctx, Node.mk_leaf(false), IntegerBoolOps) # some leaf
i_id = p_id
n_id = p_id
{_new_ctx, result_id} = Ops.split(ctx, 10, p_id, i_id, n_id, IntegerBoolOps)
assert result_id == p_id
# Cache might be touched if union_bdds was called, but this rule is direct.
# For p_id == i_id, it's direct.
end
test "if p_id == n_id and p_id != i_id, returns union(p_id, i_id)", %{initial_ctx: ctx} do
{ctx, p_id} = BDD.get_or_intern_node(ctx, Node.mk_leaf(false), IntegerBoolOps)
{ctx, i_id} = BDD.get_or_intern_node(ctx, Node.mk_leaf(true), IntegerBoolOps) # different leaf
n_id = p_id
# Expected union of p_id (false_leaf) and i_id (true_leaf) is true_id
# This relies on union_bdds working.
{_new_ctx, result_id} = Ops.split(ctx, 10, p_id, i_id, n_id, IntegerBoolOps)
expected_union_id = BDD.true_node_id() # Union of false_leaf and true_leaf
assert result_id == expected_union_id
end
test "interns a new split node if no simplification rule applies", %{initial_ctx: ctx} do
{ctx, p_id} = Ops.leaf(ctx, false, IntegerBoolOps) # false_node_id
{ctx, i_id} = Ops.leaf(ctx, false, IntegerBoolOps) # false_node_id
{ctx, n_id} = Ops.leaf(ctx, true, IntegerBoolOps) # true_node_id (different from p_id)
element = 20
{new_ctx, split_node_id} = Ops.split(ctx, element, p_id, i_id, n_id, IntegerBoolOps)
assert split_node_id != p_id and split_node_id != i_id and split_node_id != n_id
assert split_node_id != BDD.true_node_id() and split_node_id != BDD.false_node_id()
node_data = BDD.get_node_data(new_ctx, split_node_id)
assert node_data.structure == Node.mk_split(element, p_id, i_id, n_id)
assert node_data.ops_module == IntegerBoolOps
assert new_ctx.bdd_store.next_node_id > ctx.bdd_store.next_node_id
end
end
describe "union_bdds/3" do
test "A U A = A", %{initial_ctx: ctx} do
{ctx, a_id} = Ops.leaf(ctx, false, IntegerBoolOps) # false_node_id
{new_ctx, result_id} = Ops.union_bdds(ctx, a_id, a_id)
assert result_id == a_id
assert Map.has_key?(new_ctx.bdd_store.ops_cache, {:union, a_id, a_id})
end
test "A U True = True", %{initial_ctx: ctx} do
{ctx, a_id} = Ops.leaf(ctx, false, IntegerBoolOps)
true_id = BDD.true_node_id()
{_new_ctx, result_id} = Ops.union_bdds(ctx, a_id, true_id)
assert result_id == true_id
end
test "A U False = A", %{initial_ctx: ctx} do
{ctx, a_id} = Ops.leaf(ctx, true, IntegerBoolOps) # true_node_id
false_id = BDD.false_node_id()
{_new_ctx, result_id} = Ops.union_bdds(ctx, a_id, false_id)
assert result_id == a_id
end
test "union of two distinct leaves", %{initial_ctx: ctx} do
# leaf(false) U leaf(true) = leaf(true OR false) = leaf(true) -> true_node_id
{ctx, leaf_false_id} = Ops.leaf(ctx, false, IntegerBoolOps)
{ctx, leaf_true_id} = Ops.leaf(ctx, true, IntegerBoolOps) # This is BDD.true_node_id()
{_new_ctx, result_id} = Ops.union_bdds(ctx, leaf_false_id, leaf_true_id)
assert result_id == BDD.true_node_id()
end
test "union of two simple split nodes with same element", %{initial_ctx: ctx} do
# BDD1: split(10, True, False, False)
# BDD2: split(10, False, True, False)
# Union: split(10, True U False, False U True, False U False)
# = split(10, True, True, False)
true_id = BDD.true_node_id()
false_id = BDD.false_node_id()
{ctx, bdd1_id} = Ops.split(ctx, 10, true_id, false_id, false_id, IntegerBoolOps)
{ctx, bdd2_id} = Ops.split(ctx, 10, false_id, true_id, false_id, IntegerBoolOps)
{final_ctx, union_id} = Ops.union_bdds(ctx, bdd1_id, bdd2_id)
# Expected structure
{_final_ctx, expected_bdd_id} = Ops.split(final_ctx, 10, true_id, true_id, false_id, IntegerBoolOps)
assert union_id == expected_bdd_id
end
test "union of two simple split nodes with different elements (x1 < x2)", %{initial_ctx: ctx} do
# BDD1: split(10, True, False, False)
# BDD2: split(20, False, True, False)
# Union (x1 < x2): split(10, p1, i1 U BDD2, n1)
# = split(10, True, False U BDD2, False)
# = split(10, True, BDD2, False)
{ctx, bdd1_p1_id} = Ops.leaf(ctx, true, IntegerBoolOps)
{ctx, bdd1_i1_id} = Ops.leaf(ctx, false, IntegerBoolOps)
{ctx, bdd1_n1_id} = Ops.leaf(ctx, false, IntegerBoolOps)
{ctx, bdd1_id} = Ops.split(ctx, 10, bdd1_p1_id, bdd1_i1_id, bdd1_n1_id, IntegerBoolOps)
{ctx, bdd2_p2_id} = Ops.leaf(ctx, false, IntegerBoolOps)
{ctx, bdd2_i2_id} = Ops.leaf(ctx, true, IntegerBoolOps)
{ctx, bdd2_n2_id} = Ops.leaf(ctx, false, IntegerBoolOps)
{ctx, bdd2_id} = Ops.split(ctx, 20, bdd2_p2_id, bdd2_i2_id, bdd2_n2_id, IntegerBoolOps)
{final_ctx, union_id} = Ops.union_bdds(ctx, bdd1_id, bdd2_id)
# Expected structure: split(10, True, BDD2, False)
{_final_ctx, expected_bdd_id} = Ops.split(final_ctx, 10, bdd1_p1_id, bdd2_id, bdd1_n1_id, IntegerBoolOps)
assert union_id == expected_bdd_id
end
test "uses cache for repeated union operations", %{initial_ctx: ctx} do
{ctx, a_id} = Ops.leaf(ctx, false, IntegerBoolOps)
{ctx, b_id} = Ops.leaf(ctx, true, IntegerBoolOps)
{ctx_after_first_union, _result1_id} = Ops.union_bdds(ctx, a_id, b_id)
cache_after_first = ctx_after_first_union.bdd_store.ops_cache
{ctx_after_second_union, _result2_id} = Ops.union_bdds(ctx_after_first_union, a_id, b_id)
# The BDD store itself (nodes, next_id) should not change on a cache hit.
# The ops_cache map reference will be the same if the result was cached.
assert ctx_after_second_union.bdd_store.ops_cache == cache_after_first
assert ctx_after_second_union.bdd_store.next_node_id == ctx_after_first_union.bdd_store.next_node_id
end
end
end
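The element-ordering rules spelled out in the union test comments above can be sketched as a structural union over bare node tuples (hypothetical `UnionSketch`, assuming comparable elements and boolean terminals; no interning, ops_cache, or ops_module dispatch, unlike `Tilly.BDD.Ops`):

```elixir
defmodule UnionSketch do
  # Terminal cases: True absorbs, False is the identity.
  def union(true, _), do: true
  def union(_, true), do: true
  def union(false, b), do: b
  def union(a, false), do: a

  # Same element: union children pointwise.
  def union({:split, x, p1, i1, n1}, {:split, x, p2, i2, n2}),
    do: {:split, x, union(p1, p2), union(i1, i2), union(n1, n2)}

  # x1 < x2: the whole second BDD merges into the ignore child.
  def union({:split, x1, p1, i1, n1}, {:split, x2, _, _, _} = b) when x1 < x2,
    do: {:split, x1, p1, union(i1, b), n1}

  # x2 < x1: symmetric case.
  def union({:split, x1, _, _, _} = a, {:split, x2, p2, i2, n2}) when x2 < x1,
    do: {:split, x2, p2, union(a, i2), n2}
end

# Same-element case from the tests: split(10,T,F,F) U split(10,F,T,F)
UnionSketch.union({:split, 10, true, false, false}, {:split, 10, false, true, false})
# => {:split, 10, true, true, false}
```

The real implementation additionally hash-conses the result and applies the `split/6` simplifications (e.g. an ignore child of True collapsing to True), which this sketch omits.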
# # The ops_cache map reference will be the same if the result was cached.
# assert ctx_after_second_union.bdd_store.ops_cache == cache_after_first
# assert ctx_after_second_union.bdd_store.next_node_id == ctx_after_first_union.bdd_store.next_node_id
# end
# end
# end
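The commented-out `union_bdds/3` tests above encode the recursion for unioning ternary BDDs (split nodes with positive/ignore/negative branches): same element unions branch-wise, and when elements differ the whole other BDD is folded into the "ignore" branch of the node with the smaller element. A minimal, self-contained sketch of just that recursion (hypothetical module name; the real `Ops.union_bdds` additionally threads a `ctx`, interns nodes, and caches results):

```elixir
defmodule TernaryBddUnion do
  @moduledoc false
  # bdd ::= true | false | {:split, element, pos, ignore, neg}

  def union(a, a), do: a
  def union(true, _b), do: true
  def union(_a, true), do: true
  def union(false, b), do: b
  def union(a, false), do: a

  def union({:split, x1, p1, i1, n1} = a, {:split, x2, p2, i2, n2} = b) do
    cond do
      x1 == x2 ->
        # Same split element: union each of the three branches pairwise.
        {:split, x1, union(p1, p2), union(i1, i2), union(n1, n2)}

      x1 < x2 ->
        # b only mentions larger elements, so it lands in a's "ignore" branch.
        {:split, x1, p1, union(i1, b), n1}

      true ->
        {:split, x2, p2, union(a, i2), n2}
    end
  end
end
```

With this sketch, `union({:split, 10, true, false, false}, {:split, 10, false, true, false})` yields `{:split, 10, true, true, false}`, matching the "same element" test, and the "x1 < x2" test's expectation of `split(10, True, BDD2, False)` falls out of the second `cond` branch.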

View File

@@ -1,72 +1,72 @@
defmodule Tilly.BDD.StringBoolOpsTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.StringBoolOps

  describe "compare_elements/2" do
    test "correctly compares strings" do
      assert StringBoolOps.compare_elements("apple", "banana") == :lt
      assert StringBoolOps.compare_elements("banana", "apple") == :gt
      assert StringBoolOps.compare_elements("cherry", "cherry") == :eq
    end
  end

  describe "equal_element?/2" do
    test "correctly checks string equality" do
      assert StringBoolOps.equal_element?("apple", "apple") == true
      assert StringBoolOps.equal_element?("apple", "banana") == false
    end
  end

  describe "hash_element/1" do
    test "hashes strings consistently" do
      assert is_integer(StringBoolOps.hash_element("foo"))
      assert StringBoolOps.hash_element("foo") == StringBoolOps.hash_element("foo")
      assert StringBoolOps.hash_element("foo") != StringBoolOps.hash_element("bar")
    end
  end

  describe "leaf operations" do
    test "empty_leaf/0 returns false" do
      assert StringBoolOps.empty_leaf() == false
    end

    test "any_leaf/0 returns true" do
      assert StringBoolOps.any_leaf() == true
    end

    test "is_empty_leaf?/1" do
      assert StringBoolOps.is_empty_leaf?(false) == true
      assert StringBoolOps.is_empty_leaf?(true) == false
    end

    test "union_leaves/3" do
      assert StringBoolOps.union_leaves(%{}, false, false) == false
      assert StringBoolOps.union_leaves(%{}, true, false) == true
      assert StringBoolOps.union_leaves(%{}, false, true) == true
      assert StringBoolOps.union_leaves(%{}, true, true) == true
    end

    test "intersection_leaves/3" do
      assert StringBoolOps.intersection_leaves(%{}, false, false) == false
      assert StringBoolOps.intersection_leaves(%{}, true, false) == false
      assert StringBoolOps.intersection_leaves(%{}, false, true) == false
      assert StringBoolOps.intersection_leaves(%{}, true, true) == true
    end

    test "negation_leaf/2" do
      assert StringBoolOps.negation_leaf(%{}, false) == true
      assert StringBoolOps.negation_leaf(%{}, true) == false
    end
  end

  describe "test_leaf_value/1" do
    test "returns :empty for false" do
      assert StringBoolOps.test_leaf_value(false) == :empty
    end

    test "returns :full for true" do
      assert StringBoolOps.test_leaf_value(true) == :full
    end
  end
end
# defmodule Tilly.BDD.StringBoolOpsTest do
# use ExUnit.Case, async: true
#
# alias Tilly.BDD.StringBoolOps
#
# describe "compare_elements/2" do
# test "correctly compares strings" do
# assert StringBoolOps.compare_elements("apple", "banana") == :lt
# assert StringBoolOps.compare_elements("banana", "apple") == :gt
# assert StringBoolOps.compare_elements("cherry", "cherry") == :eq
# end
# end
#
# describe "equal_element?/2" do
# test "correctly checks string equality" do
# assert StringBoolOps.equal_element?("apple", "apple") == true
# assert StringBoolOps.equal_element?("apple", "banana") == false
# end
# end
#
# describe "hash_element/1" do
# test "hashes strings consistently" do
# assert is_integer(StringBoolOps.hash_element("foo"))
# assert StringBoolOps.hash_element("foo") == StringBoolOps.hash_element("foo")
# assert StringBoolOps.hash_element("foo") != StringBoolOps.hash_element("bar")
# end
# end
#
# describe "leaf operations" do
# test "empty_leaf/0 returns false" do
# assert StringBoolOps.empty_leaf() == false
# end
#
# test "any_leaf/0 returns true" do
# assert StringBoolOps.any_leaf() == true
# end
#
# test "is_empty_leaf?/1" do
# assert StringBoolOps.is_empty_leaf?(false) == true
# assert StringBoolOps.is_empty_leaf?(true) == false
# end
#
# test "union_leaves/3" do
# assert StringBoolOps.union_leaves(%{}, false, false) == false
# assert StringBoolOps.union_leaves(%{}, true, false) == true
# assert StringBoolOps.union_leaves(%{}, false, true) == true
# assert StringBoolOps.union_leaves(%{}, true, true) == true
# end
#
# test "intersection_leaves/3" do
# assert StringBoolOps.intersection_leaves(%{}, false, false) == false
# assert StringBoolOps.intersection_leaves(%{}, true, false) == false
# assert StringBoolOps.intersection_leaves(%{}, false, true) == false
# assert StringBoolOps.intersection_leaves(%{}, true, true) == true
# end
#
# test "negation_leaf/2" do
# assert StringBoolOps.negation_leaf(%{}, false) == true
# assert StringBoolOps.negation_leaf(%{}, true) == false
# end
# end
#
# describe "test_leaf_value/1" do
# test "returns :empty for false" do
# assert StringBoolOps.test_leaf_value(false) == :empty
# end
#
# test "returns :full for true" do
# assert StringBoolOps.test_leaf_value(true) == :full
# end
# end
# end

View File

@@ -1,163 +1,163 @@
defmodule Tilly.BDDTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD.Node

  describe "init_bdd_store/1" do
    test "initializes bdd_store in typing_ctx with predefined false and true nodes" do
      typing_ctx = %{}
      new_ctx = Tilly.BDD.init_bdd_store(typing_ctx)

      assert %{bdd_store: bdd_store} = new_ctx
      assert is_map(bdd_store.nodes_by_structure)
      assert is_map(bdd_store.structures_by_id)
      assert bdd_store.next_node_id == 2 # 0 for false, 1 for true
      assert bdd_store.ops_cache == %{}

      # Check false node
      false_id = Tilly.BDD.false_node_id()
      false_ops_module = Tilly.BDD.universal_ops_module()
      assert bdd_store.nodes_by_structure[{Node.mk_false(), false_ops_module}] == false_id
      assert bdd_store.structures_by_id[false_id] == %{structure: Node.mk_false(), ops_module: false_ops_module}

      # Check true node
      true_id = Tilly.BDD.true_node_id()
      true_ops_module = Tilly.BDD.universal_ops_module()
      assert bdd_store.nodes_by_structure[{Node.mk_true(), true_ops_module}] == true_id
      assert bdd_store.structures_by_id[true_id] == %{structure: Node.mk_true(), ops_module: true_ops_module}
    end
  end

  describe "get_or_intern_node/3" do
    setup do
      typing_ctx = Tilly.BDD.init_bdd_store(%{})
      %{initial_ctx: typing_ctx}
    end

    test "interning Node.mk_false() returns predefined false_id and doesn't change store", %{initial_ctx: ctx} do
      false_ops_module = Tilly.BDD.universal_ops_module()
      {new_ctx, node_id} = Tilly.BDD.get_or_intern_node(ctx, Node.mk_false(), false_ops_module)
      assert node_id == Tilly.BDD.false_node_id()
      assert new_ctx.bdd_store == ctx.bdd_store
    end

    test "interning Node.mk_true() returns predefined true_id and doesn't change store", %{initial_ctx: ctx} do
      true_ops_module = Tilly.BDD.universal_ops_module()
      {new_ctx, node_id} = Tilly.BDD.get_or_intern_node(ctx, Node.mk_true(), true_ops_module)
      assert node_id == Tilly.BDD.true_node_id()
      assert new_ctx.bdd_store == ctx.bdd_store
    end

    test "interning a new leaf node returns a new ID and updates the store", %{initial_ctx: ctx} do
      leaf_structure = Node.mk_leaf("test_leaf")
      ops_mod = :my_ops

      {ctx_after_intern, node_id} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)

      assert node_id == 2 # Initial next_node_id
      assert ctx_after_intern.bdd_store.next_node_id == 3
      assert ctx_after_intern.bdd_store.nodes_by_structure[{leaf_structure, ops_mod}] == node_id
      assert ctx_after_intern.bdd_store.structures_by_id[node_id] == %{structure: leaf_structure, ops_module: ops_mod}
    end

    test "interning the same leaf node again returns the same ID and doesn't change store", %{initial_ctx: ctx} do
      leaf_structure = Node.mk_leaf("test_leaf")
      ops_mod = :my_ops

      {ctx_after_first_intern, first_node_id} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
      {ctx_after_second_intern, second_node_id} = Tilly.BDD.get_or_intern_node(ctx_after_first_intern, leaf_structure, ops_mod)

      assert first_node_id == second_node_id
      assert ctx_after_first_intern.bdd_store == ctx_after_second_intern.bdd_store
    end

    test "interning a new split node returns a new ID and updates the store", %{initial_ctx: ctx} do
      split_structure = Node.mk_split(:el, Tilly.BDD.true_node_id(), Tilly.BDD.false_node_id(), Tilly.BDD.true_node_id())
      ops_mod = :split_ops

      {ctx_after_intern, node_id} = Tilly.BDD.get_or_intern_node(ctx, split_structure, ops_mod)

      assert node_id == 2 # Initial next_node_id
      assert ctx_after_intern.bdd_store.next_node_id == 3
      assert ctx_after_intern.bdd_store.nodes_by_structure[{split_structure, ops_mod}] == node_id
      assert ctx_after_intern.bdd_store.structures_by_id[node_id] == %{structure: split_structure, ops_module: ops_mod}
    end

    test "interning structurally identical nodes with different ops_modules results in different IDs", %{initial_ctx: ctx} do
      leaf_structure = Node.mk_leaf("shared_leaf")
      ops_mod1 = :ops1
      ops_mod2 = :ops2

      {ctx1, id1} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod1)
      {_ctx2, id2} = Tilly.BDD.get_or_intern_node(ctx1, leaf_structure, ops_mod2)

      assert id1 != id2
      assert id1 == 2
      assert id2 == 3
    end

    test "raises ArgumentError if bdd_store is not initialized" do
      assert_raise ArgumentError, ~r/BDD store not initialized/, fn ->
        Tilly.BDD.get_or_intern_node(%{}, Node.mk_leaf("foo"), :ops)
      end
    end
  end

  describe "get_node_data/2" do
    setup do
      ctx = Tilly.BDD.init_bdd_store(%{})
      leaf_structure = Node.mk_leaf("data")
      ops_mod = :leaf_ops
      {new_ctx, leaf_id_val} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
      %{ctx: new_ctx, leaf_structure: leaf_structure, ops_mod: ops_mod, leaf_id: leaf_id_val}
    end

    test "returns correct data for false node", %{ctx: ctx} do
      false_id = Tilly.BDD.false_node_id()
      false_ops_module = Tilly.BDD.universal_ops_module()
      assert Tilly.BDD.get_node_data(ctx, false_id) == %{structure: Node.mk_false(), ops_module: false_ops_module}
    end

    test "returns correct data for true node", %{ctx: ctx} do
      true_id = Tilly.BDD.true_node_id()
      true_ops_module = Tilly.BDD.universal_ops_module()
      assert Tilly.BDD.get_node_data(ctx, true_id) == %{structure: Node.mk_true(), ops_module: true_ops_module}
    end

    test "returns correct data for a custom interned leaf node", %{ctx: ctx, leaf_structure: ls, ops_mod: om, leaf_id: id} do
      assert Tilly.BDD.get_node_data(ctx, id) == %{structure: ls, ops_module: om}
    end

    test "returns nil for an unknown node ID", %{ctx: ctx} do
      assert Tilly.BDD.get_node_data(ctx, 999) == nil
    end

    test "returns nil if bdd_store not in ctx" do
      assert Tilly.BDD.get_node_data(%{}, 0) == nil
    end
  end

  describe "is_false_node?/2 and is_true_node?/2" do
    setup do
      ctx = Tilly.BDD.init_bdd_store(%{})
      leaf_structure = Node.mk_leaf("data")
      ops_mod = :leaf_ops
      {new_ctx, leaf_id_val} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
      %{ctx: new_ctx, leaf_id: leaf_id_val}
    end

    test "is_false_node?/2", %{ctx: ctx, leaf_id: id} do
      assert Tilly.BDD.is_false_node?(ctx, Tilly.BDD.false_node_id()) == true
      assert Tilly.BDD.is_false_node?(ctx, Tilly.BDD.true_node_id()) == false
      assert Tilly.BDD.is_false_node?(ctx, id) == false
      assert Tilly.BDD.is_false_node?(ctx, 999) == false # Unknown ID
    end

    test "is_true_node?/2", %{ctx: ctx, leaf_id: id} do
      assert Tilly.BDD.is_true_node?(ctx, Tilly.BDD.true_node_id()) == true
      assert Tilly.BDD.is_true_node?(ctx, Tilly.BDD.false_node_id()) == false
      assert Tilly.BDD.is_true_node?(ctx, id) == false
      assert Tilly.BDD.is_true_node?(ctx, 999) == false # Unknown ID
    end
  end
end
# defmodule Tilly.BDDTest do
# use ExUnit.Case, async: true
#
# alias Tilly.BDD.Node
#
# describe "init_bdd_store/1" do
# test "initializes bdd_store in typing_ctx with predefined false and true nodes" do
# typing_ctx = %{}
# new_ctx = Tilly.BDD.init_bdd_store(typing_ctx)
#
# assert %{bdd_store: bdd_store} = new_ctx
# assert is_map(bdd_store.nodes_by_structure)
# assert is_map(bdd_store.structures_by_id)
# assert bdd_store.next_node_id == 2 # 0 for false, 1 for true
# assert bdd_store.ops_cache == %{}
#
# # Check false node
# false_id = Tilly.BDD.false_node_id()
# false_ops_module = Tilly.BDD.universal_ops_module()
# assert bdd_store.nodes_by_structure[{Node.mk_false(), false_ops_module}] == false_id
# assert bdd_store.structures_by_id[false_id] == %{structure: Node.mk_false(), ops_module: false_ops_module}
#
# # Check true node
# true_id = Tilly.BDD.true_node_id()
# true_ops_module = Tilly.BDD.universal_ops_module()
# assert bdd_store.nodes_by_structure[{Node.mk_true(), true_ops_module}] == true_id
# assert bdd_store.structures_by_id[true_id] == %{structure: Node.mk_true(), ops_module: true_ops_module}
# end
# end
#
# describe "get_or_intern_node/3" do
# setup do
# typing_ctx = Tilly.BDD.init_bdd_store(%{})
# %{initial_ctx: typing_ctx}
# end
#
# test "interning Node.mk_false() returns predefined false_id and doesn't change store", %{initial_ctx: ctx} do
# false_ops_module = Tilly.BDD.universal_ops_module()
# {new_ctx, node_id} = Tilly.BDD.get_or_intern_node(ctx, Node.mk_false(), false_ops_module)
# assert node_id == Tilly.BDD.false_node_id()
# assert new_ctx.bdd_store == ctx.bdd_store
# end
#
# test "interning Node.mk_true() returns predefined true_id and doesn't change store", %{initial_ctx: ctx} do
# true_ops_module = Tilly.BDD.universal_ops_module()
# {new_ctx, node_id} = Tilly.BDD.get_or_intern_node(ctx, Node.mk_true(), true_ops_module)
# assert node_id == Tilly.BDD.true_node_id()
# assert new_ctx.bdd_store == ctx.bdd_store
# end
#
# test "interning a new leaf node returns a new ID and updates the store", %{initial_ctx: ctx} do
# leaf_structure = Node.mk_leaf("test_leaf")
# ops_mod = :my_ops
#
# {ctx_after_intern, node_id} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
#
# assert node_id == 2 # Initial next_node_id
# assert ctx_after_intern.bdd_store.next_node_id == 3
# assert ctx_after_intern.bdd_store.nodes_by_structure[{leaf_structure, ops_mod}] == node_id
# assert ctx_after_intern.bdd_store.structures_by_id[node_id] == %{structure: leaf_structure, ops_module: ops_mod}
# end
#
# test "interning the same leaf node again returns the same ID and doesn't change store", %{initial_ctx: ctx} do
# leaf_structure = Node.mk_leaf("test_leaf")
# ops_mod = :my_ops
#
# {ctx_after_first_intern, first_node_id} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
# {ctx_after_second_intern, second_node_id} = Tilly.BDD.get_or_intern_node(ctx_after_first_intern, leaf_structure, ops_mod)
#
# assert first_node_id == second_node_id
# assert ctx_after_first_intern.bdd_store == ctx_after_second_intern.bdd_store
# end
#
# test "interning a new split node returns a new ID and updates the store", %{initial_ctx: ctx} do
# split_structure = Node.mk_split(:el, Tilly.BDD.true_node_id(), Tilly.BDD.false_node_id(), Tilly.BDD.true_node_id())
# ops_mod = :split_ops
#
# {ctx_after_intern, node_id} = Tilly.BDD.get_or_intern_node(ctx, split_structure, ops_mod)
#
# assert node_id == 2 # Initial next_node_id
# assert ctx_after_intern.bdd_store.next_node_id == 3
# assert ctx_after_intern.bdd_store.nodes_by_structure[{split_structure, ops_mod}] == node_id
# assert ctx_after_intern.bdd_store.structures_by_id[node_id] == %{structure: split_structure, ops_module: ops_mod}
# end
#
# test "interning structurally identical nodes with different ops_modules results in different IDs", %{initial_ctx: ctx} do
# leaf_structure = Node.mk_leaf("shared_leaf")
# ops_mod1 = :ops1
# ops_mod2 = :ops2
#
# {ctx1, id1} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod1)
# {_ctx2, id2} = Tilly.BDD.get_or_intern_node(ctx1, leaf_structure, ops_mod2)
#
# assert id1 != id2
# assert id1 == 2
# assert id2 == 3
# end
#
# test "raises ArgumentError if bdd_store is not initialized" do
# assert_raise ArgumentError, ~r/BDD store not initialized/, fn ->
# Tilly.BDD.get_or_intern_node(%{}, Node.mk_leaf("foo"), :ops)
# end
# end
# end
#
# describe "get_node_data/2" do
# setup do
# ctx = Tilly.BDD.init_bdd_store(%{})
# leaf_structure = Node.mk_leaf("data")
# ops_mod = :leaf_ops
# {new_ctx, leaf_id_val} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
# %{ctx: new_ctx, leaf_structure: leaf_structure, ops_mod: ops_mod, leaf_id: leaf_id_val}
# end
#
# test "returns correct data for false node", %{ctx: ctx} do
# false_id = Tilly.BDD.false_node_id()
# false_ops_module = Tilly.BDD.universal_ops_module()
# assert Tilly.BDD.get_node_data(ctx, false_id) == %{structure: Node.mk_false(), ops_module: false_ops_module}
# end
#
# test "returns correct data for true node", %{ctx: ctx} do
# true_id = Tilly.BDD.true_node_id()
# true_ops_module = Tilly.BDD.universal_ops_module()
# assert Tilly.BDD.get_node_data(ctx, true_id) == %{structure: Node.mk_true(), ops_module: true_ops_module}
# end
#
# test "returns correct data for a custom interned leaf node", %{ctx: ctx, leaf_structure: ls, ops_mod: om, leaf_id: id} do
# assert Tilly.BDD.get_node_data(ctx, id) == %{structure: ls, ops_module: om}
# end
#
# test "returns nil for an unknown node ID", %{ctx: ctx} do
# assert Tilly.BDD.get_node_data(ctx, 999) == nil
# end
#
# test "returns nil if bdd_store not in ctx" do
# assert Tilly.BDD.get_node_data(%{}, 0) == nil
# end
# end
#
# describe "is_false_node?/2 and is_true_node?/2" do
# setup do
# ctx = Tilly.BDD.init_bdd_store(%{})
# leaf_structure = Node.mk_leaf("data")
# ops_mod = :leaf_ops
# {new_ctx, leaf_id_val} = Tilly.BDD.get_or_intern_node(ctx, leaf_structure, ops_mod)
# %{ctx: new_ctx, leaf_id: leaf_id_val}
# end
#
# test "is_false_node?/2", %{ctx: ctx, leaf_id: id} do
# assert Tilly.BDD.is_false_node?(ctx, Tilly.BDD.false_node_id()) == true
# assert Tilly.BDD.is_false_node?(ctx, Tilly.BDD.true_node_id()) == false
# assert Tilly.BDD.is_false_node?(ctx, id) == false
# assert Tilly.BDD.is_false_node?(ctx, 999) == false # Unknown ID
# end
#
# test "is_true_node?/2", %{ctx: ctx, leaf_id: id} do
# assert Tilly.BDD.is_true_node?(ctx, Tilly.BDD.true_node_id()) == true
# assert Tilly.BDD.is_true_node?(ctx, Tilly.BDD.false_node_id()) == false
# assert Tilly.BDD.is_true_node?(ctx, id) == false
# assert Tilly.BDD.is_true_node?(ctx, 999) == false # Unknown ID
# end
# end
# end
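The commented-out `get_or_intern_node/3` tests above pin down hash-consing behavior: keying on `{structure, ops_module}`, returning the existing ID (and an unchanged store) on a repeat, and reserving IDs 0/1 for the canonical false/true nodes. A minimal sketch of that interning discipline over a bare map (hypothetical module; the real store lives under `ctx.bdd_store` and also carries an `ops_cache`):

```elixir
defmodule InternSketch do
  @moduledoc false
  # IDs 0 and 1 are reserved for the canonical false/true nodes,
  # so fresh IDs start at 2.
  def init, do: %{nodes_by_structure: %{}, structures_by_id: %{}, next_node_id: 2}

  def get_or_intern(store, structure, ops_module) do
    key = {structure, ops_module}

    case Map.fetch(store.nodes_by_structure, key) do
      {:ok, id} ->
        # Already interned: same store, same ID.
        {store, id}

      :error ->
        id = store.next_node_id

        store = %{
          store
          | nodes_by_structure: Map.put(store.nodes_by_structure, key, id),
            structures_by_id:
              Map.put(store.structures_by_id, id, %{structure: structure, ops_module: ops_module}),
            next_node_id: id + 1
        }

        {store, id}
    end
  end
end
```

Because `ops_module` is part of the key, structurally identical nodes under different ops modules get distinct IDs, which is exactly what the "different ops_modules results in different IDs" test asserts.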

View File

@@ -1,162 +1,162 @@
defmodule Tilly.Type.OpsTest do
  use ExUnit.Case, async: true

  alias Tilly.BDD
  alias Tilly.Type.Store
  alias Tilly.Type.Ops

  defp init_context do
    %{}
    |> BDD.init_bdd_store()
    |> Store.init_type_store()
  end

  describe "get_type_nothing/1 and get_type_any/1" do
    test "get_type_nothing returns an interned Descr ID for the empty type" do
      ctx = init_context()
      {ctx_after_nothing, nothing_id} = Ops.get_type_nothing(ctx)
      assert Ops.is_empty_type?(ctx_after_nothing, nothing_id)
    end

    test "get_type_any returns an interned Descr ID for the universal type" do
      ctx = init_context()
      {ctx_after_any, any_id} = Ops.get_type_any(ctx)
      refute Ops.is_empty_type?(ctx_after_any, any_id)

      # Further check: any type negated should be nothing type
      {ctx1, neg_any_id} = Ops.negation_type(ctx_after_any, any_id)
      {ctx2, nothing_id} = Ops.get_type_nothing(ctx1)
      assert neg_any_id == nothing_id
    end
  end

  describe "literal type constructors" do
    test "create_atom_literal_type/2" do
      ctx = init_context()
      {ctx1, atom_foo_id} = Ops.create_atom_literal_type(ctx, :foo)
      {ctx2, atom_bar_id} = Ops.create_atom_literal_type(ctx1, :bar)
      {ctx3, atom_foo_again_id} = Ops.create_atom_literal_type(ctx2, :foo)

      refute Ops.is_empty_type?(ctx3, atom_foo_id)
      refute Ops.is_empty_type?(ctx3, atom_bar_id)
      assert atom_foo_id != atom_bar_id
      assert atom_foo_id == atom_foo_again_id

      # Test intersection: (:foo & :bar) should be Nothing
      {ctx4, intersection_id} = Ops.intersection_types(ctx3, atom_foo_id, atom_bar_id)
      assert Ops.is_empty_type?(ctx4, intersection_id)

      # Test union: (:foo | :bar) should not be empty
      {ctx5, union_id} = Ops.union_types(ctx4, atom_foo_id, atom_bar_id)
      refute Ops.is_empty_type?(ctx5, union_id)

      # Test negation: (not :foo) should not be empty and not be :foo
      {ctx6, not_foo_id} = Ops.negation_type(ctx5, atom_foo_id)
      refute Ops.is_empty_type?(ctx6, not_foo_id)
      {ctx7, intersection_not_foo_and_foo} = Ops.intersection_types(ctx6, atom_foo_id, not_foo_id)
      assert Ops.is_empty_type?(ctx7, intersection_not_foo_and_foo)
    end

    test "create_integer_literal_type/2" do
      ctx = init_context()
      {ctx1, int_1_id} = Ops.create_integer_literal_type(ctx, 1)
      {ctx2, int_2_id} = Ops.create_integer_literal_type(ctx1, 2)

      refute Ops.is_empty_type?(ctx2, int_1_id) # Use ctx2
      {ctx3, intersection_id} = Ops.intersection_types(ctx2, int_1_id, int_2_id)
      assert Ops.is_empty_type?(ctx3, intersection_id)
    end

    test "create_string_literal_type/2" do
      ctx = init_context()
      {ctx1, str_a_id} = Ops.create_string_literal_type(ctx, "a")
      {ctx2, str_b_id} = Ops.create_string_literal_type(ctx1, "b")

      refute Ops.is_empty_type?(ctx2, str_a_id) # Use ctx2
      {ctx3, intersection_id} = Ops.intersection_types(ctx2, str_a_id, str_b_id)
      assert Ops.is_empty_type?(ctx3, intersection_id)
    end
  end

  describe "primitive type constructors (any_of_kind)" do
    test "get_primitive_type_any_atom/1" do
      ctx = init_context()
      {ctx1, any_atom_id} = Ops.get_primitive_type_any_atom(ctx)
      {ctx2, atom_foo_id} = Ops.create_atom_literal_type(ctx1, :foo)

      refute Ops.is_empty_type?(ctx2, any_atom_id)
      # :foo should be a subtype of AnyAtom (i.e., :foo INTERSECTION (NEGATION AnyAtom) == Empty)
      # Or, :foo UNION AnyAtom == AnyAtom
      # Or, :foo INTERSECTION AnyAtom == :foo
      {ctx3, intersection_foo_any_atom_id} = Ops.intersection_types(ctx2, atom_foo_id, any_atom_id)
      assert intersection_foo_any_atom_id == atom_foo_id # Check it simplifies to :foo

      # Test original subtype logic: (:foo & (not AnyAtom)) == Empty
      {ctx4, not_any_atom_id} = Ops.negation_type(ctx3, any_atom_id) # Use ctx3
      {ctx5, intersection_subtype_check_id} = Ops.intersection_types(ctx4, atom_foo_id, not_any_atom_id)
      assert Ops.is_empty_type?(ctx5, intersection_subtype_check_id)

      # AnyAtom & AnyInteger should be Empty
      {ctx6, any_integer_id} = Ops.get_primitive_type_any_integer(ctx5) # Use ctx5
      {ctx7, atom_int_intersect_id} = Ops.intersection_types(ctx6, any_atom_id, any_integer_id)
      assert Ops.is_empty_type?(ctx7, atom_int_intersect_id)
    end
  end

  describe "union_types, intersection_types, negation_type" do
    test "basic set properties" do
      ctx0 = init_context()
      {ctx1, type_a_id} = Ops.create_atom_literal_type(ctx0, :a)
      {ctx2, type_b_id} = Ops.create_atom_literal_type(ctx1, :b)
      {ctx3, type_c_id} = Ops.create_atom_literal_type(ctx2, :c)
      {ctx4, nothing_id} = Ops.get_type_nothing(ctx3)

      # A | Nothing = A
      {ctx5, union_a_nothing_id} = Ops.union_types(ctx4, type_a_id, nothing_id)
      assert union_a_nothing_id == type_a_id

      # A & Nothing = Nothing
      {ctx6, intersect_a_nothing_id} = Ops.intersection_types(ctx5, type_a_id, nothing_id)
      assert intersect_a_nothing_id == nothing_id

      # not (not A) = A
      {ctx7, not_a_id} = Ops.negation_type(ctx6, type_a_id)
      {ctx8, not_not_a_id} = Ops.negation_type(ctx7, not_a_id)
      assert not_not_a_id == type_a_id

      # A | B
      {ctx9, union_ab_id} = Ops.union_types(ctx8, type_a_id, type_b_id)
      # (A | B) & A = A
      {ctx10, intersect_union_a_id} = Ops.intersection_types(ctx9, union_ab_id, type_a_id)
      assert intersect_union_a_id == type_a_id

      # (A | B) & C = Nothing (if A, B, C are distinct atom literals)
      {ctx11, intersect_union_c_id} = Ops.intersection_types(ctx10, union_ab_id, type_c_id)
      assert Ops.is_empty_type?(ctx11, intersect_union_c_id)

      # Commutativity and idempotence of union/intersection are implicitly tested by caching
      # and canonical key generation in apply_type_op.
    end

    test "type operations are cached" do
      ctx0 = init_context()
      {ctx1, type_a_id} = Ops.create_atom_literal_type(ctx0, :a)
      {ctx2, type_b_id} = Ops.create_atom_literal_type(ctx1, :b)

      # Perform an operation
      {ctx3, union1_id} = Ops.union_types(ctx2, type_a_id, type_b_id)
      initial_cache_size = map_size(ctx3.type_store.ops_cache)
      assert initial_cache_size > 0 # Ensure something was cached

      # Perform the same operation again
      {ctx4, union2_id} = Ops.union_types(ctx3, type_a_id, type_b_id)
      assert union1_id == union2_id
      assert map_size(ctx4.type_store.ops_cache) == initial_cache_size # Cache size should not change

      # Perform with swapped arguments (commutative)
      {ctx5, union3_id} = Ops.union_types(ctx4, type_b_id, type_a_id)
      assert union1_id == union3_id
      assert map_size(ctx5.type_store.ops_cache) == initial_cache_size # Cache size should not change
    end
  end
end
# defmodule Tilly.Type.OpsTest do
# use ExUnit.Case, async: true
#
# alias Tilly.BDD
# alias Tilly.Type.Store
# alias Tilly.Type.Ops
#
# defp init_context do
# %{}
# |> BDD.init_bdd_store()
# |> Store.init_type_store()
# end
#
# describe "get_type_nothing/1 and get_type_any/1" do
# test "get_type_nothing returns an interned Descr ID for the empty type" do
# ctx = init_context()
# {ctx_after_nothing, nothing_id} = Ops.get_type_nothing(ctx)
# assert Ops.is_empty_type?(ctx_after_nothing, nothing_id)
# end
#
# test "get_type_any returns an interned Descr ID for the universal type" do
# ctx = init_context()
# {ctx_after_any, any_id} = Ops.get_type_any(ctx)
# refute Ops.is_empty_type?(ctx_after_any, any_id)
#
# # Further check: any type negated should be nothing type
# {ctx1, neg_any_id} = Ops.negation_type(ctx_after_any, any_id)
# {ctx2, nothing_id} = Ops.get_type_nothing(ctx1)
# assert neg_any_id == nothing_id
# end
# end
#
# describe "literal type constructors" do
# test "create_atom_literal_type/2" do
# ctx = init_context()
# {ctx1, atom_foo_id} = Ops.create_atom_literal_type(ctx, :foo)
# {ctx2, atom_bar_id} = Ops.create_atom_literal_type(ctx1, :bar)
# {ctx3, atom_foo_again_id} = Ops.create_atom_literal_type(ctx2, :foo)
#
# refute Ops.is_empty_type?(ctx3, atom_foo_id)
# refute Ops.is_empty_type?(ctx3, atom_bar_id)
# assert atom_foo_id != atom_bar_id
# assert atom_foo_id == atom_foo_again_id
#
# # Test intersection: (:foo & :bar) should be Nothing
# {ctx4, intersection_id} = Ops.intersection_types(ctx3, atom_foo_id, atom_bar_id)
# assert Ops.is_empty_type?(ctx4, intersection_id)
#
# # Test union: (:foo | :bar) should not be empty
# {ctx5, union_id} = Ops.union_types(ctx4, atom_foo_id, atom_bar_id)
# refute Ops.is_empty_type?(ctx5, union_id)
#
# # Test negation: (not :foo) should not be empty and not be :foo
# {ctx6, not_foo_id} = Ops.negation_type(ctx5, atom_foo_id)
# refute Ops.is_empty_type?(ctx6, not_foo_id)
# {ctx7, intersection_not_foo_and_foo} = Ops.intersection_types(ctx6, atom_foo_id, not_foo_id)
# assert Ops.is_empty_type?(ctx7, intersection_not_foo_and_foo)
# end
#
# test "create_integer_literal_type/2" do
# ctx = init_context()
# {ctx1, int_1_id} = Ops.create_integer_literal_type(ctx, 1)
# {ctx2, int_2_id} = Ops.create_integer_literal_type(ctx1, 2)
#
# refute Ops.is_empty_type?(ctx2, int_1_id) # Use ctx2
# {ctx3, intersection_id} = Ops.intersection_types(ctx2, int_1_id, int_2_id)
# assert Ops.is_empty_type?(ctx3, intersection_id)
# end
#
# test "create_string_literal_type/2" do
# ctx = init_context()
# {ctx1, str_a_id} = Ops.create_string_literal_type(ctx, "a")
# {ctx2, str_b_id} = Ops.create_string_literal_type(ctx1, "b")
#
# refute Ops.is_empty_type?(ctx2, str_a_id) # Use ctx2
# {ctx3, intersection_id} = Ops.intersection_types(ctx2, str_a_id, str_b_id)
# assert Ops.is_empty_type?(ctx3, intersection_id)
# end
# end
#
# describe "primitive type constructors (any_of_kind)" do
# test "get_primitive_type_any_atom/1" do
# ctx = init_context()
# {ctx1, any_atom_id} = Ops.get_primitive_type_any_atom(ctx)
# {ctx2, atom_foo_id} = Ops.create_atom_literal_type(ctx1, :foo)
#
# refute Ops.is_empty_type?(ctx2, any_atom_id)
# # :foo should be a subtype of AnyAtom (i.e., :foo INTERSECTION (NEGATION AnyAtom) == Empty)
# # Or, :foo UNION AnyAtom == AnyAtom
# # Or, :foo INTERSECTION AnyAtom == :foo
# {ctx3, intersection_foo_any_atom_id} = Ops.intersection_types(ctx2, atom_foo_id, any_atom_id)
# assert intersection_foo_any_atom_id == atom_foo_id # Check it simplifies to :foo
#
# # Test original subtype logic: (:foo & (not AnyAtom)) == Empty
# {ctx4, not_any_atom_id} = Ops.negation_type(ctx3, any_atom_id) # Use ctx3
# {ctx5, intersection_subtype_check_id} = Ops.intersection_types(ctx4, atom_foo_id, not_any_atom_id)
# assert Ops.is_empty_type?(ctx5, intersection_subtype_check_id)
#
# # AnyAtom & AnyInteger should be Empty
# {ctx6, any_integer_id} = Ops.get_primitive_type_any_integer(ctx5) # Use ctx5
# {ctx7, atom_int_intersect_id} = Ops.intersection_types(ctx6, any_atom_id, any_integer_id)
# assert Ops.is_empty_type?(ctx7, atom_int_intersect_id)
# end
# end
#
# describe "union_types, intersection_types, negation_type" do
# test "basic set properties" do
# ctx0 = init_context()
# {ctx1, type_a_id} = Ops.create_atom_literal_type(ctx0, :a)
# {ctx2, type_b_id} = Ops.create_atom_literal_type(ctx1, :b)
# {ctx3, type_c_id} = Ops.create_atom_literal_type(ctx2, :c)
# {ctx4, nothing_id} = Ops.get_type_nothing(ctx3)
#
# # A | Nothing = A
# {ctx5, union_a_nothing_id} = Ops.union_types(ctx4, type_a_id, nothing_id)
# assert union_a_nothing_id == type_a_id
#
# # A & Nothing = Nothing
# {ctx6, intersect_a_nothing_id} = Ops.intersection_types(ctx5, type_a_id, nothing_id)
# assert intersect_a_nothing_id == nothing_id
#
# # not (not A) = A
# {ctx7, not_a_id} = Ops.negation_type(ctx6, type_a_id)
# {ctx8, not_not_a_id} = Ops.negation_type(ctx7, not_a_id)
# assert not_not_a_id == type_a_id
#
# # A | B
# {ctx9, union_ab_id} = Ops.union_types(ctx8, type_a_id, type_b_id)
# # (A | B) & A = A
# {ctx10, intersect_union_a_id} = Ops.intersection_types(ctx9, union_ab_id, type_a_id)
# assert intersect_union_a_id == type_a_id
#
# # (A | B) & C = Nothing (if A, B, C are distinct atom literals)
# {ctx11, intersect_union_c_id} = Ops.intersection_types(ctx10, union_ab_id, type_c_id)
# assert Ops.is_empty_type?(ctx11, intersect_union_c_id)
#
# # Commutativity and idempotence of union/intersection are implicitly tested by caching
# # and canonical key generation in apply_type_op.
# end
#
# test "type operations are cached" do
# ctx0 = init_context()
# {ctx1, type_a_id} = Ops.create_atom_literal_type(ctx0, :a)
# {ctx2, type_b_id} = Ops.create_atom_literal_type(ctx1, :b)
#
# # Perform an operation
# {ctx3, union1_id} = Ops.union_types(ctx2, type_a_id, type_b_id)
# initial_cache_size = map_size(ctx3.type_store.ops_cache)
# assert initial_cache_size > 0 # Ensure something was cached
#
# # Perform the same operation again
# {ctx4, union2_id} = Ops.union_types(ctx3, type_a_id, type_b_id)
# assert union1_id == union2_id
# assert map_size(ctx4.type_store.ops_cache) == initial_cache_size # Cache size should not change
#
# # Perform with swapped arguments (commutative)
# {ctx5, union3_id} = Ops.union_types(ctx4, type_b_id, type_a_id)
# assert union1_id == union3_id
# assert map_size(ctx5.type_store.ops_cache) == initial_cache_size # Cache size should not change
# end
# end
# end


@@ -1,67 +1,67 @@
defmodule Tilly.Type.StoreTest do
use ExUnit.Case, async: true
alias Tilly.BDD
alias Tilly.Type
alias Tilly.Type.Store
defp init_context do
%{}
|> BDD.init_bdd_store()
|> Store.init_type_store()
end
describe "init_type_store/1" do
test "initializes an empty type store in the typing_ctx" do
typing_ctx = %{}
new_ctx = Store.init_type_store(typing_ctx)
type_store = Map.get(new_ctx, :type_store)
assert type_store.descrs_by_structure == %{}
assert type_store.structures_by_id == %{}
assert type_store.next_descr_id == 0
end
end
describe "get_or_intern_descr/2 and get_descr_by_id/2" do
test "interns a new Descr map and retrieves it" do
typing_ctx = init_context()
descr_map1 = Type.empty_descr(typing_ctx) # Uses canonical BDD.false_node_id()
# Intern first time
{ctx1, id1} = Store.get_or_intern_descr(typing_ctx, descr_map1)
assert id1 == 0
assert Store.get_descr_by_id(ctx1, id1) == descr_map1
assert ctx1.type_store.next_descr_id == 1
# Retrieve existing
{ctx2, id1_retrieved} = Store.get_or_intern_descr(ctx1, descr_map1)
assert id1_retrieved == id1
assert ctx2 == ctx1 # Context should not change if already interned
# Intern a different Descr map
descr_map2 = Type.any_descr(typing_ctx) # Uses canonical BDD.true_node_id()
{ctx3, id2} = Store.get_or_intern_descr(ctx2, descr_map2)
assert id2 == 1
assert Store.get_descr_by_id(ctx3, id2) == descr_map2
assert ctx3.type_store.next_descr_id == 2
# Ensure original is still retrievable
assert Store.get_descr_by_id(ctx3, id1) == descr_map1
end
test "get_descr_by_id returns nil for non-existent ID" do
typing_ctx = init_context()
assert Store.get_descr_by_id(typing_ctx, 999) == nil
end
test "raises an error if type store is not initialized" do
uninitialized_ctx = %{}
descr_map = Type.empty_descr(uninitialized_ctx)
assert_raise ArgumentError,
"Type store not initialized in typing_ctx. Call init_type_store first.",
fn -> Store.get_or_intern_descr(uninitialized_ctx, descr_map) end
end
end
end
# defmodule Tilly.Type.StoreTest do
# use ExUnit.Case, async: true
#
# alias Tilly.BDD
# alias Tilly.Type
# alias Tilly.Type.Store
#
# defp init_context do
# %{}
# |> BDD.init_bdd_store()
# |> Store.init_type_store()
# end
#
# describe "init_type_store/1" do
# test "initializes an empty type store in the typing_ctx" do
# typing_ctx = %{}
# new_ctx = Store.init_type_store(typing_ctx)
# type_store = Map.get(new_ctx, :type_store)
#
# assert type_store.descrs_by_structure == %{}
# assert type_store.structures_by_id == %{}
# assert type_store.next_descr_id == 0
# end
# end
#
# describe "get_or_intern_descr/2 and get_descr_by_id/2" do
# test "interns a new Descr map and retrieves it" do
# typing_ctx = init_context()
# descr_map1 = Type.empty_descr(typing_ctx) # Uses canonical BDD.false_node_id()
#
# # Intern first time
# {ctx1, id1} = Store.get_or_intern_descr(typing_ctx, descr_map1)
# assert id1 == 0
# assert Store.get_descr_by_id(ctx1, id1) == descr_map1
# assert ctx1.type_store.next_descr_id == 1
#
# # Retrieve existing
# {ctx2, id1_retrieved} = Store.get_or_intern_descr(ctx1, descr_map1)
# assert id1_retrieved == id1
# assert ctx2 == ctx1 # Context should not change if already interned
#
# # Intern a different Descr map
# descr_map2 = Type.any_descr(typing_ctx) # Uses canonical BDD.true_node_id()
# {ctx3, id2} = Store.get_or_intern_descr(ctx2, descr_map2)
# assert id2 == 1
# assert Store.get_descr_by_id(ctx3, id2) == descr_map2
# assert ctx3.type_store.next_descr_id == 2
#
# # Ensure original is still retrievable
# assert Store.get_descr_by_id(ctx3, id1) == descr_map1
# end
#
# test "get_descr_by_id returns nil for non-existent ID" do
# typing_ctx = init_context()
# assert Store.get_descr_by_id(typing_ctx, 999) == nil
# end
#
# test "raises an error if type store is not initialized" do
# uninitialized_ctx = %{}
# descr_map = Type.empty_descr(uninitialized_ctx)
#
# assert_raise ArgumentError,
# "Type store not initialized in typing_ctx. Call init_type_store first.",
# fn -> Store.get_or_intern_descr(uninitialized_ctx, descr_map) end
# end
# end
# end


@@ -1,39 +1,39 @@
defmodule Tilly.TypeTest do
use ExUnit.Case, async: true
alias Tilly.BDD
alias Tilly.Type
describe "empty_descr/1" do
test "returns a Descr map with all BDD IDs pointing to false" do
typing_ctx = BDD.init_bdd_store(%{})
descr = Type.empty_descr(typing_ctx)
false_id = BDD.false_node_id()
assert descr.atoms_bdd_id == false_id
assert descr.integers_bdd_id == false_id
assert descr.strings_bdd_id == false_id
assert descr.pairs_bdd_id == false_id
assert descr.records_bdd_id == false_id
assert descr.functions_bdd_id == false_id
assert descr.absent_marker_bdd_id == false_id
end
end
describe "any_descr/1" do
test "returns a Descr map with BDD IDs pointing to true (and absent_marker to false)" do
typing_ctx = BDD.init_bdd_store(%{})
descr = Type.any_descr(typing_ctx)
true_id = BDD.true_node_id()
false_id = BDD.false_node_id()
assert descr.atoms_bdd_id == true_id
assert descr.integers_bdd_id == true_id
assert descr.strings_bdd_id == true_id
assert descr.pairs_bdd_id == true_id
assert descr.records_bdd_id == true_id
assert descr.functions_bdd_id == true_id
assert descr.absent_marker_bdd_id == false_id
end
end
end
# defmodule Tilly.TypeTest do
# use ExUnit.Case, async: true
#
# alias Tilly.BDD
# alias Tilly.Type
#
# describe "empty_descr/1" do
# test "returns a Descr map with all BDD IDs pointing to false" do
# typing_ctx = BDD.init_bdd_store(%{})
# descr = Type.empty_descr(typing_ctx)
# false_id = BDD.false_node_id()
#
# assert descr.atoms_bdd_id == false_id
# assert descr.integers_bdd_id == false_id
# assert descr.strings_bdd_id == false_id
# assert descr.pairs_bdd_id == false_id
# assert descr.records_bdd_id == false_id
# assert descr.functions_bdd_id == false_id
# assert descr.absent_marker_bdd_id == false_id
# end
# end
#
# describe "any_descr/1" do
# test "returns a Descr map with BDD IDs pointing to true (and absent_marker to false)" do
# typing_ctx = BDD.init_bdd_store(%{})
# descr = Type.any_descr(typing_ctx)
# true_id = BDD.true_node_id()
# false_id = BDD.false_node_id()
#
# assert descr.atoms_bdd_id == true_id
# assert descr.integers_bdd_id == true_id
# assert descr.strings_bdd_id == true_id
# assert descr.pairs_bdd_id == true_id
# assert descr.records_bdd_id == true_id
# assert descr.functions_bdd_id == true_id
# assert descr.absent_marker_bdd_id == false_id
# end
# end
# end

test1.exs (1822 lines)

File diff suppressed because it is too large.

tests.exs (new file, 796 lines)

@@ -0,0 +1,796 @@
# --- Example Usage ---
Tdd.init_tdd_system()
test = fn name, expected, result ->
current_failures = Process.get(:test_failures, [])
if expected != result do
Process.put(:test_failures, [name | current_failures])
end
status = if expected == result, do: "PASSED", else: "FAILED"
IO.puts("#{name} (Expected: #{expected}) -> Result: #{result} - #{status}")
end
# Basic Types
tdd_foo = Tdd.type_atom_literal(:foo)
tdd_bar = Tdd.type_atom_literal(:bar)
tdd_baz = Tdd.type_atom_literal(:baz)
tdd_atom = Tdd.type_atom()
tdd_empty_tuple = Tdd.type_empty_tuple()
tdd_tuple = Tdd.type_tuple()
# Tuple of size 2, e.g. {any, any}
tdd_tuple_s2 = Tdd.type_tuple_sized_any(2)
tdd_any = Tdd.type_any()
tdd_none = Tdd.type_none()
test_all = fn ->
IO.puts("\n--- TDD for :foo ---")
Tdd.print_tdd(tdd_foo)
IO.puts("\n--- TDD for not :foo ---")
Tdd.print_tdd(Tdd.negate(tdd_foo))
IO.puts("\n--- TDD for atom ---")
Tdd.print_tdd(tdd_atom)
IO.puts("\n--- TDD for not atom ---")
# Expected: make_node(@v_is_atom, @false_node_id, @true_node_id, @true_node_id)
# This represents "anything that is not an atom". The don't-care (DC) branch
# becomes true: any value for which the "is_atom" test is irrelevant cannot
# be an atom, so it belongs to "not atom".
Tdd.print_tdd(Tdd.negate(tdd_atom))
IO.puts("\n--- TDD for :foo and :bar (should be none) ---")
tdd_foo_and_bar = Tdd.intersect(tdd_foo, tdd_bar)
# Expected ID 0: :false_terminal
Tdd.print_tdd(tdd_foo_and_bar)
IO.puts("\n--- TDD for :foo and atom (should be :foo) ---")
tdd_foo_and_atom = Tdd.intersect(tdd_foo, tdd_atom)
# Expected to be structurally identical to tdd_foo
Tdd.print_tdd(tdd_foo_and_atom)
IO.puts("\n--- Basic Subtyping Tests ---")
test.(":foo <: atom", true, Tdd.is_subtype(tdd_foo, tdd_atom))
test.("atom <: :foo", false, Tdd.is_subtype(tdd_atom, tdd_foo))
test.(":foo <: :bar", false, Tdd.is_subtype(tdd_foo, tdd_bar))
test.(":foo <: :foo", true, Tdd.is_subtype(tdd_foo, tdd_foo))
test.("{} <: tuple", true, Tdd.is_subtype(tdd_empty_tuple, tdd_tuple))
test.("tuple <: {}", false, Tdd.is_subtype(tdd_tuple, tdd_empty_tuple))
test.(":foo <: {}", false, Tdd.is_subtype(tdd_foo, tdd_empty_tuple))
test.("tuple_size_2 <: tuple", true, Tdd.is_subtype(tdd_tuple_s2, tdd_tuple))
test.("tuple <: tuple_size_2", false, Tdd.is_subtype(tdd_tuple, tdd_tuple_s2))
test.("tuple_size_2 <: {}", false, Tdd.is_subtype(tdd_tuple_s2, tdd_empty_tuple))
IO.puts("\n--- Any/None Subtyping Tests ---")
test.("any <: :foo", false, Tdd.is_subtype(tdd_any, tdd_foo))
test.(":foo <: any", true, Tdd.is_subtype(tdd_foo, tdd_any))
test.("none <: :foo", true, Tdd.is_subtype(tdd_none, tdd_foo))
test.(":foo <: none", false, Tdd.is_subtype(tdd_foo, tdd_none))
test.("none <: any", true, Tdd.is_subtype(tdd_none, tdd_any))
test.("any <: none", false, Tdd.is_subtype(tdd_any, tdd_none))
test.("any <: any", true, Tdd.is_subtype(tdd_any, tdd_any))
test.("none <: none", true, Tdd.is_subtype(tdd_none, tdd_none))
IO.puts("\n--- Union related Subtyping ---")
tdd_foo_or_bar = Tdd.sum(tdd_foo, tdd_bar)
tdd_foo_or_bar_or_baz = Tdd.sum(tdd_foo_or_bar, tdd_baz)
test.(":foo <: (:foo | :bar)", true, Tdd.is_subtype(tdd_foo, tdd_foo_or_bar))
test.(":baz <: (:foo | :bar)", false, Tdd.is_subtype(tdd_baz, tdd_foo_or_bar))
test.("(:foo | :bar) <: atom", true, Tdd.is_subtype(tdd_foo_or_bar, tdd_atom))
test.("atom <: (:foo | :bar)", false, Tdd.is_subtype(tdd_atom, tdd_foo_or_bar))
test.(
"(:foo | :bar) <: (:foo | :bar | :baz)",
true,
Tdd.is_subtype(tdd_foo_or_bar, tdd_foo_or_bar_or_baz)
)
test.(
"(:foo | :bar | :baz) <: (:foo | :bar)",
false,
Tdd.is_subtype(tdd_foo_or_bar_or_baz, tdd_foo_or_bar)
)
# Test against a non-member of the union
test.("(:foo | :bar) <: :baz", false, Tdd.is_subtype(tdd_foo_or_bar, tdd_baz))
IO.puts("\n--- Intersection related Subtyping ---")
# Should be equivalent to tdd_foo
tdd_atom_and_foo = Tdd.intersect(tdd_atom, tdd_foo)
# Should be tdd_none
tdd_atom_and_tuple = Tdd.intersect(tdd_atom, tdd_tuple)
test.("(atom & :foo) <: :foo", true, Tdd.is_subtype(tdd_atom_and_foo, tdd_foo))
test.(":foo <: (atom & :foo)", true, Tdd.is_subtype(tdd_foo, tdd_atom_and_foo))
test.("(atom & tuple) <: none", true, Tdd.is_subtype(tdd_atom_and_tuple, tdd_none))
test.("none <: (atom & tuple)", true, Tdd.is_subtype(tdd_none, tdd_atom_and_tuple))
test.("(atom & :foo) <: :bar", false, Tdd.is_subtype(tdd_atom_and_foo, tdd_bar))
# An intersection is a subtype of its components
test.("(atom & :foo) <: atom", true, Tdd.is_subtype(tdd_atom_and_foo, tdd_atom))
# (none <: atom)
test.("(atom & tuple) <: atom", true, Tdd.is_subtype(tdd_atom_and_tuple, tdd_atom))
# (none <: tuple)
test.("(atom & tuple) <: tuple", true, Tdd.is_subtype(tdd_atom_and_tuple, tdd_tuple))
IO.puts("\n--- Negation related Subtyping (Contrapositives) ---")
# Reminder: ¬A <: ¬B is equivalent to B <: A (contrapositive)
# Test 1: ¬atom <: ¬:foo (Equivalent to :foo <: atom, which is true)
test.("¬atom <: ¬:foo", true, Tdd.is_subtype(Tdd.negate(tdd_atom), Tdd.negate(tdd_foo)))
# Test 2: ¬:foo <: ¬atom (Equivalent to atom <: :foo, which is false)
test.("¬:foo <: ¬atom", false, Tdd.is_subtype(Tdd.negate(tdd_foo), Tdd.negate(tdd_atom)))
# Double negation
test.("¬(¬:foo) <: :foo", true, Tdd.is_subtype(Tdd.negate(Tdd.negate(tdd_foo)), tdd_foo))
test.(":foo <: ¬(¬:foo)", true, Tdd.is_subtype(tdd_foo, Tdd.negate(Tdd.negate(tdd_foo))))
# Disjoint types
test.("atom <: ¬tuple", true, Tdd.is_subtype(tdd_atom, Tdd.negate(tdd_tuple)))
test.("tuple <: ¬atom", true, Tdd.is_subtype(tdd_tuple, Tdd.negate(tdd_atom)))
test.(":foo <: ¬{}", true, Tdd.is_subtype(tdd_foo, Tdd.negate(tdd_empty_tuple)))
IO.puts("\n--- Mixed Types & Complex Subtyping ---")
tdd_atom_or_tuple = Tdd.sum(tdd_atom, tdd_tuple)
tdd_foo_or_empty_tuple = Tdd.sum(tdd_foo, tdd_empty_tuple)
test.(
"(:foo | {}) <: (atom | tuple)",
true,
Tdd.is_subtype(tdd_foo_or_empty_tuple, tdd_atom_or_tuple)
)
test.(
"(atom | tuple) <: (:foo | {})",
false,
Tdd.is_subtype(tdd_atom_or_tuple, tdd_foo_or_empty_tuple)
)
test.(":foo <: (atom | tuple)", true, Tdd.is_subtype(tdd_foo, tdd_atom_or_tuple))
test.("{} <: (atom | tuple)", true, Tdd.is_subtype(tdd_empty_tuple, tdd_atom_or_tuple))
# De Morgan's Law illustration (A | B = ¬(¬A & ¬B))
# (:foo | :bar) <: ¬(¬:foo & ¬:bar)
tdd_not_foo_and_not_bar = Tdd.intersect(Tdd.negate(tdd_foo), Tdd.negate(tdd_bar))
test.(
"(:foo | :bar) <: ¬(¬:foo & ¬:bar)",
true,
Tdd.is_subtype(tdd_foo_or_bar, Tdd.negate(tdd_not_foo_and_not_bar))
)
test.(
"¬(¬:foo & ¬:bar) <: (:foo | :bar)",
true,
Tdd.is_subtype(Tdd.negate(tdd_not_foo_and_not_bar), tdd_foo_or_bar)
)
# Type difference: atom - :foo (represented as atom & ¬:foo)
tdd_atom_minus_foo = Tdd.intersect(tdd_atom, Tdd.negate(tdd_foo))
test.("(atom - :foo) <: atom", true, Tdd.is_subtype(tdd_atom_minus_foo, tdd_atom))
test.("(atom - :foo) <: :foo", false, Tdd.is_subtype(tdd_atom_minus_foo, tdd_foo))
# (atom - :foo) contains more than just :bar, so it is not a subtype of :bar
test.("(atom - :foo) <: :bar", false, Tdd.is_subtype(tdd_atom_minus_foo, tdd_bar))
# ...but :bar is a member of (atom - :foo)
test.(":bar <: (atom - :foo)", true, Tdd.is_subtype(tdd_bar, tdd_atom_minus_foo))
# (atom - :foo) | :foo should be atom
tdd_recombined_atom = Tdd.sum(tdd_atom_minus_foo, tdd_foo)
test.("((atom - :foo) | :foo) <: atom", true, Tdd.is_subtype(tdd_recombined_atom, tdd_atom))
test.("atom <: ((atom - :foo) | :foo)", true, Tdd.is_subtype(tdd_atom, tdd_recombined_atom))
# (atom | {}) & (tuple | :foo) must be (:foo | {})
# Represents `atom() | {}`
tdd_atom_or_empty = Tdd.sum(tdd_atom, tdd_empty_tuple)
# Represents `tuple() | :foo`
tdd_tuple_or_foo = Tdd.sum(tdd_tuple, tdd_foo)
intersected_complex = Tdd.intersect(tdd_atom_or_empty, tdd_tuple_or_foo)
# Expected result for intersected_complex is tdd_foo_or_empty_tuple
test.(
"(atom | {}) & (tuple | :foo) <: (:foo | {})",
true,
Tdd.is_subtype(intersected_complex, tdd_foo_or_empty_tuple)
)
test.(
"(:foo | {}) <: (atom | {}) & (tuple | :foo)",
true,
Tdd.is_subtype(tdd_foo_or_empty_tuple, intersected_complex)
)
# {} | tuple_size_2 should be a subtype of tuple
tdd_empty_or_s2 = Tdd.sum(tdd_empty_tuple, tdd_tuple_s2)
test.("({} | tuple_size_2) <: tuple", true, Tdd.is_subtype(tdd_empty_or_s2, tdd_tuple))
test.(
"({} | tuple_size_2) <: ({} | tuple_size_2)",
true,
Tdd.is_subtype(tdd_empty_or_s2, tdd_empty_or_s2)
)
test.(
"({} | tuple_size_2) <: tuple_size_2",
false,
Tdd.is_subtype(tdd_empty_or_s2, tdd_tuple_s2)
)
# IO.puts("\n--- TDD structure for (atom - :foo) ---")
# Tdd.print_tdd(tdd_atom_minus_foo)
# IO.puts("\n--- TDD structure for ((atom - :foo) | :foo) which should be 'atom' ---")
# Tdd.print_tdd(tdd_recombined_atom)
# IO.puts("\n--- TDD structure for 'atom' for comparison ---")
# Tdd.print_tdd(tdd_atom)
IO.inspect(Process.get(:test_failures, []))
end
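The subtyping checks above all reduce to an emptiness test: `A <: B` holds exactly when `A & ¬B` is the empty type, which is the reduction a TDD-based `is_subtype` performs. A minimal, self-contained sketch of that reduction over a finite universe, using `MapSet` as a stand-in for TDDs (`universe`, `subtype?`, `atom_set`, and `foo_set` are illustrative names, not part of `Tdd`):

```elixir
# MapSet stand-ins for types over a tiny finite universe (illustrative only)
universe = MapSet.new([:foo, :bar, :baz, {}, 1])

# A <: B iff A & not(B) is empty -- the same check is_subtype makes on TDDs
subtype? = fn a, b ->
  not_b = MapSet.difference(universe, b)
  MapSet.size(MapSet.intersection(a, not_b)) == 0
end

atom_set = MapSet.new([:foo, :bar, :baz])
foo_set = MapSet.new([:foo])

IO.inspect(subtype?.(foo_set, atom_set), label: ":foo <: atom")
IO.inspect(subtype?.(atom_set, foo_set), label: "atom <: :foo")
```

Over a finite universe, negation is just the set complement; on TDDs the same role is played by `negate/1`.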
defmodule IntegerTests do
def run(test_fn) do
Process.put(:test_failures, [])
# Reset for each test group if needed, or once globally
Tdd.init_tdd_system()
# Integer types
tdd_int = Tdd.type_integer()
tdd_int_5 = Tdd.type_int_eq(5)
tdd_int_7 = Tdd.type_int_eq(7)
# x < 10
tdd_int_lt_10 = Tdd.type_int_lt(10)
# x > 3
tdd_int_gt_3 = Tdd.type_int_gt(3)
# x < 3
tdd_int_lt_3 = Tdd.type_int_lt(3)
# x > 10
tdd_int_gt_10 = Tdd.type_int_gt(10)
tdd_atom_foo = Tdd.type_atom_literal(:foo)
#
# IO.puts("\n--- Integer Type Structures ---")
# IO.puts("Integer:")
# Tdd.print_tdd(tdd_int)
# IO.puts("Int == 5:")
# Tdd.print_tdd(tdd_int_5)
# IO.puts("Int < 10:")
# Tdd.print_tdd(tdd_int_lt_10)
IO.puts("\n--- Integer Subtyping Tests ---")
test_fn.("int_5 <: integer", true, Tdd.is_subtype(tdd_int_5, tdd_int))
test_fn.("integer <: int_5", false, Tdd.is_subtype(tdd_int, tdd_int_5))
test_fn.("int_5 <: int_7", false, Tdd.is_subtype(tdd_int_5, tdd_int_7))
test_fn.("int_5 <: int_5", true, Tdd.is_subtype(tdd_int_5, tdd_int_5))
test_fn.("int_5 <: atom_foo", false, Tdd.is_subtype(tdd_int_5, tdd_atom_foo))
test_fn.("int_lt_10 <: integer", true, Tdd.is_subtype(tdd_int_lt_10, tdd_int))
test_fn.("integer <: int_lt_10", false, Tdd.is_subtype(tdd_int, tdd_int_lt_10))
# 5 < 10
test_fn.("int_5 <: int_lt_10", true, Tdd.is_subtype(tdd_int_5, tdd_int_lt_10))
test_fn.("int_lt_10 <: int_5", false, Tdd.is_subtype(tdd_int_lt_10, tdd_int_5))
test_fn.("int_gt_3 <: integer", true, Tdd.is_subtype(tdd_int_gt_3, tdd_int))
# 5 > 3
test_fn.("int_5 <: int_gt_3", true, Tdd.is_subtype(tdd_int_5, tdd_int_gt_3))
test_fn.("int_gt_3 <: int_5", false, Tdd.is_subtype(tdd_int_gt_3, tdd_int_5))
# x < 3 implies x < 10
test_fn.("int_lt_3 <: int_lt_10", true, Tdd.is_subtype(tdd_int_lt_3, tdd_int_lt_10))
# x > 10 implies x > 3
test_fn.("int_gt_10 <: int_gt_3", true, Tdd.is_subtype(tdd_int_gt_10, tdd_int_gt_3))
test_fn.("int_lt_10 <: int_lt_3", false, Tdd.is_subtype(tdd_int_lt_10, tdd_int_lt_3))
test_fn.("int_gt_3 <: int_gt_10", false, Tdd.is_subtype(tdd_int_gt_3, tdd_int_gt_10))
IO.puts("\n--- Integer Intersection Tests (should resolve to none for contradictions) ---")
intersect_5_7 = Tdd.intersect(tdd_int_5, tdd_int_7)
test_fn.("int_5 & int_7 == none", true, intersect_5_7 == Tdd.type_none())
# IO.puts("Structure of int_5 & int_7 (should be ID 0):")
# Tdd.print_tdd(intersect_5_7)
# x < 3 AND x > 10
intersect_lt3_gt10 = Tdd.intersect(tdd_int_lt_3, tdd_int_gt_10)
test_fn.("int_lt_3 & int_gt_10 == none", true, intersect_lt3_gt10 == Tdd.type_none())
# IO.puts("Structure of int_lt_3 & int_gt_10 (should be ID 0):")
# Tdd.print_tdd(intersect_lt3_gt10)
# x < 10 AND x > 3 (i.e. 4..9)
intersect_lt10_gt3 = Tdd.intersect(tdd_int_lt_10, tdd_int_gt_3)
test_fn.("int_lt_10 & int_gt_3 != none", true, intersect_lt10_gt3 != Tdd.type_none())
# IO.puts("Structure of int_lt_10 & int_gt_3 (should be non-empty):")
# Tdd.print_tdd(intersect_lt10_gt3)
# Test a value within this intersection
test_fn.(
"int_5 <: (int_lt_10 & int_gt_3)",
true,
Tdd.is_subtype(tdd_int_5, intersect_lt10_gt3)
)
# x == 5 AND x < 3
intersect_5_lt3 = Tdd.intersect(tdd_int_5, tdd_int_lt_3)
test_fn.("int_5 & int_lt_3 == none", true, intersect_5_lt3 == Tdd.type_none())
IO.puts("\n--- Integer Union Tests ---")
union_5_7 = Tdd.sum(tdd_int_5, tdd_int_7)
test_fn.("int_5 <: (int_5 | int_7)", true, Tdd.is_subtype(tdd_int_5, union_5_7))
test_fn.("int_7 <: (int_5 | int_7)", true, Tdd.is_subtype(tdd_int_7, union_5_7))
test_fn.("int_lt_3 <: (int_5 | int_7)", false, Tdd.is_subtype(tdd_int_lt_3, union_5_7))
# IO.puts("Structure of int_5 | int_7:")
# Tdd.print_tdd(union_5_7)
# (int < 3) | (int > 10)
union_disjoint_ranges = Tdd.sum(tdd_int_lt_3, tdd_int_gt_10)
test_fn.(
"int_eq(0) <: (int < 3 | int > 10)",
true,
Tdd.is_subtype(Tdd.type_int_eq(0), union_disjoint_ranges)
)
test_fn.(
"int_eq(5) <: (int < 3 | int > 10)",
false,
Tdd.is_subtype(Tdd.type_int_eq(5), union_disjoint_ranges)
)
test_fn.(
"int_eq(12) <: (int < 3 | int > 10)",
true,
Tdd.is_subtype(Tdd.type_int_eq(12), union_disjoint_ranges)
)
IO.inspect(Process.get(:test_failures, []))
end
end
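The integer subtyping results above (e.g. `x < 3` implies `x < 10`) follow from interval containment. A minimal sketch, modelling each integer predicate as a `{lo, hi}` pair with `nil` for an unbounded side; the `contains?` helper and this encoding are illustrative assumptions, not the representation `Tdd` uses internally:

```elixir
# Integer predicates as inclusive intervals {lo, hi}; nil means unbounded.
contains? = fn {a_lo, a_hi}, {b_lo, b_hi} ->
  # {a_lo, a_hi} is contained in {b_lo, b_hi} when each of its bounds
  # is at least as tight as the corresponding bound of the other interval
  lo_ok = b_lo == nil or (a_lo != nil and a_lo >= b_lo)
  hi_ok = b_hi == nil or (a_hi != nil and a_hi <= b_hi)
  lo_ok and hi_ok
end

int_lt_3 = {nil, 2}
int_lt_10 = {nil, 9}
int_gt_3 = {4, nil}
int_gt_10 = {11, nil}

IO.inspect(contains?.(int_lt_3, int_lt_10), label: "int_lt_3 <: int_lt_10")
IO.inspect(contains?.(int_gt_10, int_gt_3), label: "int_gt_10 <: int_gt_3")
IO.inspect(contains?.(int_lt_10, int_lt_3), label: "int_lt_10 <: int_lt_3")
```

Contradictory constraints such as `int_lt_3 & int_gt_10` correspond to intervals with an empty overlap, which is why the intersection tests above resolve to `none`.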
defmodule TupleTests do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
# Re-init the system for a clean slate for these tests
Tdd.init_tdd_system()
IO.puts("\n--- Running TupleTests ---")
# --- Basic Types for convenience ---
t_atom = type_atom()
t_int = type_integer()
t_foo = type_atom_literal(:foo)
t_bar = type_atom_literal(:bar)
t_int_5 = type_int_eq(5)
t_int_6 = type_int_eq(6)
t_int_pos = type_int_gt(0)
t_any = type_any()
t_none = type_none()
# any tuple
t_tuple = type_tuple()
t_empty_tuple = type_empty_tuple()
# --- Specific Tuple Types ---
# {atom(), integer()}
tup_atom_int = type_tuple([t_atom, t_int])
# {:foo, 5}
tup_foo_5 = type_tuple([t_foo, t_int_5])
# {pos_integer(), atom()}
tup_pos_atom = type_tuple([t_int_pos, t_atom])
# {atom(), any}
tup_atom_any = type_tuple([t_atom, t_any])
# {any, integer()}
tup_any_int = type_tuple([t_any, t_int])
# a tuple of size 2, {any, any}
tup_s2_any = type_tuple_sized_any(2)
# a tuple of size 3, {any, any, any}
tup_s3_any = type_tuple_sized_any(3)
# {integer(), atom()}
tup_int_atom = type_tuple([t_int, t_atom])
# {{:foo}}
tup_nested_foo = type_tuple([type_tuple([t_foo])])
# {{atom()}}
tup_nested_atom = type_tuple([type_tuple([t_atom])])
# {any, none} -> this should resolve to none
tup_with_none = type_tuple([t_any, t_none])
IO.puts("\n--- Section: Basic Subtyping ---")
test_fn.("{:foo, 5} <: {atom, int}", true, is_subtype(tup_foo_5, tup_atom_int))
test_fn.("{atom, int} <: {:foo, 5}", false, is_subtype(tup_atom_int, tup_foo_5))
test_fn.("{:foo, 5} <: {pos_int, atom}", false, is_subtype(tup_foo_5, tup_pos_atom))
test_fn.("{pos_int, atom} <: {atom, int}", false, is_subtype(tup_pos_atom, tup_atom_int))
test_fn.("{atom, int} <: tuple()", true, is_subtype(tup_atom_int, t_tuple))
test_fn.("tuple() <: {atom, int}", false, is_subtype(t_tuple, tup_atom_int))
IO.puts("\n--- Section: Size-related Subtyping ---")
test_fn.("{atom, int} <: tuple_size_2_any", true, is_subtype(tup_atom_int, tup_s2_any))
test_fn.("tuple_size_2_any <: {atom, int}", false, is_subtype(tup_s2_any, tup_atom_int))
test_fn.("{atom, int} <: tuple_size_3_any", false, is_subtype(tup_atom_int, tup_s3_any))
test_fn.("tuple_size_2_any <: tuple_size_3_any", false, is_subtype(tup_s2_any, tup_s3_any))
test_fn.("{} <: tuple()", true, is_subtype(t_empty_tuple, t_tuple))
test_fn.("{} <: tuple_size_2_any", false, is_subtype(t_empty_tuple, tup_s2_any))
IO.puts("\n--- Section: Intersection ---")
# {atom, any} & {any, int} -> {atom, int}
intersect1 = intersect(tup_atom_any, tup_any_int)
test_fn.("({atom,any} & {any,int}) == {atom,int}", true, intersect1 == tup_atom_int)
# {atom, int} & {int, atom} -> none
intersect2 = intersect(tup_atom_int, tup_int_atom)
test_fn.("({atom,int} & {int,atom}) == none", true, intersect2 == t_none)
# tuple_size_2 & tuple_size_3 -> none
intersect3 = intersect(tup_s2_any, tup_s3_any)
test_fn.("(tuple_size_2 & tuple_size_3) == none", true, intersect3 == t_none)
# tuple() & {atom, int} -> {atom, int}
intersect4 = intersect(t_tuple, tup_atom_int)
test_fn.("(tuple() & {atom,int}) == {atom,int}", true, intersect4 == tup_atom_int)
IO.puts("\n--- Section: Union ---")
# {:foo, 5} | {pos_int, atom}
union1 = sum(tup_foo_5, tup_pos_atom)
test_fn.("{:foo, 5} <: ({:foo, 5} | {pos_int, atom})", true, is_subtype(tup_foo_5, union1))
test_fn.(
"{pos_int, atom} <: ({:foo, 5} | {pos_int, atom})",
true,
is_subtype(tup_pos_atom, union1)
)
test_fn.(
"{atom, int} <: ({:foo, 5} | {pos_int, atom})",
false,
is_subtype(tup_atom_int, union1)
)
# {atom, any} | {any, int} -> a complex type, let's check subtyping against it
union2 = sum(tup_atom_any, tup_any_int)
# {atom, int} is in both parts of the union.
test_fn.("{atom, int} <: ({atom,any} | {any,int})", true, is_subtype(tup_atom_int, union2))
# {:foo, :bar} is only in {atom, any}.
test_fn.(
"{:foo, :bar} <: ({atom,any} | {any,int})",
true,
is_subtype(type_tuple([t_foo, t_bar]), union2)
)
# {5, 6} is only in {any, int}.
test_fn.(
"{5, 6} <: ({atom,any} | {any,int})",
true,
is_subtype(type_tuple([t_int_5, t_int_6]), union2)
)
# {5, :foo} is in neither part of the union.
test_fn.(
"{5, :foo} <: ({atom,any} | {any,int})",
false,
is_subtype(type_tuple([t_int_5, t_foo]), union2)
)
IO.puts("\n--- Section: Negation and Type Difference ---")
# atom is disjoint from tuple, so atom <: ¬tuple
test_fn.("atom <: ¬tuple", true, is_subtype(t_atom, negate(t_tuple)))
# A specific tuple should not be a subtype of the negation of a more general tuple type it belongs to
test_fn.("{atom, int} <: ¬tuple()", false, is_subtype(tup_atom_int, negate(t_tuple)))
# {int, atom} is a subtype of ¬{atom, int} because their elements differ
test_fn.("{int, atom} <: ¬{atom, int}", true, is_subtype(tup_int_atom, negate(tup_atom_int)))
# tuple_size_3 is a subtype of ¬tuple_size_2 because their sizes differ
test_fn.("tuple_size_3 <: ¬tuple_size_2", true, is_subtype(tup_s3_any, negate(tup_s2_any)))
# Type difference: tuple_size_2 - {atom, any} -> should be {¬atom, any} for size 2 tuples.
diff1 = intersect(tup_s2_any, negate(tup_atom_any))
# {integer, integer} has a first element that is not an atom, so it should be in the difference.
tup_int_int = type_tuple([t_int, t_int])
test_fn.("{int, int} <: (tuple_size_2 - {atom, any})", true, is_subtype(tup_int_int, diff1))
test_fn.(
"{atom, int} <: (tuple_size_2 - {atom, any})",
false,
is_subtype(tup_atom_int, diff1)
)
IO.puts("\n--- Section: Nested Tuples ---")
test_fn.("{{:foo}} <: {{atom}}", true, is_subtype(tup_nested_foo, tup_nested_atom))
test_fn.("{{atom}} <: {{:foo}}", false, is_subtype(tup_nested_atom, tup_nested_foo))
# Intersection of disjoint nested types: {{:foo}} & {{:bar}}
intersect_nested = intersect(tup_nested_foo, type_tuple([type_tuple([t_bar])]))
test_fn.("{{:foo}} & {{:bar}} == none", true, intersect_nested == t_none)
# Union of nested types
union_nested = sum(tup_nested_foo, type_tuple([type_tuple([t_bar])]))
test_fn.("{{:foo}} <: ({{:foo}} | {{:bar}})", true, is_subtype(tup_nested_foo, union_nested))
test_fn.(
"{{:bar}} <: ({{:foo}} | {{:bar}})",
true,
is_subtype(type_tuple([type_tuple([t_bar])]), union_nested)
)
test_fn.(
"{{atom}} <: ({{:foo}} | {{:bar}})",
false,
is_subtype(tup_nested_atom, union_nested)
)
IO.puts("\n--- Section: Edge Cases (any, none) ---")
# A tuple type `{any, none}` is uninhabited: no value has type `none`, so no
# tuple can have a `none` element. The simplification logic should reduce it
# to `type_none`.
test_fn.("{any, none} == none", true, tup_with_none == t_none)
# Intersection with a tuple containing none should result in none
intersect_with_none_tuple = intersect(tup_atom_int, tup_with_none)
test_fn.("{atom,int} & {any,none} == none", true, intersect_with_none_tuple == t_none)
# Union with a tuple containing none should be a no-op
union_with_none_tuple = sum(tup_atom_int, tup_with_none)
test_fn.("{atom,int} | {any,none} == {atom,int}", true, union_with_none_tuple == tup_atom_int)
# --- Original tests from problem description for regression ---
IO.puts("\n--- Specific Tuple Subtyping Test (Original) ---")
test_fn.(
"{1, :foo} <: {int_gt_0, :foo | :bar}",
true,
is_subtype(
type_tuple([type_int_eq(1), type_atom_literal(:foo)]),
type_tuple([type_int_gt(0), sum(type_atom_literal(:foo), type_atom_literal(:bar))])
)
)
test_fn.(
"{0, :foo} <: {int_gt_0, :foo | :bar}",
false,
is_subtype(
type_tuple([type_int_eq(0), type_atom_literal(:foo)]),
type_tuple([type_int_gt(0), sum(type_atom_literal(:foo), type_atom_literal(:bar))])
)
)
test_fn.(
"{1, :kek} <: {int_gt_0, :foo | :bar}",
false,
is_subtype(
type_tuple([
type_int_eq(1),
type_atom_literal(:kek)
]),
type_tuple([type_int_gt(0), sum(type_atom_literal(:foo), type_atom_literal(:bar))])
)
)
IO.inspect(Process.get(:test_failures, []), label: "TupleTests failures")
end
end
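Fixed-arity tuple intersection, as exercised above, works element-wise: intersect each component, and if any component becomes empty the whole tuple type collapses to `none`. A hedged sketch with `MapSet`s standing in for element types (`intersect_tuple`, `atoms`, `ints`, and `any` are illustrative, not part of `Tdd`):

```elixir
# Element types as MapSets (illustrative); a tuple type is a list of them
intersect_tuple = fn elems_a, elems_b ->
  elems = Enum.zip_with(elems_a, elems_b, &MapSet.intersection/2)
  # any empty component makes the whole tuple type uninhabited
  if Enum.any?(elems, &(MapSet.size(&1) == 0)), do: :none, else: elems
end

atoms = MapSet.new([:foo, :bar])
ints = MapSet.new([1, 2])
any = MapSet.union(atoms, ints)

# mirrors "{atom, any} & {any, int} == {atom, int}"
IO.inspect(intersect_tuple.([atoms, any], [any, ints]) == [atoms, ints])
# mirrors "{atom, int} & {int, atom} == none"
IO.inspect(intersect_tuple.([atoms, ints], [ints, atoms]))
```

Tuples of different arity never overlap, which is the separate size check behind `(tuple_size_2 & tuple_size_3) == none`.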
defmodule ListTests do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
Tdd.init_tdd_system()
IO.puts("\n--- Running ListTests ---")
# --- Basic Types ---
t_atom = type_atom()
t_int = type_integer()
t_foo = type_atom_literal(:foo)
t_bar = type_atom_literal(:bar)
t_any = type_any()
t_none = type_none()
# --- List Types ---
t_list = type_list()
t_empty_list = type_empty_list()
# [atom | list]
t_cons_atom_list = type_cons(t_atom, t_list)
# [:foo | []]
t_cons_foo_empty = type_cons(t_foo, t_empty_list)
# [atom | []]
t_cons_atom_empty = type_cons(t_atom, t_empty_list)
# [any | []]
t_cons_any_empty = type_cons(t_any, t_empty_list)
# [integer | list]
t_cons_int_list = type_cons(t_int, t_list)
IO.puts("\n--- Section: Basic List Subtyping ---")
test_fn.("[] <: list", true, is_subtype(t_empty_list, t_list))
test_fn.("list <: []", false, is_subtype(t_list, t_empty_list))
test_fn.("[atom|list] <: list", true, is_subtype(t_cons_atom_list, t_list))
test_fn.("list <: [atom|list]", false, is_subtype(t_list, t_cons_atom_list))
test_fn.("[] <: [atom|list]", false, is_subtype(t_empty_list, t_cons_atom_list))
test_fn.("[atom|list] <: []", false, is_subtype(t_cons_atom_list, t_empty_list))
test_fn.("list <: atom", false, is_subtype(t_list, t_atom))
test_fn.("atom <: list", false, is_subtype(t_atom, t_list))
IO.puts("\n--- Section: Cons Subtyping (Covariance) ---")
# Head is a subtype
test_fn.("[:foo|[]] <: [atom|[]]", true, is_subtype(t_cons_foo_empty, t_cons_atom_empty))
test_fn.("[atom|[]] <: [:foo|[]]", false, is_subtype(t_cons_atom_empty, t_cons_foo_empty))
# Tail is a subtype
test_fn.("[atom|[]] <: [atom|list]", true, is_subtype(t_cons_atom_empty, t_cons_atom_list))
test_fn.("[atom|list] <: [atom|[]]", false, is_subtype(t_cons_atom_list, t_cons_atom_empty))
# Both are subtypes
test_fn.("[:foo|[]] <: [atom|list]", true, is_subtype(t_cons_foo_empty, t_cons_atom_list))
# Neither is a subtype
test_fn.("[atom|list] <: [:foo|[]]", false, is_subtype(t_cons_atom_list, t_cons_foo_empty))
# A one-element list of atoms is a subtype of a one-element list of any
test_fn.("[atom|[]] <: [any|[]]", true, is_subtype(t_cons_atom_empty, t_cons_any_empty))
IO.puts("\n--- Section: List Intersection ---")
# [atom|list] & [integer|list] -> should be none due to head conflict
intersect1 = intersect(t_cons_atom_list, t_cons_int_list)
test_fn.("[atom|list] & [integer|list] == none", true, intersect1 == t_none)
# [any|[]] & [atom|list] -> should be [atom|[]]
intersect2 = intersect(t_cons_any_empty, t_cons_atom_list)
test_fn.("([any|[]] & [atom|list]) == [atom|[]]", true, intersect2 == t_cons_atom_empty)
# [] & [atom|list] -> should be none because one is empty and one is not
intersect3 = intersect(t_empty_list, t_cons_atom_list)
test_fn.("[] & [atom|list] == none", true, intersect3 == t_none)
IO.puts("\n--- Section: List Union ---")
# [] | [atom|[]]
union1 = sum(t_empty_list, t_cons_atom_empty)
test_fn.("[] <: ([] | [atom|[]])", true, is_subtype(t_empty_list, union1))
test_fn.("[atom|[]] <: ([] | [atom|[]])", true, is_subtype(t_cons_atom_empty, union1))
test_fn.(
"[integer|[]] <: ([] | [atom|[]])",
false,
is_subtype(type_cons(t_int, t_empty_list), union1)
)
# [:foo|[]] | [:bar|[]]
union2 = sum(t_cons_foo_empty, type_cons(t_bar, t_empty_list))
# This union is a subtype of [atom|[]]
test_fn.("([:foo|[]] | [:bar|[]]) <: [atom|[]]", true, is_subtype(union2, t_cons_atom_empty))
test_fn.("[atom|[]] <: ([:foo|[]] | [:bar|[]])", false, is_subtype(t_cons_atom_empty, union2))
IO.puts("\n--- Section: List Negation ---")
# list is a subtype of not(atom)
test_fn.("list <: ¬atom", true, is_subtype(t_list, negate(t_atom)))
# A non-empty list is a subtype of not an empty list
test_fn.("[atom|list] <: ¬[]", true, is_subtype(t_cons_atom_list, negate(t_empty_list)))
# [integer|list] is a subtype of not [atom|list]
test_fn.(
"[integer|list] <: ¬[atom|list]",
true,
is_subtype(t_cons_int_list, negate(t_cons_atom_list))
)
IO.inspect(Process.get(:test_failures, []), label: "ListTests failures")
end
end
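The cons/empty-list tests above mirror how a list type unfolds as `[] | [X | list(X)]`. A minimal structural-recursion sketch of checking a concrete value against `list(integer)` under that unfolding (the `ListOfSketch` module is illustrative only, not part of `Tdd`):

```elixir
defmodule ListOfSketch do
  # list(X) unfolds as [] | [X | list(X)]; the membership check
  # follows the same unfolding: [] always inhabits list(X), and a
  # cons inhabits it when the head has type X and the tail recurses
  def list_of?([], _elem?), do: true
  def list_of?([head | tail], elem?), do: elem?.(head) and list_of?(tail, elem?)
end

IO.inspect(ListOfSketch.list_of?([1, 2, 3], &is_integer/1))
IO.inspect(ListOfSketch.list_of?([1, :foo], &is_integer/1))
IO.inspect(ListOfSketch.list_of?([], &is_integer/1))
```

This is also why `[] <: list(integer)` holds in the tests below: the empty list inhabits `list(X)` for every `X`.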
defmodule ListOfTests do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
Tdd.init_tdd_system()
IO.puts("\n--- Running ListOfTests ---")
# --- Basic Types ---
t_atom = type_atom()
t_int = type_integer()
t_pos_int = type_int_gt(0)
t_int_5 = type_int_eq(5)
# --- list(X) Types ---
t_list_of_int = type_list_of(t_int)
t_list_of_pos_int = type_list_of(t_pos_int)
t_list_of_atom = type_list_of(t_atom)
# --- Specific List Types ---
t_list = type_list()
t_empty_list = type_empty_list()
# [5]
t_list_one_int = type_cons(t_int_5, t_empty_list)
# [:foo]
t_list_one_atom = type_cons(type_atom_literal(:foo), t_empty_list)
# [5, :foo]
t_list_int_and_atom = type_cons(t_int_5, type_cons(type_atom_literal(:foo), t_empty_list))
IO.puts("\n--- Section: Basic list(X) Subtyping ---")
test_fn.("list(integer) <: list()", true, is_subtype(t_list_of_int, t_list))
test_fn.("list() <: list(integer)", false, is_subtype(t_list, t_list_of_int))
test_fn.("[] <: list(integer)", true, is_subtype(t_empty_list, t_list_of_int))
test_fn.("[5] <: list(integer)", true, is_subtype(t_list_one_int, t_list_of_int))
test_fn.("[:foo] <: list(integer)", false, is_subtype(t_list_one_atom, t_list_of_int))
test_fn.("[5, :foo] <: list(integer)", false, is_subtype(t_list_int_and_atom, t_list_of_int))
test_fn.(
"[5, :foo] <: list(any)",
true,
is_subtype(t_list_int_and_atom, type_list_of(type_any()))
)
IO.puts("\n--- Section: Covariance of list(X) ---")
test_fn.(
"list(pos_integer) <: list(integer)",
true,
is_subtype(t_list_of_pos_int, t_list_of_int)
)
test_fn.(
"list(integer) <: list(pos_integer)",
false,
is_subtype(t_list_of_int, t_list_of_pos_int)
)
IO.puts("\n--- Section: Intersection of list(X) ---")
# list(integer) & list(pos_integer) should be list(pos_integer)
intersect1 = intersect(t_list_of_int, t_list_of_pos_int)
test_fn.(
"(list(int) & list(pos_int)) == list(pos_int)",
true,
intersect1 == t_list_of_pos_int
)
# list(integer) & list(atom) should be just [] (empty list is the only common member)
# The system simplifies this intersection to a type that only accepts the empty list.
intersect2 = intersect(t_list_of_int, t_list_of_atom)
test_fn.("[] <: (list(int) & list(atom))", true, is_subtype(t_empty_list, intersect2))
test_fn.("[5] <: (list(int) & list(atom))", false, is_subtype(t_list_one_int, intersect2))
test_fn.("[:foo] <: (list(int) & list(atom))", false, is_subtype(t_list_one_atom, intersect2))
# It should be equivalent to `type_empty_list`
test_fn.("(list(int) & list(atom)) == []", true, intersect2 == t_empty_list)
IO.puts("\n--- Section: Intersection of list(X) with cons ---")
# list(integer) & [:foo | []] -> should be none
intersect3 = intersect(t_list_of_int, t_list_one_atom)
test_fn.("list(integer) & [:foo] == none", true, intersect3 == type_none())
# list(integer) & [5 | []] -> should be [5 | []]
intersect4 = intersect(t_list_of_int, t_list_one_int)
test_fn.("list(integer) & [5] == [5]", true, intersect4 == t_list_one_int)
# list(integer) & [5, :foo] -> should be none
intersect5 = intersect(t_list_of_int, t_list_int_and_atom)
test_fn.("list(integer) & [5, :foo] == none", true, intersect5 == type_none())
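# Hedged extra check (not in the original suite): the structural equality (==)
# used above assumes both sides normalize to a canonical form. Semantic
# equivalence can also be verified via mutual subtyping, which does not rely
# on that assumption.
test_fn.(
"(list(integer) & [5]) equivalent to [5] via mutual subtyping",
true,
is_subtype(intersect4, t_list_one_int) and is_subtype(t_list_one_int, intersect4)
)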
IO.inspect(Process.get(:test_failures, []), label: "ListOfTests failures")
end
end
defmodule AdhocTest do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
Tdd.init_tdd_system()
IO.puts("\n--- Running AdhocTest ---")
# --- Basic Types ---
t_int = type_integer()
t_int_5 = type_int_eq(5)
# --- list(X) Types ---
t_list_of_int = type_list_of(t_int)
# --- Specific List Types ---
t_empty_list = type_empty_list()
# [5]
t_list_one_int = type_cons(t_int_5, t_empty_list)
intersect4 = intersect(t_list_of_int, t_list_one_int)
IO.puts("checking (list(integer) & [5]) <: [5]")
a = is_subtype(intersect4, t_list_one_int)
IO.puts("checking [5] <: (list(integer) & [5])")
b = is_subtype(t_list_one_int, intersect4)
# Equivalence requires both directions to hold, not merely to agree:
# `a == b` would also pass when both subtype checks are false.
test_fn.("list(integer) & [5] == [5]", true, a and b)
IO.inspect(Process.get(:test_failures, []), label: "AdhocTest failures")
end
end
test_all.()