
Computer Science Algorithms and Interactive Media

5 min read
Computer Science · Algorithms · #machine-learning #dynamic-programming #complexity-analysis

Algorithm Analysis and Visual Computing

This post surveys several core topics in algorithm analysis: asymptotic notation, graph representations, dynamic programming, gradient descent, and sorting, with runnable code samples along the way.

Big O Notation

Big O notation describes how an algorithm's running time or memory grows as the input size n grows. For example, the time complexity of binary search is:

T(n) = O(\log n)

While the space complexity of merge sort can be expressed as:

S(n) = O(n)

[Figure: Algorithm Complexity Chart]
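To make the logarithmic bound concrete, here is a minimal sketch of an iterative binary search; each pass halves the remaining interval, so the loop runs O(log n) times:

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # halve the search interval each pass
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1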

Graph Theory

One common way to represent a graph is the adjacency matrix:

The adjacency matrix is defined as A_{ij} = 1 if there is an edge from vertex i to vertex j, and A_{ij} = 0 otherwise.

This representation makes edge lookups O(1), at the cost of O(n^2) space, which makes it best suited to dense graphs.

[Figure: Graph Traversal Animation]
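To make the traversal concrete, here is a small sketch that builds an adjacency matrix for a directed graph and runs breadth-first search over it; scanning a row costs O(V), so the whole traversal is O(V^2) with this representation:

from collections import deque

def build_adjacency_matrix(n, edges):
    """A[i][j] = 1 if there is an edge from vertex i to vertex j."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    return A

def bfs(A, start):
    """Visit vertices in breadth-first order from start."""
    visited = [False] * len(A)
    visited[start] = True
    order, queue = [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, has_edge in enumerate(A[u]):  # O(V) row scan
            if has_edge and not visited[v]:
                visited[v] = True
                queue.append(v)
    return order

A = build_adjacency_matrix(5, [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)])
print(bfs(A, 0))  # [0, 1, 2, 3, 4]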

Dynamic Programming

Dynamic programming speeds up recursive algorithms by caching the results of overlapping subproblems. The Fibonacci sequence is the classic example:

The Fibonacci sequence is defined recursively as:

  • F(0) = 0
  • F(1) = 1
  • F(n) = F(n-1) + F(n-2) for n > 1
// Dynamic programming solution for the Fibonacci sequence
use std::collections::HashMap;

struct FibonacciCalculator {
    // Memoization table mapping n to F(n). Values are u128 because
    // F(94) already overflows u64.
    memo: HashMap<u64, u128>,
}

impl FibonacciCalculator {
    fn new() -> Self {
        let mut memo = HashMap::new();
        // Seed the base cases F(0) = 0 and F(1) = 1.
        memo.insert(0, 0);
        memo.insert(1, 1);
        Self { memo }
    }

    // Top-down dynamic programming: each F(n) is computed at most once,
    // so filling the table up to n costs O(n) additions.
    fn fibonacci(&mut self, n: u64) -> u128 {
        if let Some(&result) = self.memo.get(&n) {
            return result;
        }

        let result = self.fibonacci(n - 1) + self.fibonacci(n - 2);
        self.memo.insert(n, result);
        result
    }

    fn fibonacci_sequence(&mut self, count: usize) -> Vec<u128> {
        (0..count as u64).map(|i| self.fibonacci(i)).collect()
    }
}

fn main() {
    let mut calc = FibonacciCalculator::new();
    let sequence = calc.fibonacci_sequence(20);

    println!("First 20 Fibonacci numbers:");
    for (i, &fib) in sequence.iter().enumerate() {
        println!("F({}) = {}", i, fib);
    }

    // Time complexity check: thanks to memoization this is fast even
    // though naive recursion would take exponential time.
    let start = std::time::Instant::now();
    let large_fib = calc.fibonacci(100);
    let duration = start.elapsed();

    println!("\nF(100) = {} (computed in {:?})", large_fib, duration);
}

Machine Learning

Gradient descent minimizes a cost function by repeatedly stepping each parameter opposite its partial derivative. The update rule is:

\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)

where \alpha is the learning rate and J(\theta) is the cost function.
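As a minimal sketch of the rule in action, take a single-parameter cost J(θ) = (θ - 3)^2, whose gradient is 2(θ - 3); repeated updates converge toward the minimum at θ = 3:

def gradient_descent(grad, theta, alpha=0.1, steps=50):
    """Apply theta := theta - alpha * dJ/dtheta for a fixed number of steps."""
    for _ in range(steps):
        theta -= alpha * grad(theta)
    return theta

# J(theta) = (theta - 3)**2  =>  dJ/dtheta = 2 * (theta - 3)
theta_star = gradient_descent(lambda t: 2 * (t - 3), theta=0.0)
print(f"theta after 50 steps: {theta_star:.5f}")  # approaches 3.0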

[Figure: Neural Network Diagram]

Computational Complexity

The P vs NP problem remains one of the most important open questions in computer science: it asks whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time.

Data Structure Operations

Different structures trade off the cost of their core operations: a hash table offers average O(1) membership tests, while an unsorted list needs an O(n) scan and a balanced search tree sits in between at O(log n).
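A quick sketch makes the gap measurable by timing membership tests against a list and a set (Python's built-in hash table):

import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
probe = -1  # absent value: worst case for the list scan

list_time = timeit.timeit(lambda: probe in as_list, number=100)
set_time = timeit.timeit(lambda: probe in as_set, number=100)

print(f"list membership, O(n):  {list_time:.4f}s")
print(f"set membership,  O(1):  {set_time:.6f}s")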

Sorting Algorithms

Quicksort shows how far an algorithm's average and worst cases can diverge. Its average-case time complexity is:

T_{avg}(n) = O(n \log n)

However, the worst case is:

T_{worst}(n) = O(n^2)

// Quicksort implementation with complexity analysis
package main

import (
    "fmt"
    "math"
    "math/rand"
    "time"
)

type SortAnalyzer struct {
    comparisons int64
    swaps       int64
}

func (sa *SortAnalyzer) quickSort(arr []int, low, high int) {
    if low < high {
        pi := sa.partition(arr, low, high)
        sa.quickSort(arr, low, pi-1)
        sa.quickSort(arr, pi+1, high)
    }
}

func (sa *SortAnalyzer) partition(arr []int, low, high int) int {
    pivot := arr[high]
    i := low - 1
    
    for j := low; j < high; j++ {
        sa.comparisons++
        if arr[j] < pivot {
            i++
            arr[i], arr[j] = arr[j], arr[i]
            sa.swaps++
        }
    }
    
    arr[i+1], arr[high] = arr[high], arr[i+1]
    sa.swaps++
    return i + 1
}

func generateRandomArray(size int) []int {
    arr := make([]int, size)
    rand.Seed(time.Now().UnixNano())
    
    for i := 0; i < size; i++ {
        arr[i] = rand.Intn(1000)
    }
    
    return arr
}

func analyzeComplexity(size int) {
    arr := generateRandomArray(size)
    analyzer := &SortAnalyzer{}
    
    start := time.Now()
    analyzer.quickSort(arr, 0, len(arr)-1)
    duration := time.Since(start)
    
    fmt.Printf("Array size: %d\n", size)
    fmt.Printf("Time: %v\n", duration)
    fmt.Printf("Comparisons: %d\n", analyzer.comparisons)
    fmt.Printf("Swaps: %d\n", analyzer.swaps)
    fmt.Printf("Theoretical O(n log n): %.2f\n", float64(size)*math.Log2(float64(size)))
    fmt.Println("---")
}

func main() {
    sizes := []int{100, 1000, 10000, 100000}
    
    fmt.Println("Quicksort Complexity Analysis:")
    for _, size := range sizes {
        analyzeComplexity(size)
    }
}

Running the analyzer over increasing input sizes lets you check how closely the measured comparison counts track the theoretical n log₂ n curve.

Machine Learning Implementation

Here's a simple feed-forward neural network trained with gradient descent on the XOR problem:

import numpy as np

class SimpleNeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size, learning_rate=0.01):
        # Initialize weights with small random values
        self.W1 = np.random.randn(input_size, hidden_size) * 0.1
        self.b1 = np.zeros((1, hidden_size))
        self.W2 = np.random.randn(hidden_size, output_size) * 0.1
        self.b2 = np.zeros((1, output_size))
        
        self.learning_rate = learning_rate
        self.loss_history = []
    
    def sigmoid(self, x):
        return 1 / (1 + np.exp(-np.clip(x, -500, 500)))
    
    def sigmoid_derivative(self, x):
        return x * (1 - x)
    
    def forward(self, X):
        self.z1 = np.dot(X, self.W1) + self.b1
        self.a1 = self.sigmoid(self.z1)
        self.z2 = np.dot(self.a1, self.W2) + self.b2
        self.a2 = self.sigmoid(self.z2)
        return self.a2
    
    def backward(self, X, y, output):
        m = X.shape[0]
        
        # Calculate gradients
        # (output - y) is the exact output-layer gradient for a sigmoid
        # activation paired with cross-entropy loss; the MSE above is
        # reported only for monitoring.
        dZ2 = output - y
        dW2 = (1/m) * np.dot(self.a1.T, dZ2)
        db2 = (1/m) * np.sum(dZ2, axis=0, keepdims=True)
        
        dA1 = np.dot(dZ2, self.W2.T)
        dZ1 = dA1 * self.sigmoid_derivative(self.a1)
        dW1 = (1/m) * np.dot(X.T, dZ1)
        db1 = (1/m) * np.sum(dZ1, axis=0, keepdims=True)
        
        # Update parameters
        self.W2 -= self.learning_rate * dW2
        self.b2 -= self.learning_rate * db2
        self.W1 -= self.learning_rate * dW1
        self.b1 -= self.learning_rate * db1
    
    def train(self, X, y, epochs=1000):
        for epoch in range(epochs):
            # Forward propagation
            output = self.forward(X)
            
            # Calculate loss
            loss = np.mean((output - y) ** 2)
            self.loss_history.append(loss)
            
            # Backward propagation
            self.backward(X, y, output)
            
            if epoch % 100 == 0:
                print(f'Epoch {epoch}, Loss: {loss:.6f}')
    
    def predict(self, X):
        return self.forward(X)

# Example usage: XOR problem
if __name__ == "__main__":
    # XOR dataset
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([[0], [1], [1], [0]])
    
    # Create and train network
    nn = SimpleNeuralNetwork(2, 4, 1, learning_rate=0.1)
    nn.train(X, y, epochs=5000)
    
    # Test predictions
    predictions = nn.predict(X)
    print("\nPredictions:")
    for i, pred in enumerate(predictions):
        print(f"Input: {X[i]}, Target: {y[i][0]}, Prediction: {pred[0]:.4f}")