Page 624 - AI for Good Innovate for Impact
• REQ-10: It is of added value that the system delivers dynamic budget rebalancing to align
user spending with predefined financial goals.
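The dynamic rebalancing in REQ-10 could work, for instance, by reclaiming surplus from under-spent categories and shifting it toward a goal-linked category. A minimal sketch; the category names, single-goal structure, and the 50% proportional rule are illustrative assumptions, not part of the requirement:

```python
def rebalance(budgets, spent, goal_category):
    """Shift unspent surplus from each category into the goal category.

    budgets/spent: dicts of category -> monthly amount.
    Returns a new budget dict; sketch only, assumes a single goal.
    """
    new_budgets = dict(budgets)
    freed = 0.0
    for cat, budget in budgets.items():
        if cat == goal_category:
            continue
        surplus = budget - spent.get(cat, 0.0)
        if surplus > 0:
            # Reclaim half of the surplus; the rest stays as slack.
            shift = surplus * 0.5
            new_budgets[cat] -= shift
            freed += shift
    new_budgets[goal_category] += freed
    return new_budgets

# Example: dining was under-spent, so the savings goal gets topped up.
budgets = {"dining": 400.0, "transport": 150.0, "savings": 200.0}
spent = {"dining": 250.0, "transport": 150.0}
print(rebalance(budgets, spent, "savings"))
```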
3.1 Future Work
• Regarding our AI engine, continuous improvement is needed to keep the model trained
on the latest financial product offerings, tools, and innovations, as well as on user input
[6]. We will also fine-tune our engine for further financial goals such as Emergency Fund,
Asset Building, or Passive Income Accumulation, following our financial freedom roadmap.
Each step requires SERA to be trained on a different dataset matched to its objective. For
example, Emergency Fund initiatives can differ per user: they may be internal (saving
from one's paycheck or spending to build one's own emergency fund), external (lending a
portion of the fund to offset other people's debt in exchange for a percentage of return),
or a combination of both. Similarly, Asset Building and Passive Income Accumulation
require datasets covering investment vehicles, asset types, risk tolerance, time-frame
constraints, and income type (asset protection/building vs. income producing).
• The additional resources needed, spanning technology upgrades, further data collection,
and model improvements, are detailed below:
Model Layer
• We plan to use PyTorch as the primary framework for all our machine learning and AI needs
• Base LLM Models:
• Mixtral or LLaMA for context handling
• Specialized models: TimeLLM, T5, Longformer (for large document/time modeling).
• Behavior Prediction Models:
• Train smaller classifiers (XGBoost, LightGBM) for quick pattern detection.
• Use Hybrid LLM + Classical ML when speed is needed.
• Use La Plateforme for fine-tuning needs
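The "Hybrid LLM + Classical ML when speed is needed" point above amounts to confidence-based routing: serve the fast classifier's answer when it is confident, and fall back to the slower LLM path otherwise. A minimal sketch with stubbed-out models; the toy spend-ratio rule, thresholds, and stub outputs are illustrative assumptions:

```python
def fast_classifier(features):
    """Stand-in for an XGBoost/LightGBM pattern detector.
    Returns (label, confidence); here a toy rule on spend ratio."""
    ratio = features["spend"] / max(features["budget"], 1.0)
    if ratio > 1.2:
        return "overspending", 0.95
    if ratio < 0.5:
        return "underspending", 0.9
    return "normal", 0.55  # ambiguous region, low confidence

def llm_classifier(features):
    """Stand-in for the slower LLM path (e.g. Mixtral or LLaMA)."""
    return "normal", 0.99

def route(features, threshold=0.8):
    """Hybrid routing: trust the fast model only above the threshold."""
    label, conf = fast_classifier(features)
    if conf >= threshold:
        return label, "fast"
    return llm_classifier(features)[0], "llm"

print(route({"spend": 1300.0, "budget": 1000.0}))  # served by fast path
print(route({"spend": 900.0, "budget": 1000.0}))   # falls back to LLM
```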
Data Layer
• Sequential Data Pipelines:
• Preprocess transaction streams into chronological sequences.
• Windowing: Break sequences into 1-week, 1-month, 3-month snapshots.
• Behavioral Embedding Store:
• Vector DB for storing user behavioral profiles.
• Compare "current self" vs "historical self" for anomaly scoring.
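The two Data Layer pieces above can be sketched together: bucket a chronological transaction stream into fixed windows, turn each window into a simple behavioral vector (per-category spend totals standing in for a learned embedding), and score an anomaly as the distance between the "current self" vector and the averaged "historical self". The window size, categories, and the cosine-distance score are illustrative assumptions:

```python
import math
from collections import defaultdict
from datetime import date

def window_transactions(txns, start, days=7):
    """Bucket (date, category, amount) records into fixed-size windows."""
    windows = defaultdict(list)
    for when, category, amount in txns:
        idx = (when - start).days // days
        windows[idx].append((category, amount))
    return [windows[i] for i in sorted(windows)]

def behavior_vector(window, categories):
    """Per-category spend totals: a tiny stand-in for a learned embedding."""
    totals = defaultdict(float)
    for category, amount in window:
        totals[category] += amount
    return [totals[c] for c in categories]

def anomaly_score(current, historical):
    """1 - cosine similarity between current and historical behavior."""
    dot = sum(a * b for a, b in zip(current, historical))
    norm = math.sqrt(sum(a * a for a in current)) * math.sqrt(
        sum(b * b for b in historical))
    return 1.0 if norm == 0 else 1.0 - dot / norm

txns = [
    (date(2024, 1, 1), "groceries", 80.0),
    (date(2024, 1, 3), "dining", 40.0),
    (date(2024, 1, 9), "groceries", 85.0),
    (date(2024, 1, 16), "gambling", 500.0),  # sudden behavior shift
]
cats = ["groceries", "dining", "gambling"]
weeks = window_transactions(txns, date(2024, 1, 1), days=7)
vectors = [behavior_vector(w, cats) for w in weeks]
historical = [sum(v[i] for v in vectors[:-1]) / (len(vectors) - 1)
              for i in range(len(cats))]
print(anomaly_score(vectors[-1], historical))
```

In production the behavioral vectors would live in the vector DB mentioned above, with the same compare-against-history query run per user.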
Infrastructure Layer
• Model Training Compute:
• Higher-end GPUs (A100 or H100) for fine-tuning on long-context data.
• Use cloud services such as AWS SageMaker or Hugging Face, or local clusters if needed.
• Serving and Detection Engines:
• Real-time ingestion (Kafka, PubSub) for new transaction streams.
• Microservices using FastAPI or Node.js for fast predictions.
• Behavior Monitoring Platform:
• Dashboards (Superset, Metabase) to visualize emerging patterns across users.
• Alerting pipelines (PagerDuty, Slack) if massive pattern shifts occur.
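The alerting pipeline above reduces to a population-level check: if the fraction of users whose anomaly score crosses a per-user threshold exceeds a fleet-wide limit, fire an alert (to PagerDuty or Slack in production). The two thresholds and the stubbed notifier are illustrative assumptions:

```python
def shift_alert(scores, user_threshold=0.8, fleet_threshold=0.2):
    """Return (should_alert, fraction_anomalous) for a batch of user scores."""
    if not scores:
        return False, 0.0
    anomalous = sum(1 for s in scores if s >= user_threshold)
    fraction = anomalous / len(scores)
    return fraction >= fleet_threshold, fraction

def notify(channel, message):
    """Stub standing in for a PagerDuty/Slack webhook call."""
    print(f"[{channel}] {message}")

# One anomaly score per user from the behavior monitoring platform.
scores = [0.1, 0.05, 0.9, 0.85, 0.2, 0.95, 0.1, 0.3, 0.88, 0.15]
alert, frac = shift_alert(scores)
if alert:
    notify("slack", f"Pattern shift: {frac:.0%} of users anomalous")
```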

