Architecting a Robust Trading Bot in Python
A comprehensive code walkthrough of a modular, Flask-controlled trading engine for Zerodha
Download source code using the button at the end of this article!
# file path: src/main.py
import os
import logging
from flask import Flask
from config.Config import getBrokerAppConfig, getServerConfig, getSystemConfig
from restapis.HomeAPI import HomeAPI
from restapis.BrokerLoginAPI import BrokerLoginAPI
from restapis.StartAlgoAPI import StartAlgoAPI
from restapis.PositionsAPI import PositionsAPI
from restapis.HoldingsAPI import HoldingsAPI

For bootstrapping the trading runtime, the entrypoint imports os to read environment variables and perform basic path checks, and logging to configure and emit runtime diagnostics during strategy execution. Flask is imported because the entrypoint creates the web application that exposes the control and status REST surface. The three configuration accessors — getBrokerAppConfig, getServerConfig and getSystemConfig — load broker credentials and mappings, server binding details, and global system flags before the services and strategy are initialized. Finally, the REST handler classes HomeAPI, BrokerLoginAPI, StartAlgoAPI, PositionsAPI and HoldingsAPI are imported so the Flask app can wire up the endpoints that let an operator land on the home page, perform broker login, kick off BaseStrategy.run, and query live positions and holdings.
# file path: src/main.py
app = Flask(__name__)

This line creates the Flask application object app, the WSGI app and request dispatcher that hosts the engine's monitoring and control endpoints. Later lines interact with this same object: runtime options are set via app.config, endpoints such as HomeAPI are wired in with add_url_rule, and the server is started with app.run, so this instantiation is the foundation for all of the web-facing integrations that run alongside the strategy and execution components.
# file path: src/main.py
app.config['DEBUG'] = True

After the Flask application object is created, this sets its DEBUG flag, enabling development-time conveniences: verbose exception tracebacks and the automatic reloader, which restarts the server when application code changes. Within the entrypoint's bootstrapping sequence it is one of several environment-configuration steps — alongside initLoggingConfg and the logging.info calls that echo the loaded configuration — that prepare the runtime before BaseStrategy.run is eventually invoked. Keep in mind that debug mode should be disabled in production: it exposes Werkzeug's interactive debugger to anyone who can reach the server.
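Hardcoding DEBUG to True is risky if the same entrypoint ever runs outside development. A minimal sketch of driving the flag from the environment instead — the helper name and the APP_DEBUG variable are assumptions for illustration, not part of the project:

```python
import os

def debug_enabled(default=False):
    """Read a boolean debug flag from the APP_DEBUG environment
    variable; fall back to `default` when it is unset."""
    raw = os.environ.get("APP_DEBUG")
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# The entrypoint could then use:
# app.config['DEBUG'] = debug_enabled()
```

This keeps the development convenience available while making the production default safe.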
# file path: src/main.py
app.add_url_rule("/", view_func=HomeAPI.as_view("home_api"))

This registers the root URL on the Flask application, wiring "/" to the HomeAPI class-based view under the endpoint name home_api. It is the same registration pattern used throughout the file — add_url_rule binds a path to a view produced via as_view — but this one serves the base landing page (or a simple health/status check), whereas the registrations for HoldingsAPI, PositionsAPI and StartAlgoAPI expose specific runtime and control endpoints.
# file path: src/main.py
app.add_url_rule("/apis/broker/login/zerodha", view_func=BrokerLoginAPI.as_view("broker_login_api"))

This route exposes the Zerodha broker login flow: requests to /apis/broker/login/zerodha are dispatched to the BrokerLoginAPI class-based view (endpoint name broker_login_api). The registration follows the same pattern as the other endpoints, but BrokerLoginAPI is responsible for initiating and handling the Zerodha-specific authentication sequence, so the broker adaptor can obtain or refresh the credentials the connectivity and execution layers need before live trading begins. Registering it during bootstrap ensures the UI or automated components can trigger broker login at startup or at runtime.
# file path: src/main.py
app.add_url_rule("/apis/algo/start", view_func=StartAlgoAPI.as_view("start_algo_api"))

This endpoint is the operator-facing control that transitions the system from initialized to active: a request to /apis/algo/start is handled by StartAlgoAPI, which invokes the strategy execution path (eventually calling BaseStrategy.run) so the strategy can subscribe to market data, evaluate signals, and submit orders through the broker adapter. Unlike the read-oriented endpoints (HomeAPI, PositionsAPI, HoldingsAPI), StartAlgoAPI performs a state-changing orchestration to kick off live trading.
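The StartAlgoAPI implementation itself is not shown in this walkthrough. Because BaseStrategy.run blocks in a loop, a start endpoint typically launches the engine on a background thread so the HTTP request can return immediately. A sketch under that assumption — the names startAlgo, runStrategies and algoStarted are hypothetical, not the project's:

```python
import threading

algoStarted = False

def runStrategies():
    """Placeholder for the long-running work that would invoke
    BaseStrategy.run on each registered strategy."""
    pass

def startAlgo():
    """Start the algo engine once; subsequent calls are no-ops.
    Returns True when the engine was started by this call."""
    global algoStarted
    if algoStarted:
        return False  # already running
    worker = threading.Thread(target=runStrategies, daemon=True)
    worker.start()
    algoStarted = True
    return True
```

A daemon thread keeps the Flask request handler responsive while the strategy loop runs for the rest of the trading day.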
# file path: src/main.py
app.add_url_rule("/positions", view_func=PositionsAPI.as_view("positions_api"))

This registers the /positions route, exposing the runtime's current trade positions via the PositionsAPI class-based view so external consumers — the web dashboard, monitoring tools, or internal REST clients — can query live positions while the strategy is running. It follows the same as_view registration pattern as HoldingsAPI, HomeAPI and BrokerLoginAPI, but serves the active positions state specifically rather than holdings, the home page, or the login flow.
# file path: src/main.py
app.add_url_rule("/holdings", view_func=HoldingsAPI.as_view("holdings_api"))

This wires the /holdings path to the HoldingsAPI class-based view, which fetches and serializes live holdings data from the trade lifecycle layer or broker adaptor for dashboards and monitoring tools. The division of responsibility across the four routes is purely by domain: holdings returns aggregated owned instruments, positions returns active trade positions, home serves the base UI/health check, and broker login handles authentication. All routes are registered during bootstrap so the API surface is available by the time the strategy is started via BaseStrategy.run, enabling real-time observability and control while the trading runtime executes.
# file path: src/main.py
serverConfig = getServerConfig()

The entrypoint stores the runtime configuration in serverConfig so the process has a single, authoritative set of server settings to drive initialization. getServerConfig reads config/server.json and returns the parsed JSON as a dictionary; those values configure the deployment directory, log directory and server port (deployDir and port are read a few lines later, and serverConfig itself is logged), and other subsystems — getTimestampsData, saveTimestampsData, Instruments.fetchInstrumentsFromServer and TradeManager.registerStrategy — call getServerConfig again when they need the same paths. Capturing the configuration early means the entrypoint and everything downstream, including the call into BaseStrategy.run, operate in a consistent configuration context.
# file path: src/config/Config.py
import json
import os

The configuration loader uses json to parse and emit the JSON blobs that represent server settings, broker app settings and persisted timestamps, and os to interact with the host filesystem — locating config files, checking paths, and reading or writing the timestamps file. Other modules in the codebase import more alongside these (logging, the Config accessors themselves, domain types like Segment and ProductType); the minimal import set here reflects Config's responsibility as the low-level file and environment handler for the larger system.
# file path: src/config/Config.py
def getServerConfig():
    with open('../config/server.json', 'r') as server:
        jsonServerData = json.load(server)
        return jsonServerData

getServerConfig loads and parses the server configuration JSON from the project's config directory and returns it as a Python dictionary. It supplies the concrete server settings the process needs at startup: getTimestampsData and saveTimestampsData call it to derive the path where timestamps.json is read and written, and components such as Instruments.fetchInstrumentsFromServer, TradeManager.registerStrategy and the ticker and login code call it whenever they need runtime paths or flags. The function is simple file I/O plus JSON parsing, the same pattern used by getSystemConfig and getBrokerAppConfig. Note that the relative path '../config/server.json' is resolved against the process's working directory, so the app must be launched from the src directory for the file to be found.
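Because the relative path depends on the working directory, a more robust variant resolves the config directory relative to the module file instead. This is a sketch, not the project's code — loadConfig is a hypothetical generalization of the three accessors:

```python
import json
from pathlib import Path

def loadConfig(name, configDir=None):
    """Load <configDir>/<name>.json and return the parsed dict.

    By default the config directory is resolved relative to this
    module's location (../config next to src/config/Config.py)
    rather than the current working directory."""
    if configDir is None:
        base = Path(__file__).resolve().parent.parent / "config"
    else:
        base = Path(configDir)
    with open(base / (name + ".json"), "r") as f:
        return json.load(f)

# getServerConfig() would then be loadConfig("server"),
# getBrokerAppConfig() would be loadConfig("brokerapp"), and so on.
```

With this shape the process can be launched from any directory, and tests can point configDir at a fixture folder.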
# file path: src/main.py
deployDir = serverConfig['deployDir']

With serverConfig populated, the entrypoint pulls out deployDir so the rest of the startup sequence has a canonical filesystem location for runtime artifacts — deployment-specific config, timestamp files, and any data folders the engine expects. The same pattern is used a few lines later for logFileDir, and the value is immediately echoed by a print so operators can confirm which deployment path the runtime is using.
# file path: src/main.py
logFileDir = serverConfig['logFileDir']

This reads the configured log file directory from serverConfig so the entrypoint can use that path when initializing logging and pointing log handlers at the right place. It follows the same pattern as the reads of deployDir and port, and the value is echoed to the console by the print statement that follows.
# file path: src/main.py
print("Deploy Directory = " + deployDir)

This prints a labeled, human-readable echo of deployDir to standard output so an operator or startup log can immediately see which deployment directory the runtime will use. It is a simple read-then-echo confirmation of a critical path setting, written to stdout because it runs before the full logging subsystem is initialized; the same pattern is used for logFileDir.
# file path: src/main.py
print("LogFile Directory = " + logFileDir)

The entrypoint likewise emits logFileDir to the console so an operator or log collector can see where runtime logs will be written. It appears just before initLoggingConfg is invoked with a logfile path composed from this directory, making the printed line a startup visibility checkpoint that ties the loaded configuration to the logging initialization that follows.
# file path: src/main.py
initLoggingConfg(logFileDir + "/app.log")

This call establishes the process-wide logging sink the rest of the runtime will use: it configures Python's root logger to write to app.log under the configured log directory, formats each entry with a timestamp followed by the message, and sets the severity threshold to INFO with a human-readable date format. The function returns nothing; its purpose is to ensure that subsequent calls such as the logging.info that follows are persisted to the log file, giving broker adapters, market data ingestion, strategy execution and the order manager a consolidated, timestamped audit trail for debugging, monitoring and post-trade analysis.
# file path: src/main.py
def initLoggingConfg(filepath):
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(filename=filepath, format=format, level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S")

initLoggingConfg is a small bootstrapping helper that centralizes how the process emits runtime logs: given the path to the application log file, it configures Python's logging system to write messages to that file with a standardized timestamp, filtered at INFO level. Its effect is that all subsequent logging calls — such as the serverConfig and brokerAppConfig messages below — land in a single, consistently formatted logfile, so runtime events, authentication steps, strategy lifecycle messages and order activity are persisted for troubleshooting and auditing. Conceptually it is a thin wrapper around logging.basicConfig that sets filename, message format, date format and level so the rest of the framework can simply call logging.info/debug/error and rely on a file-backed destination.
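The pattern is easy to verify in isolation. A self-contained sketch of the same configuration against a temporary directory (it adds `force=True`, available since Python 3.8, so the root logger can be reconfigured even if handlers already exist — an assumption beyond the original helper):

```python
import logging
import os
import tempfile

def initLogging(filepath):
    """Configure the root logger to append timestamped INFO+
    messages to `filepath`, mirroring initLoggingConfg."""
    logging.basicConfig(
        filename=filepath,
        format="%(asctime)s: %(message)s",
        level=logging.INFO,
        datefmt="%Y-%m-%d %H:%M:%S",
        force=True,  # replace any existing root handlers
    )

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "app.log")
initLogging(logfile)
logging.info("serverConfig => %s", {"port": 8080})
logging.shutdown()  # flush and close handlers so the file is complete
```

Each line in app.log comes out as a timestamp, a colon, and the rendered message — the same shape the trading runtime's audit trail uses.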
# file path: src/main.py
logging.info('serverConfig => %s', serverConfig)

This emits the contents of serverConfig into the process log at INFO level so operators and developers can see exactly which server settings were loaded by getServerConfig — environment flags, directory paths, log and persistence settings — before those values drive behavior. Because the route registrations depend on the runtime configuration and port is later read from serverConfig, a recorded snapshot makes it straightforward to correlate what the service intended to use versus what it actually bound to. It plays the same role as the logging of brokerAppConfig below: a human-readable checkpoint of configuration state.
# file path: src/main.py
brokerAppConfig = getBrokerAppConfig()

After serverConfig was loaded, the entrypoint calls getBrokerAppConfig to populate brokerAppConfig with the broker-specific application settings the runtime needs to initialize connectivity and authentication with an exchange adaptor. The function reads and parses brokerapp.json; the returned dictionary is logged immediately afterward and later consumed by the Controller helpers (including getBrokerName and the broker login handlers) and the broker adaptor initialization code, so the strategy and execution layers start with the correct broker credentials, endpoints and app-level flags. The pattern is the same as for serverConfig: centralized JSON config read at startup and kept in a single variable for the rest of the runtime to reference.
# file path: src/config/Config.py
def getBrokerAppConfig():
    with open('../config/brokerapp.json', 'r') as brokerapp:
        jsonUserData = json.load(brokerapp)
        return jsonUserData

getBrokerAppConfig reads the broker application settings from the project's broker configuration file and returns them as a parsed JSON object. The entrypoint stores its result in brokerAppConfig so the Controller and authentication flow have a single source of truth for the broker name, client ID, app key, app secret and redirect information; BrokerAppDetails populates its instance fields from these values, and the broker-specific login implementation such as ZerodhaLogin consumes them to perform the actual login handshake. The function is simple file I/O following the same pattern as getServerConfig.
# file path: src/main.py
logging.info('brokerAppConfig => %s', brokerAppConfig)

With the logging sink established by initLoggingConfg and brokerAppConfig populated, this writes the broker application settings into the log at INFO level so operators can see exactly which broker configuration the runtime will use for connectivity and authentication, mirroring the earlier serverConfig emission. One caution: brokerapp.json contains secrets such as the app key and app secret, so logging its full contents puts those values in the log file; a production deployment would typically redact them.
# file path: src/main.py
port = serverConfig['port']

The entrypoint extracts the port setting from serverConfig so the startup sequence knows which TCP port to bind when it brings up the Flask server. This mirrors the earlier reads of deployDir and logFileDir: another runtime parameter pulled from the parsed server configuration and passed to the server that is about to start.
# file path: src/main.py
app.run('localhost', port)

app.run starts Flask's built-in development server bound to the loopback interface on the configured port. Because app was instantiated earlier and its debug flag set, this call hands control to Flask's request loop: the registered routes become live and begin handling incoming HTTP requests. In a startup sequence that has already initialized logging and loaded broker settings, app.run makes the runtime's control and observability surface reachable locally — dashboard, health checks, positions and control endpoints — and, since it blocks, keeps the process running so BaseStrategy.run and the rest of the trading runtime remain active. Binding to localhost means the API is deliberately reachable only from the local machine.
# file path: src/main.py
if os.path.exists(deployDir) == False:
    print("Deploy Directory " + deployDir + " does not exist. Exiting the app.")
    exit(-1)

After deployDir is pulled out of serverConfig, the entrypoint verifies the path actually exists on disk; if it does not, the process prints a message naming the missing directory and terminates with a non-zero exit status, stopping startup before it proceeds into a partially initialized runtime. This early validation protects later steps that expect deployed artifacts and configuration to live under deployDir. Because this check runs before logging is configured in the file's execution order, it uses a plain print to surface the problem, and the immediate exit ensures supervisors or CI see a failure rather than the app continuing in an invalid state. Unlike the nearby assignments and prints, which are informational, this is an explicit guard enforcing a required runtime precondition.
# file path: src/main.py
if os.path.exists(logFileDir) == False:
    print("LogFile Directory " + logFileDir + " does not exist. Exiting the app.")
    exit(-1)

The same fail-fast precondition is applied to logFileDir: if the directory is missing, the entrypoint emits a plain stdout message and terminates with a non-zero exit. This check runs before initLoggingConfg is called, so the runtime never attempts to create the application log file inside a non-existent directory — and because Python's logging has not been configured yet, print is used for the immediate error. Where the earlier lines merely assigned and echoed logFileDir, this check enforces that the logging sink can actually be created and stops the application cleanly when that prerequisite is absent.
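The two guards share a shape, so they can be factored into a small helper that reports every missing directory at once instead of stopping at the first. A sketch — the function name is an assumption, not the project's:

```python
import os

def missingDirectories(paths):
    """Return the subset of `paths` that do not exist on disk,
    preserving order, so the caller can report them all together."""
    return [p for p in paths if not os.path.exists(p)]

# Usage at startup (sys.exit raises SystemExit, the idiomatic
# alternative to the bare exit(-1) used in the walkthrough):
#
# missing = missingDirectories([deployDir, logFileDir])
# if missing:
#     print("Missing required directories: " + ", ".join(missing))
#     sys.exit(1)
```

Collecting all failures in one pass saves the operator a fix-restart-fail cycle per missing directory.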
# file path: src/config/Config.py
def getSystemConfig():
    with open('../config/system.json', 'r') as system:
        jsonSystemData = json.load(system)
        return jsonSystemData

getSystemConfig is the small synchronous helper in the centralized configuration loader that opens the system-level JSON configuration, parses it, and returns the resulting Python dictionary. Its purpose is to expose the runtime-level system flags stored in system.json to any component that needs them, following the same open-parse-return pattern as getServerConfig and getBrokerAppConfig.
# file path: src/config/Config.py
def getHolidays():
    with open('../config/holidays.json', 'r') as holidays:
        holidaysData = json.load(holidays)
        return holidaysData

Where getServerConfig and getBrokerAppConfig load startup settings, getHolidays plays the equivalent role for calendar data: it reads the project's holidays configuration and returns the parsed JSON so the runtime can consult official market holidays. Utilities in Utils — the functions that generate expiry symbols and the holiday-checking helpers used by strategies and the scheduler — call getHolidays to decide whether a given date is a holiday, to skip non-trading days, and to drive expiry calculations. Unlike getTimestampsData, it performs no deploy-directory resolution or missing-file handling; it simply reads and returns the holidays payload for callers to use.
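A holiday lookup built on top of this data might look like the following sketch. The helper name and the assumption that holidays.json holds a list of ISO-formatted date strings are illustrative — the actual Utils helpers are not shown in this walkthrough:

```python
from datetime import date

def isHoliday(day, holidays):
    """Return True when `day` falls on a weekend or appears in
    `holidays`, a list of ISO-formatted date strings (assumed to
    be the shape of holidays.json)."""
    if day.weekday() >= 5:  # Saturday=5, Sunday=6
        return True
    return day.isoformat() in set(holidays)
```

Strategies and the scheduler can call a check like this before trading to skip non-trading days entirely.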
# file path: src/config/Config.py
def getTimestampsData():
    serverConfig = getServerConfig()
    timestampsFilePath = os.path.join(serverConfig['deployDir'], 'timestamps.json')
    if os.path.exists(timestampsFilePath) == False:
        return {}
    with open(timestampsFilePath, 'r') as timestampsFile:
        timestamps = json.loads(timestampsFile.read())
    return timestamps

getTimestampsData uses the already-familiar getServerConfig to locate the deploy directory, then looks for timestamps.json inside it: if the file is missing it returns an empty dictionary, otherwise it parses the JSON and returns the resulting dict. This is the read side of the timestamps persistence pair that complements saveTimestampsData — startup components such as Instruments.fetchInstrumentsFromServer and TradeManager.registerStrategy call it to learn the last-saved timestamps so they can decide whether to fetch fresh instrument data or resume without duplicating work. The existence check before the file I/O gives callers a safe empty state when no persisted timestamps are present, and relying on getServerConfig centralizes where deployment artifacts live.
# file path: src/config/Config.py
def saveTimestampsData(timestamps = {}):
    serverConfig = getServerConfig()
    timestampsFilePath = os.path.join(serverConfig['deployDir'], 'timestamps.json')
    with open(timestampsFilePath, 'w') as timestampsFile:
        json.dump(timestamps, timestampsFile, indent=2)
    print("saved timestamps data to file " + timestampsFilePath)

saveTimestampsData accepts a timestamps dictionary and persists it as pretty-printed JSON in timestamps.json under the deployment directory from the server configuration, printing a short console confirmation of where the file was written. It is the write-side companion to getTimestampsData: components such as Instruments.fetchInstrumentsFromServer and TradeManager.registerStrategy update or create timestamp entries at runtime and call saveTimestampsData so authentication state, last-instrument-fetch times and other cross-module timestamps survive process restarts. The observable side effects are the file write to deployDir and the console print. (One caveat: the mutable default argument `timestamps = {}` is a well-known Python pitfall; since the dict is only written here it is harmless, but `timestamps=None` with an in-function default would be safer.)
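The persistence pair is straightforward to exercise in isolation. A self-contained sketch of the same read/write roundtrip against a temporary deploy directory — the helper names are illustrative, not the project's:

```python
import json
import os
import tempfile

def saveTimestamps(deployDir, timestamps):
    """Write `timestamps` as pretty-printed JSON to
    <deployDir>/timestamps.json."""
    path = os.path.join(deployDir, "timestamps.json")
    with open(path, "w") as f:
        json.dump(timestamps, f, indent=2)

def loadTimestamps(deployDir):
    """Read timestamps.json from `deployDir`, returning {} when
    the file does not exist yet — the safe empty state."""
    path = os.path.join(deployDir, "timestamps.json")
    if not os.path.exists(path):
        return {}
    with open(path, "r") as f:
        return json.load(f)

deployDir = tempfile.mkdtemp()
assert loadTimestamps(deployDir) == {}  # nothing persisted yet
saveTimestamps(deployDir, {"lastInstrumentsFetch": "2024-01-25 09:00:00"})
```

Whatever one side writes, the other reads back unchanged across process restarts, which is exactly the contract the instrument-fetch and strategy-registration code depends on.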
# file path: src/strategies/BaseStrategy.py
import logging
import time
from datetime import datetime
from models.ProductType import ProductType
from core.Quotes import Quotes
from trademgmt.TradeManager import TradeManager
from utils.Utils import Utils

BaseStrategy pulls together a small set of runtime, domain and helper modules so derived strategies can read market data, classify orders, interact with the execution layer, and emit lifecycle diagnostics. logging provides the process-wide logger configured at startup, so strategy events, decisions and errors reach the established application log; time and datetime supply the timing and timestamp utilities used for scheduling, measuring durations and tagging events. ProductType is the domain enum that labels orders and positions with the correct product classification (for example intraday versus delivery), which TradeManager and the broker adapters rely on. Quotes is the normalized market data accessor that lets strategy logic fetch current prices without dealing with broker-specific payloads, TradeManager is the trade lifecycle coordinator used to place, modify, cancel and query orders and to reconcile trade state, and Utils supplies the shared helper routines that keep repeated utility logic out of strategy code. The import set mirrors the common pattern elsewhere in the codebase, with other strategy modules adding Direction or Instruments only when they need order polarity or instrument lookups.
# file path: src/strategies/BaseStrategy.py
class BaseStrategy:
def __init__(self, name):
self.name = name
self.enabled = True
self.productType = ProductType.MIS
self.symbols = []
self.slPercentage = 0
self.targetPercentage = 0
self.startTimestamp = Utils.getMarketStartTime()
self.stopTimestamp = None
self.squareOffTimestamp = None
self.capital = 10000
self.leverage = 1
self.maxTradesPerDay = 1
self.isFnO = False
self.capitalPerSet = 0
TradeManager.registerStrategy(self)
self.trades = TradeManager.getAllTradesByStrategy(self.name)
def getName(self):
return self.name
def isEnabled(self):
return self.enabled
def setDisabled(self):
self.enabled = False
def process(self):
logging.info(”BaseStrategy process is called.”)
pass
def calculateCapitalPerTrade(self):
leverage = self.leverage if self.leverage > 0 else 1
capitalPerTrade = int(self.capital * leverage / self.maxTradesPerDay)
return capitalPerTrade
def calculateLotsPerTrade(self):
if self.isFnO == False:
return 0
return int(self.capital / self.capitalPerSet)
def canTradeToday(self):
return True
    def run(self):
        if self.enabled == False:
            logging.warn("%s: Not going to run strategy as it is not enabled.", self.getName())
            return
        if Utils.isMarketClosedForTheDay():
            logging.warn("%s: Not going to run strategy as market is closed.", self.getName())
            return
        now = datetime.now()
        if now < Utils.getMarketStartTime():
            Utils.waitTillMarketOpens(self.getName())
        if self.canTradeToday() == False:
            logging.warn("%s: Not going to run strategy as it cannot be traded today.", self.getName())
            return
        now = datetime.now()
        if now < self.startTimestamp:
            waitSeconds = Utils.getEpoch(self.startTimestamp) - Utils.getEpoch(now)
            logging.info("%s: Waiting for %d seconds till strategy start timestamp is reached...", self.getName(), waitSeconds)
            if waitSeconds > 0:
                time.sleep(waitSeconds)
        while True:
            if Utils.isMarketClosedForTheDay():
                logging.warn("%s: Exiting the strategy as market closed.", self.getName())
                break
BaseStrategy defines the common lifecycle, defaults and small helpers that every concrete strategy builds on, and it wires each strategy instance into TradeManager so the engine can track and persist strategy-specific trades. On construction BaseStrategy captures a name and sets sensible defaults for enablement, product type, symbols list, stop-loss and target percentages, start/stop/square-off timestamps, capital, leverage, max trades per day, the F&O flag and capital-per-set; it then registers the instance with TradeManager.registerStrategy and populates its trades list by calling TradeManager.getAllTradesByStrategy so any previously persisted intraday trades are available to the strategy. It exposes simple identity and enablement accessors (getName, isEnabled, setDisabled) and a process placeholder that logs invocation and is intended to be overridden by derived classes such as BNFORB30Min, OptionSelling and SampleStrategy. Two utility calculators are provided: calculateCapitalPerTrade computes an integer allocation per trade using leverage and the configured maxTradesPerDay, and calculateLotsPerTrade returns zero for non-F&O strategies or derives lot counts from capital and capitalPerSet for F&O. canTradeToday returns true by default so strategies can override business-day logic. The run method orchestrates the strategy's runtime gating: it skips execution when disabled, consults market state via Utils.isMarketClosedForTheDay and uses Utils.waitTillMarketOpens and Utils.getMarketStartTime to defer until market open, checks canTradeToday, sleeps until the configured startTimestamp using Utils.getEpoch when necessary, and then enters a loop that exits once Utils reports the market is closed. The class therefore centralizes the repeated preconditions and scheduling behavior that concrete strategies reuse, while leaving trade generation, placement and per-tick decision methods to the subclasses and to the rest of the TradeManager/Utils ecosystem.
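The allocation arithmetic is easy to sanity-check in isolation. Here is a minimal standalone sketch of the two calculators (plain functions mirroring the methods above, outside the class; the function names are illustrative, not part of the project):

```python
def capital_per_trade(capital, leverage, max_trades_per_day):
    # Mirror of BaseStrategy.calculateCapitalPerTrade: spread leveraged
    # capital evenly across the allowed number of trades, truncated to int.
    leverage = leverage if leverage > 0 else 1
    return int(capital * leverage / max_trades_per_day)

def lots_per_trade(capital, capital_per_set, is_fno):
    # Mirror of calculateLotsPerTrade: only F&O strategies trade in lots.
    if not is_fno:
        return 0
    return int(capital / capital_per_set)

# With capital=10000, leverage=1 and 2 trades/day, each trade gets 5000.
print(capital_per_trade(10000, 1, 2))       # 5000
# An F&O strategy with 100000 capital and 35000 per lot-set trades 2 lots.
print(lots_per_trade(100000, 35000, True))  # 2
```

Note the integer truncation: any remainder of capital simply stays unallocated, which keeps the per-trade budget conservative.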
# file path: src/utils/Utils.py
import math
import uuid
import time
import logging
import calendar
from datetime import datetime, timedelta
from config.Config import getHolidays
from models.Direction import Direction
from trademgmt.TradeState import TradeState
The imports here bring together a small set of standard-library utilities and a couple of domain types that the Utils helpers rely on across the framework: math supplies numeric helpers for calculations used in normalization and risk math, uuid is used when utilities need to create unique identifiers for ephemeral objects or test fixtures, and time plus datetime and timedelta provide the runtime timekeeping and arithmetic needed for timestamping, interval calculations and market session logic; calendar is pulled in to assist with weekday/holiday computations that are combined with the project's holiday data from getHolidays (getHolidays was already used elsewhere to load the market calendar). logging is imported so utility functions can emit diagnostics consistent with the rest of the system. Finally, Direction and TradeState are the domain enums the utilities use to interpret and normalize trade-side semantics and lifecycle state when helpers process trades or build normalized quote/position views. Compared to the similar import lists found in strategy and core modules, which often import ProductType, Quotes, TradeManager or the central Utils class, this file focuses on foundational, low-level helpers (stdlib time/math/uuid/calendar) plus the minimal domain types (Direction, TradeState) and holiday loader needed to keep shared normalization and date logic consistent across strategies and tests.
# file path: src/utils/Utils.py
class Utils:
    dateFormat = "%Y-%m-%d"
    timeFormat = "%H:%M:%S"
    dateTimeFormat = "%Y-%m-%d %H:%M:%S"

    @staticmethod
    def roundOff(price):
        return round(price, 2)

    @staticmethod
    def roundToNSEPrice(price):
        # Snap up to the nearest 0.05 tick: scale so one tick = 1 unit,
        # take the ceiling, then rescale.
        x = round(price, 2) * 20
        y = math.ceil(x)
        return y / 20

    @staticmethod
    def isMarketOpen():
        if Utils.isTodayHoliday():
            return False
        now = datetime.now()
        marketStartTime = Utils.getMarketStartTime()
        marketEndTime = Utils.getMarketEndTime()
        return now >= marketStartTime and now <= marketEndTime

    @staticmethod
    def isMarketClosedForTheDay():
        # True only AFTER market close (or on a holiday); False before open
        if Utils.isTodayHoliday():
            return True
        now = datetime.now()
        marketEndTime = Utils.getMarketEndTime()
        return now > marketEndTime

    @staticmethod
    def waitTillMarketOpens(context):
        nowEpoch = Utils.getEpoch(datetime.now())
        marketStartTimeEpoch = Utils.getEpoch(Utils.getMarketStartTime())
        waitSeconds = marketStartTimeEpoch - nowEpoch
        if waitSeconds > 0:
            logging.info("%s: Waiting for %d seconds till market opens...", context, waitSeconds)
            time.sleep(waitSeconds)

    @staticmethod
    def getEpoch(datetimeObj = None):
        if datetimeObj == None:
            datetimeObj = datetime.now()
        epochSeconds = datetime.timestamp(datetimeObj)
        return int(epochSeconds)

    @staticmethod
    def getMarketStartTime(dateTimeObj = None):
        return Utils.getTimeOfDay(9, 15, 0, dateTimeObj)

    @staticmethod
    def getMarketEndTime(dateTimeObj = None):
        return Utils.getTimeOfDay(15, 30, 0, dateTimeObj)

    @staticmethod
    def getTimeOfDay(hours, minutes, seconds, dateTimeObj = None):
        if dateTimeObj == None:
            dateTimeObj = datetime.now()
        dateTimeObj = dateTimeObj.replace(hour=hours, minute=minutes, second=seconds, microsecond=0)
        return dateTimeObj

    @staticmethod
    def getTimeOfToDay(hours, minutes, seconds):
        return Utils.getTimeOfDay(hours, minutes, seconds, datetime.now())

    @staticmethod
    def getTodayDateStr():
        return Utils.convertToDateStr(datetime.now())
Utils centralizes small, commonly used helpers around numeric rounding and market-time calculations that other mid-level modules and strategies rely on to make time- and price-based decisions. It defines the standard date, time and datetime formats used across the project and provides two rounding helpers: roundOff performs a simple two-decimal currency-style rounding used by P&L and price reporting, and roundToNSEPrice snaps an arbitrary price up to the next valid NSE tick by scaling, applying a ceiling and rescaling (effectively enforcing a 0.05 tick granularity). The market-status helpers drive the runtime behaviour of strategies and managers: isMarketOpen first consults the holiday logic (via the holiday helpers shown in the next part of Utils) and then compares the current instant to the market window boundaries; isMarketClosedForTheDay flags that the remainder of the day is non-tradable, either because of a holiday or because the current time is past the market close. waitTillMarketOpens computes the seconds until the next market start and sleeps while emitting an informational log using the provided context string; it relies on getEpoch to convert datetimes to integer epoch seconds. getMarketStartTime and getMarketEndTime are convenience accessors that produce datetimes set to the official market open and close times by delegating to getTimeOfDay, which normalizes any passed datetime (or now) to a specific hour/minute/second with microseconds cleared. getTimeOfToDay is a thin convenience wrapper for constructing those day-specific datetimes for the current day, and getTodayDateStr formats the current date into the project's canonical date string via convertToDateStr.
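The scale-ceil-rescale trick in roundToNSEPrice is easiest to see with concrete numbers. A standalone sketch of the same arithmetic (0.05-tick granularity, always snapping upward; the function name is illustrative):

```python
import math

def round_to_nse_price(price):
    # Same arithmetic as Utils.roundToNSEPrice: round to 2 decimals,
    # scale so one 0.05 tick becomes one integer unit, ceil, rescale.
    x = round(price, 2) * 20
    y = math.ceil(x)
    return y / 20

# An off-tick price is snapped UP to the next 0.05 boundary...
print(round_to_nse_price(100.337))  # 100.35
# ...while a price already on a tick boundary stays put.
print(round_to_nse_price(100.25))   # 100.25
```

Because of the ceiling, the helper never rounds a price down, which is the conservative choice when the result is used as a limit price for a buy order.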
# file path: src/utils/Utils.py
    @staticmethod
    def convertToDateStr(datetimeObj):
        return datetimeObj.strftime(Utils.dateFormat)

    @staticmethod
    def isHoliday(datetimeObj):
        dayOfWeek = calendar.day_name[datetimeObj.weekday()]
        if dayOfWeek == 'Saturday' or dayOfWeek == 'Sunday':
            return True
        dateStr = Utils.convertToDateStr(datetimeObj)
        holidays = getHolidays()
        return dateStr in holidays

    @staticmethod
    def isTodayHoliday():
        return Utils.isHoliday(datetime.now())

    @staticmethod
    def generateTradeID():
        return str(uuid.uuid4())

    @staticmethod
    def calculateTradePnl(trade):
        if trade.tradeState == TradeState.ACTIVE:
            if trade.cmp > 0:
                if trade.direction == Direction.LONG:
                    trade.pnl = Utils.roundOff(trade.filledQty * (trade.cmp - trade.entry))
                else:
                    trade.pnl = Utils.roundOff(trade.filledQty * (trade.entry - trade.cmp))
        else:
            if trade.exit > 0:
                if trade.direction == Direction.LONG:
                    trade.pnl = Utils.roundOff(trade.filledQty * (trade.exit - trade.entry))
                else:
                    trade.pnl = Utils.roundOff(trade.filledQty * (trade.entry - trade.exit))
        tradeValue = trade.entry * trade.filledQty
        if tradeValue > 0:
            trade.pnlPercentage = Utils.roundOff(trade.pnl * 100 / tradeValue)
        return trade

    @staticmethod
    def prepareMonthlyExpiryFuturesSymbol(inputSymbol):
        expiryDateTime = Utils.getMonthlyExpiryDayDate()
        expiryDateMarketEndTime = Utils.getMarketEndTime(expiryDateTime)
        now = datetime.now()
        if now > expiryDateMarketEndTime:
            # Current month's expiry has passed; roll to next month
            expiryDateTime = Utils.getMonthlyExpiryDayDate(now + timedelta(days=20))
        year2Digits = str(expiryDateTime.year)[2:]
        monthShort = calendar.month_name[expiryDateTime.month].upper()[0:3]
        futureSymbol = inputSymbol + year2Digits + monthShort + 'FUT'
        logging.info('prepareMonthlyExpiryFuturesSymbol[%s] = %s', inputSymbol, futureSymbol)
        return futureSymbol

    @staticmethod
    def prepareWeeklyOptionsSymbol(inputSymbol, strike, optionType, numWeeksPlus = 0):
        expiryDateTime = Utils.getWeeklyExpiryDayDate()
        todayMarketStartTime = Utils.getMarketStartTime()
        expiryDayMarketEndTime = Utils.getMarketEndTime(expiryDateTime)
        if numWeeksPlus > 0:
            expiryDateTime = expiryDateTime + timedelta(days=numWeeksPlus * 7)
            expiryDateTime = Utils.getWeeklyExpiryDayDate(expiryDateTime)
        if todayMarketStartTime > expiryDayMarketEndTime:
            expiryDateTime = expiryDateTime + timedelta(days=6)
            expiryDateTime = Utils.getWeeklyExpiryDayDate(expiryDateTime)
convertToDateStr is a simple formatter that turns a Python datetime into the project's canonical date string using the dateFormat constant defined at the top of the Utils class, so downstream code always compares dates in the same textual form. isHoliday performs the project's holiday logic for any datetime: it first rejects weekends by checking the weekday name, then converts the datetime to the canonical date string via convertToDateStr and consults the holidays list returned by getHolidays (which, as seen earlier, reads the holidays.json file); if either condition matches it reports the day as a holiday. isTodayHoliday is just a convenience wrapper that asks isHoliday about the current moment. generateTradeID returns a unique identifier for a new Trade by producing a UUID4 string, which Trade uses when instantiated. calculateTradePnl accepts a Trade object and computes and assigns its running PnL and PnL percentage: when the trade is active it compares the current market price against entry depending on the trade direction, and when the trade has exited it uses the recorded exit price versus entry; numeric results are normalized with Utils.roundOff and the percentage is computed against the notional entry value. prepareMonthlyExpiryFuturesSymbol builds a monthly futures symbol for a given underlying by first finding the monthly expiry date via Utils.getMonthlyExpiryDayDate (defined further below), then checking whether the current time is beyond that expiry's market end and, if so, advancing to the next month before formatting the symbol; it forms the suffix from the two-digit year and the three-letter uppercase month name, appends the FUT marker, and emits a log entry with the produced symbol.
prepareWeeklyOptionsSymbol begins the weekly-option symbol workflow by resolving the appropriate weekly expiry date using Utils.getWeeklyExpiryDayDate (defined further below), compares today's market start against the expiry day's market end to decide whether to roll the expiry forward (and supports an explicit numWeeksPlus offset by shifting weeks and re-resolving the weekly expiry), and then proceeds, in the continuation below, to construct the option symbol string in the appropriate monthly-or-day-encoded format; the function therefore orchestrates expiry selection and defers the final symbol-encoding details to that continuation.
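calculateTradePnl boils down to a signed quantity-times-price-difference. A simplified sketch of that arithmetic, with plain arguments standing in for the project's Trade object and Direction enum:

```python
def trade_pnl(direction, entry, price, filled_qty):
    # Same arithmetic as Utils.calculateTradePnl: longs gain when the
    # current (or exit) price is above entry, shorts when it is below.
    if direction == 'LONG':
        return round(filled_qty * (price - entry), 2)
    return round(filled_qty * (entry - price), 2)

# Long 50 shares entered at 100.0, now trading at 103.5
print(trade_pnl('LONG', 100.0, 103.5, 50))   # 175.0
# The same move hurts an equally sized short
print(trade_pnl('SHORT', 100.0, 103.5, 50))  # -175.0
# PnL percentage against the entry notional, as in the original
print(round(175.0 * 100 / (100.0 * 50), 2))  # 3.5
```

The percentage is computed against entry price times filled quantity, i.e. the capital actually deployed, not against the leveraged exposure.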
# file path: src/utils/Utils.py
        expiryDateTimeMonthly = Utils.getMonthlyExpiryDayDate()
        weekAndMonthExpirySame = False
        if expiryDateTime == expiryDateTimeMonthly:
            weekAndMonthExpirySame = True
            logging.info('Weekly and Monthly expiry is same for %s', expiryDateTime)
        year2Digits = str(expiryDateTime.year)[2:]
        optionSymbol = None
        if weekAndMonthExpirySame == True:
            # Monthly-style encoding: two-digit year + 3-letter month
            monthShort = calendar.month_name[expiryDateTime.month].upper()[0:3]
            optionSymbol = inputSymbol + str(year2Digits) + monthShort + str(strike) + optionType.upper()
        else:
            # Weekly-style encoding: single-char month code + zero-padded day
            m = expiryDateTime.month
            d = expiryDateTime.day
            mStr = str(m)
            if m == 10:
                mStr = "O"
            elif m == 11:
                mStr = "N"
            elif m == 12:
                mStr = "D"
            dStr = ("0" + str(d)) if d < 10 else str(d)
            optionSymbol = inputSymbol + str(year2Digits) + mStr + dStr + str(strike) + optionType.upper()
        logging.info('prepareWeeklyOptionsSymbol[%s, %d, %s, %d] = %s', inputSymbol, strike, optionType, numWeeksPlus, optionSymbol)
        return optionSymbol
    @staticmethod
    def getMonthlyExpiryDayDate(datetimeObj = None):
        if datetimeObj == None:
            datetimeObj = datetime.now()
        year = datetimeObj.year
        month = datetimeObj.month
        lastDay = calendar.monthrange(year, month)[1]
        datetimeExpiryDay = datetime(year, month, lastDay)
        while calendar.day_name[datetimeExpiryDay.weekday()] != 'Thursday':
            datetimeExpiryDay = datetimeExpiryDay - timedelta(days=1)
        while Utils.isHoliday(datetimeExpiryDay) == True:
            datetimeExpiryDay = datetimeExpiryDay - timedelta(days=1)
        datetimeExpiryDay = Utils.getTimeOfDay(0, 0, 0, datetimeExpiryDay)
        return datetimeExpiryDay

    @staticmethod
    def getWeeklyExpiryDayDate(dateTimeObj = None):
        if dateTimeObj == None:
            dateTimeObj = datetime.now()
        daysToAdd = 0
        if dateTimeObj.weekday() >= 3:
            daysToAdd = -1 * (dateTimeObj.weekday() - 3)
        else:
            daysToAdd = 3 - dateTimeObj.weekday()
        datetimeExpiryDay = dateTimeObj + timedelta(days=daysToAdd)
        while Utils.isHoliday(datetimeExpiryDay) == True:
            datetimeExpiryDay = datetimeExpiryDay - timedelta(days=1)
        datetimeExpiryDay = Utils.getTimeOfDay(0, 0, 0, datetimeExpiryDay)
        return datetimeExpiryDay

    @staticmethod
    def isTodayWeeklyExpiryDay():
        expiryDate = Utils.getWeeklyExpiryDayDate()
        todayDate = Utils.getTimeOfToDay(0, 0, 0)
        return expiryDate == todayDate
This final part of Utils provides the holiday-aware expiry date calculations and the branching logic used to build option symbols that strategy modules rely on when they need to reference weekly or monthly expiries. The prepareWeeklyOptionsSymbol logic first asks Utils.getMonthlyExpiryDayDate for the month's expiry and compares it to the weekly expiry it was already computing; if the weekly and monthly expiries coincide it formats the option trading symbol using the two-digit year plus a three-letter uppercase month abbreviation, strike and option side; otherwise it formats the symbol using a compact encoding: a single-character month code (the month number as a digit, with O, N and D standing in for October, November and December), a zero-padded two-digit day, then the strike and the uppercased option type.
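Stripped of the holiday checks, the two expiry helpers are pure date arithmetic. A holiday-free sketch of both (last_thursday and weekly_expiry_candidate are illustrative names, not the project's):

```python
import calendar
from datetime import datetime, timedelta

def last_thursday(year, month):
    # getMonthlyExpiryDayDate's core: start at the month's last day and
    # walk backwards until a Thursday (holiday handling omitted here).
    day = calendar.monthrange(year, month)[1]
    d = datetime(year, month, day)
    while d.weekday() != 3:  # Monday=0 ... Thursday=3
        d -= timedelta(days=1)
    return d

def weekly_expiry_candidate(d):
    # getWeeklyExpiryDayDate's offset: jump to this week's Thursday.
    # Note Fri/Sat/Sun map BACK to the Thursday just past; the caller in
    # prepareWeeklyOptionsSymbol detects that case and rolls forward.
    wd = d.weekday()
    days_to_add = -(wd - 3) if wd >= 3 else 3 - wd
    return d + timedelta(days=days_to_add)

print(last_thursday(2024, 3).date())                          # 2024-03-28
print(weekly_expiry_candidate(datetime(2024, 3, 25)).date())  # 2024-03-28
```

March 2024 is a convenient check: the month ends on a Sunday, so the walk-back lands on Thursday the 28th, and Monday the 25th maps forward to the same Thursday.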
# file path: src/trademgmt/TradeManager.py
import os
import logging
import time
import json
from datetime import datetime
from config.Config import getServerConfig
from core.Controller import Controller
from ticker.ZerodhaTicker import ZerodhaTicker
from trademgmt.Trade import Trade
from trademgmt.TradeState import TradeState
from trademgmt.TradeExitReason import TradeExitReason
from trademgmt.TradeEncoder import TradeEncoder
from ordermgmt.ZerodhaOrderManager import ZerodhaOrderManager
from ordermgmt.OrderInputParams import OrderInputParams
from ordermgmt.OrderModifyParams import OrderModifyParams
from ordermgmt.Order import Order
from models.OrderType import OrderType
from models.OrderStatus import OrderStatus
from models.Direction import Direction
from utils.Utils import Utils
TradeManager pulls together a small set of runtime, domain and broker-specific building blocks so it can translate strategy signals into executable orders and durable trade records. The standard Python imports os, logging, time, json and datetime provide filesystem access, runtime diagnostics, simple timing, JSON serialization and timestamping needed for persisting trade state and measuring lifecycle events. getServerConfig is used to locate runtime configuration and deploy paths (recall getServerConfig from earlier), while Controller gives access to the core runtime context that TradeManager uses to interact with other engine components. ZerodhaTicker and ZerodhaOrderManager are the broker-specific adapters for market data and execution that TradeManager will call into when it needs live quotes or to place/modify/cancel orders. The trade domain is represented by Trade, TradeState, TradeExitReason and TradeEncoder so TradeManager can create trade objects, record state transitions and serialize them for storage or messaging. OrderInputParams and OrderModifyParams are the parameter wrappers TradeManager constructs when building new orders or issuing modifications, and Order along with OrderType, OrderStatus and Direction are the shared order-model enums and structures used throughout the engine to track the lifecycle of individual execution requests. Finally, Utils supplies common helpers (formatting, rounding, time helpers) that TradeManager uses for small transformations. This import set follows the same project pattern seen elsewhere, reusing logging, order models and Utils across modules, but differs from the other import blocks by combining configuration and controller access with the trade lifecycle classes and broker-specific order/ticker adapters, reflecting TradeManager's role as the glue between strategy signals, broker execution and persistent trade bookkeeping.
# file path: src/trademgmt/TradeManager.py
class TradeManager:
    ticker = None
    trades = []
    strategyToInstanceMap = {}
    symbolToCMPMap = {}
    intradayTradesDir = None
    registeredSymbols = []

    @staticmethod
    def run():
        if Utils.isTodayHoliday():
            logging.info("Cannot start TradeManager as Today is Trading Holiday.")
            return
        if Utils.isMarketClosedForTheDay():
            logging.info("Cannot start TradeManager as Market is closed for the day.")
            return
        Utils.waitTillMarketOpens("TradeManager")
        serverConfig = getServerConfig()
        tradesDir = os.path.join(serverConfig['deployDir'], 'trades')
        TradeManager.intradayTradesDir = os.path.join(tradesDir, Utils.getTodayDateStr())
        if os.path.exists(TradeManager.intradayTradesDir) == False:
            logging.info('TradeManager: Intraday Trades Directory %s does not exist. Hence going to create.', TradeManager.intradayTradesDir)
            os.makedirs(TradeManager.intradayTradesDir)
        brokerName = Controller.getBrokerName()
        if brokerName == "zerodha":
            TradeManager.ticker = ZerodhaTicker()
        TradeManager.ticker.startTicker()
        TradeManager.ticker.registerListener(TradeManager.tickerListener)
        time.sleep(2)  # let the ticker connection stabilize before loading trades
        TradeManager.loadAllTradesFromFile()
        while True:
            if Utils.isMarketClosedForTheDay():
                logging.info('TradeManager: Stopping TradeManager as market closed.')
                break
            try:
                TradeManager.fetchAndUpdateAllTradeOrders()
                TradeManager.trackAndUpdateAllTrades()
            except Exception as e:
                logging.exception("Exception in TradeManager Main thread")
            TradeManager.saveAllTradesToFile()
            time.sleep(30)
            logging.info('TradeManager: Main thread woke up..')

    @staticmethod
    def registerStrategy(strategyInstance):
        TradeManager.strategyToInstanceMap[strategyInstance.getName()] = strategyInstance

    @staticmethod
    def loadAllTradesFromFile():
        tradesFilepath = os.path.join(TradeManager.intradayTradesDir, 'trades.json')
        if os.path.exists(tradesFilepath) == False:
            logging.warn('TradeManager: loadAllTradesFromFile() Trades Filepath %s does not exist', tradesFilepath)
            return
        TradeManager.trades = []
        tFile = open(tradesFilepath, 'r')
        tradesData = json.loads(tFile.read())
        for tr in tradesData:
            trade = TradeManager.convertJSONToTrade(tr)
            logging.info('loadAllTradesFromFile trade => %s', trade)
            TradeManager.trades.append(trade)
            if trade.tradingSymbol not in TradeManager.registeredSymbols:
                TradeManager.ticker.registerSymbols([trade.tradingSymbol])
                TradeManager.registeredSymbols.append(trade.tradingSymbol)
TradeManager implements the lifecycle controller that glues strategies to execution: the TradeManager class holds runtime state (a ticker instance, the in-memory list of Trade objects, a map from strategy name to strategy instance, a symbol-to-current-market-price map, the intraday trades directory path, and a list of symbols already registered with the market feed). The run method is the orchestrator: it first consults Utils to skip startup on market holidays or a closed market and then waits until market open when needed; it then loads the server configuration via getServerConfig to derive a deploy-level trades directory and builds an intraday folder named for today, creating it if absent. After discovering the broker via Controller.getBrokerName it instantiates the appropriate ticker adapter (the code uses ZerodhaTicker for the zerodha broker), starts the ticker and registers TradeManager.tickerListener as a tick consumer, sleeps briefly to let the feed stabilize, and then calls loadAllTradesFromFile to hydrate any persisted trades for today. loadAllTradesFromFile looks for a trades.json under the intraday directory, logs and returns if it is missing, otherwise clears the in-memory trades list, parses the JSON, converts each JSON object into a Trade using convertJSONToTrade, appends them to TradeManager.trades, logs each load and registers the trade's tradingSymbol with the ticker if that symbol hasn't already been registered. After initialization run enters a loop that stops once Utils reports the market closed; each cycle it calls fetchAndUpdateAllTradeOrders and trackAndUpdateAllTrades to reconcile live order state and advance trade state (those behaviors are implemented in other parts of TradeManager), catches and logs exceptions from the cycle, persists the current trades via saveAllTradesToFile, then sleeps for a fixed interval before repeating.
The registerStrategy helper simply records a strategy instance into strategyToInstanceMap by its BaseStrategy name so incoming ticks and lifecycle actions can route to the correct strategy.
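The persistence cycle around trades.json is a plain JSON round-trip. A sketch with dicts standing in for Trade objects and a temporary directory standing in for the deploy dir (the real code serializes via TradeEncoder and convertJSONToTrade, neither of which is reproduced here):

```python
import json
import os
import tempfile

def save_trades(trades_dir, trades):
    # Mirrors the save half of the cycle: one trades.json per intraday dir
    os.makedirs(trades_dir, exist_ok=True)
    with open(os.path.join(trades_dir, 'trades.json'), 'w') as f:
        f.write(json.dumps(trades))

def load_trades(trades_dir):
    # Mirrors loadAllTradesFromFile: a missing file just means no trades yet
    path = os.path.join(trades_dir, 'trades.json')
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.loads(f.read())

with tempfile.TemporaryDirectory() as deploy_dir:
    day_dir = os.path.join(deploy_dir, 'trades', '2024-03-28')
    save_trades(day_dir, [{'tradeID': 'abc-123', 'tradingSymbol': 'SBIN'}])
    print(load_trades(day_dir))  # [{'tradeID': 'abc-123', 'tradingSymbol': 'SBIN'}]
```

Saving on every loop iteration means a crash loses at most 30 seconds of state, and the date-named directory gives a free per-day audit trail.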
# file path: src/ticker/BaseTicker.py
import logging
from core.Controller import Controller
It imports Python's logging so BaseTicker can emit structured runtime diagnostics for tick processing, error conditions, and lifecycle events, and it imports Controller from core.Controller so BaseTicker can delegate decision-making and lifecycle control to a Controller instance rather than embedding that logic itself. In the project's architecture, that mirrors the separation of concerns: BaseTicker provides the feed, event handling, and lifecycle hooks while Controller encapsulates the trading control logic. Similar import patterns elsewhere show the same split: some modules import Controller alongside domain models like Quote when they need both control logic and data objects, while broker adapters and startup code import configuration helpers and broker-specific login classes such as getBrokerAppConfig, BrokerAppDetails, or ZerodhaLogin when they must perform connectivity and authentication. This file keeps its imports minimal because its role is to standardize tick flow and controller integration rather than handle broker-specific or model-level responsibilities.
# file path: src/ticker/BaseTicker.py
class BaseTicker:
    def __init__(self, broker):
        self.broker = broker
        self.brokerLogin = Controller.getBrokerLogin()
        self.ticker = None
        self.tickListeners = []

    def startTicker(self):
        pass

    def stopTicker(self):
        pass

    def registerListener(self, listener):
        self.tickListeners.append(listener)

    def registerSymbols(self, symbols):
        pass

    def unregisterSymbols(self, symbols):
        pass

    def onNewTicks(self, ticks):
        for tick in ticks:
            for listener in self.tickListeners:
                try:
                    listener(tick)
                except Exception as e:
                    logging.error('BaseTicker: Exception from listener callback function. Error => %s', str(e))

    def onConnect(self):
        logging.info('Ticker connection successful.')

    def onDisconnect(self, code, reason):
        logging.error('Ticker got disconnected. code = %d, reason = %s', code, reason)

    def onError(self, code, reason):
        logging.error('Ticker errored out. code = %d, reason = %s', code, reason)

    def onReconnect(self, attemptsCount):
        logging.warn('Ticker reconnecting.. attemptsCount = %d', attemptsCount)

    def onMaxReconnectsAttempt(self):
        logging.error('Ticker max auto reconnects attempted and giving up..')

    def onOrderUpdate(self, data):
        pass
BaseTicker provides the common runtime contract and shared behavior that lets TradeManager, Test and broker-specific tickers plug into a single, predictable tick delivery pipeline so the rest of the algo framework can treat different broker adapters uniformly. On construction BaseTicker records the broker identifier you passed and grabs the shared brokerLogin object from Controller.getBrokerLogin so every ticker instance starts with the same authenticated context; it also initializes an internal tickListeners list that other components register callbacks into. The startTicker, stopTicker, registerSymbols, unregisterSymbols and onOrderUpdate methods are defined as no-ops here because concrete tickers such as ZerodhaTicker implement the actual connection, subscription and order-update behavior; ZerodhaTicker, for example, uses the brokerLogin to obtain app credentials and access tokens and wires its websocket callbacks before calling into BaseTicker's delivery path. registerListener appends a listener callback to the internal list so consumers like TradeManager.tickerListener or Test.tickerListener can subscribe to live ticks. onNewTicks is the distributor: it accepts a sequence of normalized tick objects and iterates through each tick and each registered listener, invoking the callbacks inside a try/except and logging any listener exceptions so a faulty strategy callback does not collapse the feed. The connection lifecycle hooks onConnect, onDisconnect, onError, onReconnect and onMaxReconnectsAttempt provide consistent logging and a single place for runtime diagnostics; subclasses map their broker-specific websocket events onto these hooks (ZerodhaTicker maps its on_connect/on_close/on_error/on_reconnect/on_noreconnect to these).
In short, BaseTicker centralizes listener management, standardized event logging and the handshake to Controller-provided authentication, while delegating transport, symbol-to-token mapping and tick normalization to broker-specific subclasses so the rest of the system can consume a uniform tick stream.
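The fan-out in onNewTicks is a standard observer pattern, and the per-listener try/except is precisely what keeps one faulty strategy callback from starving the others. A minimal self-contained demonstration (MiniTicker is a stand-in, not a project class):

```python
import logging

class MiniTicker:
    # Stripped-down version of BaseTicker's listener management
    def __init__(self):
        self.tick_listeners = []

    def register_listener(self, listener):
        self.tick_listeners.append(listener)

    def on_new_ticks(self, ticks):
        for tick in ticks:
            for listener in self.tick_listeners:
                try:
                    listener(tick)
                except Exception as e:
                    # One faulty listener must not break the feed for the rest
                    logging.error('listener failed: %s', e)

def faulty_listener(tick):
    raise ValueError('boom')

received = []
ticker = MiniTicker()
ticker.register_listener(faulty_listener)   # registered first, always raises
ticker.register_listener(received.append)   # still receives every tick
ticker.on_new_ticks([{'symbol': 'SBIN', 'ltp': 790.5}])
print(received)  # [{'symbol': 'SBIN', 'ltp': 790.5}]
```

Without the try/except, the first listener's exception would abort the inner loop and the second listener would silently miss the tick.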
# file path: src/core/Controller.py
import logging
from config.Config import getBrokerAppConfig
from models.BrokerAppDetails import BrokerAppDetails
from loginmgmt.ZerodhaLogin import ZerodhaLogin
Controller imports the standard logging facility so it can emit runtime diagnostics and integrate with the project's logging sink. It pulls getBrokerAppConfig from config.Config; analogous to the getSystemConfig helper seen earlier, getBrokerAppConfig is the config accessor that returns the broker-specific runtime settings Controller needs to decide which credentials and adapter to instantiate. BrokerAppDetails is the model that encapsulates that broker application metadata and credential fields so Controller can pass a typed object around the rest of the system. ZerodhaLogin is the Zerodha-specific login/session manager that implements the authentication flow, session tokens and any broker-specific handshake logic. Together these imports let Controller load broker configuration, populate a BrokerAppDetails instance, and drive ZerodhaLogin to obtain an authenticated connection that Controller supplies to Quotes, BaseOrderManager, and Instruments. This follows the common project pattern where modules import logging, a configuration getter and a login/adapter class; other files use the same pattern but swap in getSystemConfig, KiteConnect or BaseLogin where appropriate, whereas Controller specifically uses getBrokerAppConfig and ZerodhaLogin because it centralizes Zerodha connection setup.
# file path: src/core/Controller.py
class Controller:
    brokerLogin = None
    brokerName = None

    @staticmethod
    def handleBrokerLogin(args):
        brokerAppConfig = getBrokerAppConfig()
        brokerAppDetails = BrokerAppDetails(brokerAppConfig['broker'])
        brokerAppDetails.setClientID(brokerAppConfig['clientID'])
        brokerAppDetails.setAppKey(brokerAppConfig['appKey'])
        brokerAppDetails.setAppSecret(brokerAppConfig['appSecret'])
        logging.info('handleBrokerLogin appKey %s', brokerAppDetails.appKey)
        Controller.brokerName = brokerAppDetails.broker
        if Controller.brokerName == 'zerodha':
            Controller.brokerLogin = ZerodhaLogin(brokerAppDetails)
        redirectUrl = Controller.brokerLogin.login(args)
        return redirectUrl

    @staticmethod
    def getBrokerLogin():
        return Controller.brokerLogin

    @staticmethod
    def getBrokerName():
        return Controller.brokerName
Controller is the single orchestrator that centralizes broker configuration and authentication so the rest of the framework can obtain one shared broker session and its credentials. When an external login flow is triggered (for example by BrokerLoginAPI handing request arguments to the controller), handleBrokerLogin reads the broker application configuration from disk, creates a BrokerAppDetails instance and populates it with the configured client identifier, application key and secret, and records the broker name on a class-level attribute. It then selects and instantiates the broker-specific login implementation based on that broker name (for example, ZerodhaLogin when the broker is zerodha) and delegates the authentication flow to that login object; the broker login implementation performs the provider-specific session exchange, sets the broker handle and access token on the BaseLogin-derived instance, and returns a redirect URL which handleBrokerLogin forwards back to the caller. Controller exposes two simple accessors, getBrokerLogin and getBrokerName, that return the stored broker-login instance and the broker identifier respectively; other parts of the system call those accessors (BaseOrderManager to obtain the broker handle for order placement, BaseTicker to attach market feeds, API endpoints for holdings and positions, Instruments for instrument fetches, and TradeManager/strategies for runtime needs) so a single authenticated session and configuration are reused across the application. The design uses class-level brokerLogin and brokerName as singleton-like shared state so the authentication result and broker details become the single source of truth for the rest of the modules.
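The `if Controller.brokerName == 'zerodha'` branch is the single point where additional brokers would plug in. One hedged way to generalize it is a name-to-class registry; nothing below exists in the project, and FakeLogin plus LOGIN_CLASSES are purely illustrative stand-ins:

```python
class FakeLogin:
    # Hypothetical stand-in for a BaseLogin subclass such as ZerodhaLogin
    def __init__(self, details):
        self.details = details

    def login(self, args):
        return '/login/redirect/' + self.details['broker']

# Hypothetical registry replacing the if/elif chain in handleBrokerLogin
LOGIN_CLASSES = {'zerodha': FakeLogin}

def make_broker_login(details):
    cls = LOGIN_CLASSES.get(details['broker'])
    if cls is None:
        raise ValueError('Unsupported broker: ' + details['broker'])
    return cls(details)

print(make_broker_login({'broker': 'zerodha'}).login({}))  # /login/redirect/zerodha
```

With a registry, adding a broker becomes one import and one dictionary entry rather than another branch inside Controller.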
# file path: src/models/BrokerAppDetails.py
class BrokerAppDetails:
    def __init__(self, broker):
        self.broker = broker
        self.appKey = None
        self.appSecret = None

    def setClientID(self, clientID):
        self.clientID = clientID

    def setAppKey(self, appKey):
        self.appKey = appKey

    def setAppSecret(self, appSecret):
        self.appSecret = appSecret
BrokerAppDetails is a small value object that holds a broker identifier and the credentials the Controller needs to construct an authenticator and broker session. Controller.handleBrokerLogin loads the broker configuration via getBrokerAppConfig and then constructs a BrokerAppDetails with the broker name; it then calls setClientID, setAppKey and setAppSecret to populate the instance. Those attributes (broker, clientID, appKey and appSecret) are the pieces ZerodhaLogin (which extends BaseLogin) will read through the BaseLogin pathway when establishing a broker handle and access token, and they are what other parts of the system rely on when the Controller supplies broker-specific parameters for market data, order execution and trade lifecycle management. The methods on BrokerAppDetails are simple setters that write those instance attributes; its role is purely to encapsulate and carry the broker app configuration from the Controller into the login and broker-adapter layers.
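Because the class is a plain value object it can be exercised directly. A short sketch populating it the way handleBrokerLogin does, with obviously fake credentials (the class body is copied from the listing above):

```python
class BrokerAppDetails:
    # Copied from the walkthrough above
    def __init__(self, broker):
        self.broker = broker
        self.appKey = None
        self.appSecret = None

    def setClientID(self, clientID):
        self.clientID = clientID

    def setAppKey(self, appKey):
        self.appKey = appKey

    def setAppSecret(self, appSecret):
        self.appSecret = appSecret

# Populate it as Controller.handleBrokerLogin does (fake values here)
details = BrokerAppDetails('zerodha')
details.setClientID('AB1234')
details.setAppKey('dummy-app-key')
details.setAppSecret('dummy-app-secret')
print(details.broker, details.clientID)  # zerodha AB1234
```

One quirk worth noting: clientID is only created by setClientID rather than in __init__, so reading it before the setter runs would raise AttributeError.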



