In Domino 4.x, users can launch on-demand Spark clusters alongside their interactive workspaces. The Spark worker pods (which host the Spark executors) carry additional overhead beyond that of regular Domino run pods, and admins need to account for this when sizing hardware tiers intended for use with on-demand Spark.
Memory overhead for Spark workers:
By default, each worker's memory overhead is the larger of 384 MiB or 10% of the worker's memory. Both values are configurable by admins using Central Configuration (Central Config) settings:
com.cerebro.domino.workbench.onDemandSpark.worker.memoryOverheadFactor (default: 0.1)
com.cerebro.domino.workbench.onDemandSpark.worker.memoryOverheadMinMiB (default: 384)
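As a rough sketch of how these two settings combine, the worker pod's memory request can be thought of as the tier memory plus max(memoryOverheadMinMiB, memory × memoryOverheadFactor). The function name and the exact rounding below are illustrative assumptions, not Domino's implementation:

```python
import math

def spark_worker_memory_request_mib(tier_memory_mib: int,
                                    overhead_factor: float = 0.1,
                                    overhead_min_mib: int = 384) -> int:
    # Overhead is the larger of the configured floor (384 MiB by default)
    # or the configured fraction (10% by default) of the tier memory.
    overhead = max(overhead_min_mib, math.ceil(tier_memory_mib * overhead_factor))
    return tier_memory_mib + overhead

# A 4 GiB tier: 10% (410 MiB) exceeds the 384 MiB floor.
print(spark_worker_memory_request_mib(4096))  # 4506
# A 2 GiB tier: 10% would be only 205 MiB, so the 384 MiB floor applies.
print(spark_worker_memory_request_mib(2048))  # 2432
```

In practice this means small tiers pay proportionally more overhead, since the 384 MiB floor dominates below roughly 3.75 GiB of worker memory.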
CPU overhead for Spark workers:
Spark workers incur no CPU overhead. Keep in mind, however, that the requested CPU is rounded up to the next whole core.
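The rounding behavior amounts to a simple ceiling on the tier's CPU request; the helper below is a hypothetical illustration of that arithmetic, not Domino code:

```python
import math

def rounded_cpu_request(tier_cores: float) -> int:
    # Fractional CPU requests are rounded up to the next whole core,
    # so a 2.5-core tier yields a 3-core Spark worker request.
    return math.ceil(tier_cores)

print(rounded_cpu_request(2.5))  # 3
print(rounded_cpu_request(4.0))  # 4
```

So when planning cluster capacity, treat a fractional-CPU tier as consuming the next integer number of cores per Spark worker.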