Random Weight Generator — Probability Weights & Traffic Splits
Random weights are sets of numbers that sum to a target total, typically 1.0 or 100, and represent probability distributions over a set of items or categories. They are used in A/B and multivariate testing to define traffic splits across variants, in machine learning to initialize neural network parameters, in game design to set loot drop rates, and in simulation to assign realistic probabilities to outcomes. This tool generates random weights for up to 20 items using three distribution modes based on the Dirichlet distribution: uniform (all weight allocations equally likely), balanced (weights tend toward even splits), and concentrated (some items receive much more weight than others). You can name each item, choose whether weights sum to 1.0 or 100, and set decimal precision. The output shows a visual bar chart of the distribution alongside a table. Results can be copied as JSON, CSV, or a percentage table.
Frequently Asked Questions
What are random weights used for?
Random weights are used in A/B and multivariate testing to assign traffic splits across test variants, in machine learning for initializing neural network parameters, in statistics for defining probability distributions over categories, in game development for loot drop rates, and in simulation to assign probabilities to different outcomes. Weights that sum to 1.0 form a valid probability distribution over a discrete set of events.
What is the Dirichlet distribution and how does it generate weights?
The Dirichlet distribution is the standard probability distribution over probability vectors. It takes a concentration parameter alpha. At alpha = 1, all valid distributions are equally likely. At alpha above 1, weights tend to be more evenly distributed. At alpha below 1, weight concentrates heavily on one or two items while the others receive near-zero values. The standard sampling method draws a Gamma(alpha, 1) value for each item and divides each draw by their sum. This tool implements three presets: uniform (alpha = 1), balanced (alpha = 5), and concentrated (alpha = 0.3), using exponential random variables (which are exactly Gamma(1, 1)) to approximate the Gamma draws.
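A minimal sketch of the Gamma-then-normalize approach in JavaScript. This is illustrative, not the tool's actual implementation: it uses the exact Marsaglia-Tsang Gamma sampler rather than the exponential approximation, and Math.random() stands in for crypto.getRandomValues() for brevity; all function names are hypothetical.

```javascript
// Standard normal via the Box-Muller transform.
function randNormal() {
  const u1 = 1 - Math.random(); // keep u1 > 0 so Math.log is finite
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// Gamma(alpha, 1) sample via the Marsaglia-Tsang rejection method.
function randGamma(alpha) {
  if (alpha < 1) {
    // Boost trick: Gamma(a) = Gamma(a + 1) * U^(1/a) for 0 < a < 1.
    return randGamma(alpha + 1) * Math.pow(Math.random(), 1 / alpha);
  }
  const d = alpha - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    const x = randNormal();
    const v = Math.pow(1 + c * x, 3);
    if (v <= 0) continue;
    const u = Math.random();
    if (Math.log(u) < 0.5 * x * x + d - d * v + d * Math.log(v)) {
      return d * v;
    }
  }
}

// Symmetric Dirichlet(alpha): n Gamma draws normalized to sum to 1.
function dirichlet(n, alpha) {
  const g = Array.from({ length: n }, () => randGamma(alpha));
  const total = g.reduce((s, x) => s + x, 0);
  return g.map((x) => x / total);
}
```

Calling dirichlet(5, 0.3) tends to produce one or two dominant weights; dirichlet(5, 5) clusters around 0.2 each.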
How do I split traffic evenly between test variants?
For a perfectly even split across N variants, each variant gets a weight of 1/N (or 100/N as a percentage). For 4 variants that is 25% each. This tool generates random splits when you want a realistic but non-uniform distribution, which is useful for testing analytics dashboards and simulating real-world traffic patterns where not all variants receive equal traffic.
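The even-split case is simple enough to state as code (evenSplit is a hypothetical name, not part of this tool):

```javascript
// Perfectly even split: each of n variants gets total / n.
function evenSplit(n, total = 100) {
  return Array.from({ length: n }, () => total / n);
}
```

evenSplit(4) gives [25, 25, 25, 25]; pass total = 1 for weights that sum to 1.0 instead.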
What is the difference between weights and probabilities?
Probabilities are weights that sum to exactly 1.0. Raw weights can sum to any value and are normalized to produce probabilities. For example, weights [3, 1, 1] mean item A is three times as likely as B or C. Dividing each by the sum of 5 gives probabilities [0.6, 0.2, 0.2]. This tool outputs pre-normalized values that already sum to 1.0 or 100%, so they are ready to use directly as probabilities.
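The normalization step described above is a one-line division (function name is illustrative):

```javascript
// Normalize raw weights into probabilities by dividing each by the sum.
function normalize(weights) {
  const total = weights.reduce((s, w) => s + w, 0);
  return weights.map((w) => w / total);
}
```

normalize([3, 1, 1]) returns [0.6, 0.2, 0.2], matching the worked example.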
How are weights used in neural network initialization?
Neural network connection weights are initialized with small random values before training so each neuron learns to detect different features. Starting all weights at zero or the same value means all neurons compute identical outputs and receive identical gradient updates, so they stay identical forever. Random initialization breaks this symmetry. Common strategies are Xavier (Glorot) initialization, which scales random values by the combined fan-in and fan-out of the layer, and He initialization, which scales by the fan-in alone and is preferred for ReLU activations; both have uniform and normal variants.
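A sketch of the uniform variants of both schemes, assuming standard limits of sqrt(6 / fan_in) for He and sqrt(6 / (fan_in + fan_out)) for Xavier (function names are illustrative):

```javascript
// Uniform value in [-limit, limit).
const uniform = (limit) => (Math.random() * 2 - 1) * limit;

// He-uniform initialization: U(+/- sqrt(6 / fanIn)), suited to ReLU layers.
function heInit(fanIn, fanOut) {
  const limit = Math.sqrt(6 / fanIn);
  return Array.from({ length: fanIn * fanOut }, () => uniform(limit));
}

// Xavier (Glorot) uniform initialization: U(+/- sqrt(6 / (fanIn + fanOut))).
function xavierInit(fanIn, fanOut) {
  const limit = Math.sqrt(6 / (fanIn + fanOut));
  return Array.from({ length: fanIn * fanOut }, () => uniform(limit));
}
```

Each call returns a flat array of fanIn * fanOut weights; no two calls produce the same values, which is exactly the symmetry-breaking property described above.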
What is the difference between gross weight, net weight, and tare weight?
Gross weight is the total weight including the product and all its packaging. Net weight is the weight of the product alone, without any packaging. Tare weight is the weight of the packaging itself (gross weight minus net weight). On food packaging, net weight is what must be displayed by law. In shipping, gross weight determines freight charges. Tare weight matters for bulk commodity trading where packaging is excluded from the price.
How do I convert between kilograms, pounds, and ounces?
1 kilogram = 2.20462 pounds. 1 pound = 0.453592 kilograms. 1 pound = 16 ounces. 1 kilogram = 35.274 ounces. 1 ounce = 28.3495 grams. To convert kg to lbs, multiply by 2.20462. To convert lbs to kg, multiply by 0.453592. The metric system uses base-10 increments (grams, kilograms, tonnes) while the imperial system uses historical units with arbitrary conversion factors.
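Since every unit has a fixed gram equivalent, all of these conversions reduce to one lookup table and a single formula (a sketch; names are illustrative):

```javascript
// Grams per unit, from the conversion factors above.
const GRAMS_PER = { kg: 1000, lb: 453.592, oz: 28.3495 };

// Convert a weight between kg, lb, and oz by going through grams.
function convert(value, from, to) {
  return (value * GRAMS_PER[from]) / GRAMS_PER[to];
}
```

convert(1, "kg", "lb") returns approximately 2.20462, and convert(1, "lb", "oz") returns 16.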
What is a troy ounce and how is it different from a regular ounce?
A troy ounce (oz t) is the standard unit for weighing precious metals (gold, silver, platinum). One troy ounce equals 31.1035 grams. A regular avoirdupois ounce (used for everyday weights) equals 28.3495 grams — about 9% lighter. Gold and silver prices quoted per ounce always mean troy ounces. A troy pound is only 12 troy ounces (373.24g), lighter than a standard pound (453.59g).
How It Works
Random weights are generated using crypto.getRandomValues() and scaled to the requested range. The normalize option divides each value by the total sum so all weights add up to exactly 1.0 (or 100%). The Box-Muller transform produces normally distributed weights when the Gaussian option is selected: z = sqrt(-2 * ln(u1)) * cos(2 * pi * u2), scaled by standard deviation and shifted by mean.
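The Gaussian path can be sketched in a few lines. This is an illustrative reconstruction, not the tool's source: it uses crypto.getRandomValues() where the global is available (browsers, recent Node) and falls back to Math.random() so it runs anywhere.

```javascript
// Uniform [0, 1): crypto.getRandomValues where available, else Math.random.
const rand =
  typeof crypto !== "undefined" && crypto.getRandomValues
    ? () => {
        const buf = new Uint32Array(1);
        crypto.getRandomValues(buf);
        return buf[0] / 2 ** 32;
      }
    : Math.random;

// Box-Muller transform: two independent uniforms -> one standard normal,
// scaled by the standard deviation and shifted by the mean.
function gaussian(mean = 0, stdDev = 1) {
  const u1 = 1 - rand(); // keep u1 > 0 so Math.log(u1) is finite
  const u2 = rand();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return mean + z * stdDev;
}
```

The 1 - rand() guard matters: a raw uniform can be exactly 0, and ln(0) would produce an infinite weight.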
Weighted Average Formula
Weighted average = sum(value_i * weight_i) / sum(weight_i). Example: three exam scores 70, 85, 90 with weights 20%, 30%, 50%: (70*0.2 + 85*0.3 + 90*0.5) / 1.0 = 14 + 25.5 + 45 = 84.5. If weights are not normalized, divide by their total. Weighted averages are used when data points have different reliability, sample size, or importance.
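The formula translates directly to code, including the divide-by-total step that handles non-normalized weights (function name is illustrative):

```javascript
// Weighted average: sum(value_i * weight_i) / sum(weight_i).
// Dividing by the weight total means the weights need not be normalized.
function weightedAverage(values, weights) {
  const weightSum = weights.reduce((s, w) => s + w, 0);
  const total = values.reduce((s, v, i) => s + v * weights[i], 0);
  return total / weightSum;
}
```

weightedAverage([70, 85, 90], [20, 30, 50]) returns 84.5, matching the exam-score example — the percentage weights work unchanged because the function divides by their sum (100).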
Weight vs Probability
Probabilities must sum to exactly 1.0. Weights can sum to any value and are converted to probabilities by dividing each by the total. Weights [3, 1, 1] produce probabilities [0.60, 0.20, 0.20]. This tool can output normalized weights (summing to 1.0) or raw weights — use normalized when you need to feed values directly into a probability calculation or machine learning algorithm.
When to Use This
Use to generate test probability distributions for a weighted random selection algorithm, to create random importance scores for testing a ranking system, to populate training data for a machine learning experiment, to generate random item weights for a statistics exercise, or to test how a scoring system behaves across a range of weight distributions.
More Free Tools
Robots.txt Viewer
Enter any domain to view its robots.txt and see which pages are blocked from Google.
Video Timestamp Generator
Generate YouTube chapter timestamps, validate against YouTube rules, and get deep-link URLs per chapter.
PDF DPI Checker
Upload a PDF to check the DPI of embedded images and see if the file is ready for professional printing.
I or L Checker
Detect confusable characters like uppercase I, lowercase l, digit 1, and O vs 0 in passwords or codes.