I have a solution, but it involves building a table with as many rows as the largest number of total payments. It's a task that only has to be done once, but I can't think of any built-in feature that allows you to un-aggregate data once it's been aggregated.
The following script performs a cartesian product with the row_seed table to generate the number of rows required. The exact number of rows is determined by the "total_payments" column.
The table agg_data stores your aggregated data (account number and total payments), as supplied in your original post.
The final select statement generates the output as required. The physical order of the rows is determined only by the physical order of the rows in agg_data, so you'll probably want to order by account number and row id to make sense of the output.
-- Create and populate agg_data with the values from the original post
create table agg_data ( account_number char(9), total_payments int)
insert into agg_data values ('237655643', 3)
insert into agg_data values ('217384950', 2)
insert into agg_data values ('243910349', 4)
-- Create and populate row_seed
create table row_seed ( row_id numeric(10,0))
-- Populate row_seed with 1000 entries. In real life I expect this would need to be a much larger number.
declare @count int,
        @max_rows_required int
select @count = 1,
       @max_rows_required = 1000
while @count <= @max_rows_required
begin
    insert row_seed (row_id) select @count
    select @count = @count + 1
end
-- This query performs a cartesian product between agg_data and row_seed,
-- generating (total_payments) rows for each account number.
-- I've included the optional order by.
select a.account_number,
       r.row_id
from   agg_data a,
       row_seed r
where  r.row_id <= a.total_payments
order by a.account_number,
         r.row_id
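If it helps to see the logic outside of SQL, the whole approach can be sketched in a few lines of Python, using the sample data from the post. The nested loop plays the role of the cartesian product, and the `row_id <= total_payments` filter is the same predicate as in the query above. (This is just an illustration of the technique, not part of the SQL solution.)

```python
# Sample aggregate data from the original post: (account_number, total_payments)
agg_data = [("237655643", 3), ("217384950", 2), ("243910349", 4)]

# Stand-in for the row_seed table: row_ids 1..1000
max_rows_required = 1000
row_seed = range(1, max_rows_required + 1)

# Cross every account with every row_id, keep only row_id <= total_payments,
# then sort by account number and row id, as the final order by does.
detail = sorted(
    (account, row_id)
    for account, total_payments in agg_data
    for row_id in row_seed
    if row_id <= total_payments
)

for account, row_id in detail:
    print(account, row_id)
```

Each account number comes out once per payment, so the three sample accounts produce 2 + 3 + 4 = 9 detail rows.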