VHDL/GHDL Binary 32-bit Write Overflow When High Bit Set


I have a VHDL testbench where I would like to write 32-bit binary words to a file for testing. Below is a minimal, complete, verifiable example.



When executed with GHDL (commands below), an overflow is generated at the indicated line. If that line is commented out, execution completes successfully and writes the file. The overflow occurs any time the high bit is set.


library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use std.textio.all;
use std.env.stop;

entity u32_file_write is
end entity;

architecture rtl of u32_file_write is
    type intFileType is file of natural;
    file fh : intFileType;
begin
    run: process
        variable no_high_bit   : std_logic_vector(31 downto 0) := x"7FFFFFFF";
        variable with_high_bit : std_logic_vector(31 downto 0) := x"FFFFFFFF";
    begin
        file_open(fh, "out.bin", write_mode);
        write(fh, to_integer(unsigned(no_high_bit)));
        write(fh, to_integer(unsigned(with_high_bit))); -- Overflow here.
        file_close(fh);
        stop;
    end process;
end architecture;



I save the code as u32_file_write.vhd and run the following GHDL commands:




ghdl -a -fexplicit --std=08 --ieee=synopsys u32_file_write.vhd
ghdl -e -fexplicit --std=08 --ieee=synopsys u32_file_write
ghdl -r -fexplicit --std=08 --ieee=synopsys u32_file_write



With the line commented out, the correct result is written to the file:


% od -tx4 out.bin
0000000 7fffffff



If the line is uncommented, an overflow is generated:


ghdl:error: overflow detected
from: ieee.numeric_std.to_integer at numeric_std-body.vhdl:3040
ghdl:error: simulation failed



As noted above, the write works with any value that fits in the lower 31 bits. The write overflows with any value where the 32nd bit (the high bit) is set.



The underlying problem is that integer'high is 2**31 - 1.



The accepted answer to a related question suggests using an intermediate text format and a text-processing language. Another answer there shows a solution for reading using 'pos, but that doesn't help me write.



Is there a simple rework/workaround that will allow me to write all 32 bits of data to a binary file?





Why don't you simply write your data as plain std_logic_vector?
– Renaud Pacalet
Aug 12 at 9:12







Removing the to_integer() conversion generates the following error during analysis: u32_file_write.vhd:21:14:error: cannot resolve overloading for subprogram call. I can write() a std_logic_vector to a text buffer and then writeline() the buffer to a file, but the file then only contains 0x30 and 0x31, the ASCII codes for '0' and '1'.
– esorton
– esorton
Aug 12 at 12:02
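
For reference, a minimal sketch of the textio approach described in that comment (not from the original posts; it assumes VHDL-2008 and the same library/use clauses as the question, so that write() accepts std_ulogic_vector). Each std_logic element ends up as an ASCII character in the file:

-- Sketch of the textio attempt: the vector is written as the characters
-- '0'/'1' (bytes 0x30/0x31), i.e. text, not packed binary.
textio_sketch: process
    variable v    : std_logic_vector(31 downto 0) := x"7FFFFFFF";
    variable l    : line;
    file ftxt     : text open write_mode is "out.txt";
begin
    write(l, v);         -- l holds the 32 characters "0111...1"
    writeline(ftxt, l);  -- the file gets one 0x30, thirty-one 0x31, then a newline
    wait;
end process;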





The reason for supporting writing a binary format would be to allow the data to be examined by a separate tool that programmatically understands unsigned 32-bit (or larger) integer values.
– user1155120
Aug 13 at 4:19





@esorton: sorry, I should have given more information. You can use the VHDL2008 standard procedures to write std_ulogic_vector: ieee.std_logic_1164.hwrite, ieee.std_logic_1164.owrite, ieee.std_logic_1164.write. Note that they are also aliased in ieee.std_logic_textio with the same names. If you prefer writing unsigned, use the ieee.numeric_std equivalents. See my answer for an example.
– Renaud Pacalet
Aug 13 at 8:04






2 Answers



Change your file type to character. Convert the value 8 bits at a time to characters and write all four characters to the file.



With 8-bit writes you're responsible for getting the endian order correct.



You can do that with a write procedure tailored to write 32 bit unsigned values to the character file:


library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use std.textio.all;
use std.env.stop;

entity u32_file_write is
end entity;

architecture foo of u32_file_write is
    -- type intFileType is file of natural;
    type intFileType is file of character; -- CHANGED type mark
    file fh : intFileType;
    procedure write (file cf: intFileType; val: unsigned (31 downto 0)) is
    begin
        write (cf, character'val(to_integer(val( 7 downto  0))));
        write (cf, character'val(to_integer(val(15 downto  8))));
        write (cf, character'val(to_integer(val(23 downto 16))));
        write (cf, character'val(to_integer(val(31 downto 24))));
    end procedure;
begin
    run: process
        variable no_high_bit   : std_logic_vector(31 downto 0) := x"7FFFFFFF";
        variable with_high_bit : std_logic_vector(31 downto 0) := x"FFFFFFFF";

    begin
        file_open(fh, "out.bin", write_mode);
        -- write(fh, to_integer(unsigned(no_high_bit)));
        -- write(fh, to_integer(unsigned(with_high_bit))); -- Overflow here.
        write (fh, unsigned(no_high_bit));
        write (fh, unsigned(with_high_bit));
        file_close(fh);
        stop;
    end process;
end architecture;


ghdl -a -fexplicit --std=08 --ieee=synopsys u32_file_write.vhdl
ghdl -e -fexplicit --std=08 --ieee=synopsys u32_file_write
ghdl -r -fexplicit --std=08 --ieee=synopsys u32_file_write



Note that the only command line argument required here besides the command (-a, -e, -r) is --std=08, because of stop. There are no Synopsys package dependencies, nor is -fexplicit required (nothing here depends on it either).




od -tx4 out.bin
0000000 7fffffff ffffffff
0000010



A host's file system contains files that consist of arrays of 8-bit characters (bytes). It's convention (a format) that superimposes the idea of something bigger.



VHDL superimposes a type on file transactions. Unfortunately there's no way to declare a natural range value greater than 2**31 - 1, and if your integers were bigger they wouldn't be portable.



The above method treats the file as a character file, allowing the size of the contents' elements to be superimposed by convention (here by the host system). If you want to read a 32-bit unsigned value back, you'd read four characters and assemble a 32-bit value in the correct endian order.
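
For illustration, a matching read procedure might look like the sketch below. It is not part of the original answer; it assumes the same low-byte-first order used by the write procedure above and a file opened in read_mode:

procedure read (file cf: intFileType; val: out unsigned (31 downto 0)) is
    variable c : character;  -- one byte from the file
begin
    -- assemble the value low byte first, mirroring the write order above
    read (cf, c); val( 7 downto  0) := to_unsigned(character'pos(c), 8);
    read (cf, c); val(15 downto  8) := to_unsigned(character'pos(c), 8);
    read (cf, c); val(23 downto 16) := to_unsigned(character'pos(c), 8);
    read (cf, c); val(31 downto 24) := to_unsigned(character'pos(c), 8);
end procedure;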



The unsigned value here is 32 bits wide. Note that ascending or descending range order isn't required to match in a subprogram call because of implicit subtype conversion (formal and actual elements are associated in left-to-right order here).



The author of the original post encountered a problem reported in a comment:



The write() above generates an error during analysis:

u32_file_write.vhd:23:14:error: cannot resolve overloading for
subprogram call.



The error repeats four times, once for each write().
I've not found a way to get GHDL to write raw bytes other than an
integer.



Line 23 in the comment appears to correspond to the 18th line above, where character 14 is the parameter list of the first write procedure call, write [file IntFileType, character]. This suggests the type mark in the declaration of type IntFileType hasn't been changed to character: the signatures of the procedure calls would then not match that of the implicitly declared write for file type IntFileType. Note also that the line numbers don't match.




The code has been provided complete in this answer to allow copying it in its entirety, which was done here, along with naming the design file with a .vhdl suffix and using the command lines above.



The version of ghdl used is a recent build (GHDL 0.36-dev (v0.35-259-g4b16ef4)) built with an AdaCore 2015 GPL gnat (GPL 2015 (20150428-49)) and tested with both the llvm backend code generator (clang+llvm-3.8.0) and mcode code generator both on MacOS (10.11.6, gnat-gpl-2015-x86_64-darwin-bin and clang+llvm-3.8.0-x86_64-apple-darwin, using Xcode 8.2.1).



(The endian order of the character writes from val in the new write procedure has been reversed to match the byte order of the OP's od -tx4 out.bin result.)







The write() above generates an error during analysis: u32_file_write.vhd:23:14:error: cannot resolve overloading for subprogram call. The error repeats four times, once for each write(). I've not found a way to get GHDL to write raw bytes other than an integer.
– esorton
Aug 12 at 11:49







Thank you. You were correct; I neglected to change the file type during my test.
– esorton
Aug 13 at 2:25



If you don't absolutely need the output to be binary, you could use the VHDL2008 standard procedures to write your vectors directly, without conversion:


$ ghdl --version
GHDL 0.36-dev (v0.35-259-g4b16ef4c-dirty) [Dunoon edition]
Compiled with GNAT Version: GPL 2017 (20170515-63)
llvm code generator
Written by Tristan Gingold.

Copyright (C) 2003 - 2015 Tristan Gingold.
GHDL is free software, covered by the GNU General Public License. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ cat u32_file_write.vhd
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use std.textio.all;
use std.env.stop;

entity u32_file_write is
end entity;

architecture rtl of u32_file_write is
begin
    run: process
        variable no_high_bit   : std_logic_vector(31 downto 0) := x"7FFFFFFF";
        variable with_high_bit : std_logic_vector(31 downto 0) := x"FFFFFFFF";
        variable l             : line;
        file fh                : text open write_mode is "out.txt";
    begin
        hwrite(l, no_high_bit);
        writeline(fh, l);
        hwrite(l, with_high_bit);
        writeline(fh, l);
        file_close(fh);
        stop;
    end process;
end architecture;
$ ghdl -a -fexplicit --std=08 u32_file_write.vhd
$ ghdl -e -fexplicit --std=08 u32_file_write
$ ghdl -r -fexplicit --std=08 u32_file_write
simulation stopped @0ms
$ cat out.txt
7FFFFFFF
FFFFFFFF
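
If a raw binary file is eventually needed, the hexadecimal text can be post-processed with a standard tool, for example (assuming xxd is available):

$ xxd -r -p out.txt out.bin

Note that the bytes then land in the order the hex digits appear in the text (most significant byte first), which differs from the low-byte-first order produced by the character-file approach in the other answer.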





