Raster-to-Vector: Revisiting Floorplan Transformation
Home Page: http://art-programmer.github.io/floorplan-transformation.html
License: MIT License
First of all, amazing work. I am trying to test on my own images.
I installed Torch and most of the Torch packages successfully, but the links for lua--opencv and lunatic-python are out of date. I tried to compile the source code but failed. Which version of Lua are you using? How did you install lua--opencv, via luarocks or from source? And which source code did you use specifically?
Here is some code I found:
https://github.com/satoren/luaOpenCV
https://github.com/VisionLabs/torch-opencv
https://github.com/bastibe/lunatic-python
https://github.com/OddSource/lunatic-python
Could someone provide instructions (Lua and package installation) for running the annotator on Ubuntu?
Thank you in advance!
We need to show the 3D model for at least 2-3 floorplans but are not able to create the floorplan.txt file. Please suggest the proper steps to do that in Python.
Is it possible to have a Dockerfile for this repo? It would make figuring out the dependencies easier and allow people to try the code with zero configuration. It would be highly appreciated.
Thanks!
I ran into so many problems with the library versions of the dependencies that working through them has become a bit painful. Could you please provide a list of the dependencies with versions, e.g., OpenCV 2.4 or 3.x, the head commit for lua---opencv, etc.?
The PyTorch pre-trained model gives a lot of prediction errors compared to the Lua implementation. Could you please provide an evaluated checkpoint for the PyTorch implementation, and also a single-image prediction function?
Hello! When I run python train.py --restore=0 I get:
keyname=floorplan task=train started
Traceback (most recent call last):
File "train.py", line 171, in
main(args)
File "train.py", line 25, in main
dataset = FloorplanDataset(options, split='train', random=True)
File "/home/user/FloorplanTransformation-master/pytorch/datasets/floorplan_dataset.py", line 236, in __init__
with open(self.dataFolder + split + '.txt') as f:
IOError: [Errno 2] No such file or directory: '../data/train.txt'
Please tell me, where can I find this file?
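For context, train.py expects a split file at ../data/train.txt listing the training samples, which is not shipped with the repo. If you need to generate one yourself, a minimal sketch follows; the one-path-per-line format is an assumption here and should be checked against datasets/floorplan_dataset.py:

```python
import os
import tempfile

def write_split(data_folder, split, names, ext='.jpg'):
    """Write <data_folder>/<split>.txt with one sample path per line.

    NOTE: the exact per-line format FloorplanDataset expects is an
    assumption -- verify against datasets/floorplan_dataset.py.
    """
    os.makedirs(data_folder, exist_ok=True)
    path = os.path.join(data_folder, split + '.txt')
    with open(path, 'w') as f:
        for name in names:
            f.write(os.path.join(data_folder, name + ext) + '\n')
    return path

# Example: split ten sample names 80/20 into train/val lists.
names = ['sample_%03d' % i for i in range(10)]
folder = os.path.join(tempfile.gettempdir(), 'floorplan_data')
train_path = write_split(folder, 'train', names[:8])
val_path = write_split(folder, 'val', names[8:])
```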
Hello @art-programmer, thank you for sharing this amazing code.
I want to ask whether it's possible to disable the Gurobi module, because I don't have a Gurobi license.
Would that increase the computation time?
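For anyone wondering what Gurobi is used for here: it solves the final integer program that selects a globally consistent subset of candidate walls, doors, and icons. Without a license you would need another IP solver, which will generally be slower. As a toy illustration only (this is not the repo's actual formulation, and all names below are made up), a binary candidate-selection problem can even be brute-forced for tiny instances:

```python
from itertools import product

def select_candidates(scores, conflicts):
    """Pick the subset of candidates maximizing total score, subject to
    'no two conflicting candidates both chosen'. Brute force -- toy only."""
    n = len(scores)
    best_subset, best_score = (), float('-inf')
    for choice in product([0, 1], repeat=n):
        # Skip assignments that pick both members of a conflicting pair.
        if any(choice[i] and choice[j] for i, j in conflicts):
            continue
        score = sum(s * c for s, c in zip(scores, choice))
        if score > best_score:
            best_score, best_subset = score, choice
    return best_subset, best_score

# Three candidate walls; walls 0 and 1 overlap, so at most one may be kept.
subset, score = select_candidates([2.0, 3.0, 1.5], conflicts=[(0, 1)])
# → picks walls 1 and 2 with total score 4.5
```

A real replacement would use an open-source IP solver rather than enumeration, since the repo's problems have far too many variables for brute force.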
I am converting a 2D floorplan to a 3D model using https://github.com/art-programmer/FloorplanTransformation. In the rendering folder, after running viewer.py we get the 3D model, but the floor and ceiling are not visible. The function for the floor and ceiling in floorplan.py is below, and the output is also attached:
def generateFloor(self, data):
    """Generate floor and ceiling geometry from exterior wall loops."""
    floorGroup = EggGroup('floor')
    data.addChild(floorGroup)
    vp = EggVertexPool('floor_vertex')
    floorGroup.addChild(vp)

    # Collect exterior walls (label 6 marks the outside region).
    exteriorWalls = []
    for wall in self.walls:
        if wall[4] == 6 or wall[5] == 6:
            exteriorWalls.append(copy.deepcopy(wall))

    # Find doors lying on exterior walls (candidate openings).
    exteriorOpenings = []
    for wall in exteriorWalls:
        lineDim = calcLineDim((wall[:2], wall[2:4]))
        for doorIndex, door in enumerate(self.doors):
            if calcLineDim((door[:2], door[2:4])) != lineDim:
                continue
            if door[lineDim] >= wall[lineDim] and door[2 + lineDim] <= wall[2 + lineDim] and abs(door[1 - lineDim] - wall[1 - lineDim]) <= self.wallWidth:
                exteriorOpenings.append(doorIndex)

    # The exterior opening closest to the entrance icon is the main door.
    minDistance = 10000
    mainDoorIndex = -1
    for icon in self.icons:
        if icon[4] == 'entrance':
            for doorIndex in exteriorOpenings:
                door = self.doors[doorIndex]
                distance = pow(pow((door[0] + door[2]) / 2 - (icon[0] + icon[2]) / 2, 2) + pow((door[1] + door[3]) / 2 - (icon[1] + icon[3]) / 2, 2), 0.5)
                if distance < minDistance:
                    minDistance = distance
                    mainDoorIndex = doorIndex
            break

    # Place the start camera just outside the main door, looking in.
    self.startCameraPos = [0.5, -0.5, self.wallHeight * 0.5]
    self.startTarget = [0.5, 0.5, self.wallHeight * 0.5]
    if mainDoorIndex >= 0:
        mainDoor = self.doors[mainDoorIndex]
        lineDim = calcLineDim((mainDoor[:2], mainDoor[2:4]))
        fixedValue = (mainDoor[1 - lineDim] + mainDoor[3 - lineDim]) / 2
        imageSize = [self.width / self.maxDim, self.height / self.maxDim]
        side = int(fixedValue < imageSize[1 - lineDim] * 0.5) * 2 - 1
        self.startCameraPos[lineDim] = (mainDoor[lineDim] + mainDoor[2 + lineDim]) / 2
        self.startTarget[lineDim] = (mainDoor[lineDim] + mainDoor[2 + lineDim]) / 2
        self.startCameraPos[1 - lineDim] = fixedValue - 0.5 * side
        self.startTarget[1 - lineDim] = fixedValue + 0.5 * side
        self.startCameraPos[0] = 1 - self.startCameraPos[0]
        self.startTarget[0] = 1 - self.startTarget[0]

    # Exterior openings other than the main door become windows.
    newDoors = []
    self.windows = []
    for doorIndex, door in enumerate(self.doors):
        if doorIndex == mainDoorIndex or doorIndex not in exteriorOpenings:
            newDoors.append(door)
        else:
            self.windows.append(door)
    self.doors = newDoors

    # Chain exterior walls into closed loops by matching nearby endpoints.
    exteriorWallLoops = []
    visitedMask = {}
    gap = 5.0 / self.maxDim
    for wallIndex, wall in enumerate(exteriorWalls):
        if wallIndex in visitedMask:
            continue
        visitedMask[wallIndex] = True
        exteriorWallLoop = [wall]
        for loopWall in exteriorWallLoop:
            for neighborWallIndex, neighborWall in enumerate(exteriorWalls):
                if neighborWallIndex in visitedMask:
                    continue
                if calcDistance(neighborWall[:2], loopWall[2:4]) < gap:
                    exteriorWallLoop.append(neighborWall)
                    visitedMask[neighborWallIndex] = True
                    break
                elif calcDistance(neighborWall[2:4], loopWall[2:4]) < gap:
                    # Flip the wall so its start point meets the loop's end.
                    neighborWall[0], neighborWall[2] = neighborWall[2], neighborWall[0]
                    neighborWall[1], neighborWall[3] = neighborWall[3], neighborWall[1]
                    exteriorWallLoop.append(neighborWall)
                    visitedMask[neighborWallIndex] = True
                    break
        exteriorWallLoops.append(exteriorWallLoop)

    # Build one floor polygon per loop; UVs in image pixel coordinates.
    for exteriorWallLoop in exteriorWallLoops:
        poly = EggPolygon()
        floorGroup.addChild(poly)
        poly.setTexture(self.floorMat.getEggTexture())
        poly.setMaterial(self.floorMat.getEggMaterial())
        for wallIndex, wall in enumerate(exteriorWallLoop):
            if wallIndex == 0:
                v = EggVertex()
                v.setPos(Point3D(1 - wall[0], wall[1], 0))
                v.setUv(Point2D(wall[0] * self.maxDim / self.width, 1 - wall[1] * self.maxDim / self.height))
                poly.addVertex(vp.addVertex(v))
            else:
                v = EggVertex()
                v.setPos(Point3D(1 - (wall[0] + exteriorWallLoop[wallIndex - 1][2]) / 2, (wall[1] + exteriorWallLoop[wallIndex - 1][3]) / 2, 0))
                v.setUv(Point2D((wall[0] + exteriorWallLoop[wallIndex - 1][2]) / 2 * self.maxDim / self.width, 1 - (wall[1] + exteriorWallLoop[wallIndex - 1][3]) / 2 * self.maxDim / self.height))
                poly.addVertex(vp.addVertex(v))
            if wallIndex == len(exteriorWallLoop) - 1:
                v = EggVertex()
                v.setPos(Point3D(1 - wall[2], wall[3], 0))
                v.setUv(Point2D(wall[2] * self.maxDim / self.width, 1 - wall[3] * self.maxDim / self.height))
                poly.addVertex(vp.addVertex(v))

    # Ceiling: the same loops lifted to wallHeight, with normalized UVs.
    ceilingGroup = EggGroup('ceiling')
    data.addChild(ceilingGroup)
    vp = EggVertexPool('ceiling_vertex')
    ceilingGroup.addChild(vp)
    for exteriorWallLoop in exteriorWallLoops:
        poly = EggPolygon()
        ceilingGroup.addChild(poly)
        poly.setTexture(self.ceilingMat.getEggTexture())
        poly.setMaterial(self.ceilingMat.getEggMaterial())
        for wallIndex, wall in enumerate(exteriorWallLoop):
            if wallIndex == 0:
                v = EggVertex()
                v.setPos(Point3D(1 - wall[0], wall[1], self.wallHeight))
                v.setUv(Point2D(wall[0], 1 - wall[1]))
                poly.addVertex(vp.addVertex(v))
            else:
                v = EggVertex()
                v.setPos(Point3D(1 - (wall[0] + exteriorWallLoop[wallIndex - 1][2]) / 2, (wall[1] + exteriorWallLoop[wallIndex - 1][3]) / 2, self.wallHeight))
                v.setUv(Point2D((wall[0] + exteriorWallLoop[wallIndex - 1][2]) / 2, 1 - (wall[1] + exteriorWallLoop[wallIndex - 1][3]) / 2))
                poly.addVertex(vp.addVertex(v))
            if wallIndex == len(exteriorWallLoop) - 1:
                v = EggVertex()
                v.setPos(Point3D(1 - wall[2], wall[3], self.wallHeight))
                v.setUv(Point2D(wall[2], 1 - wall[3]))
                poly.addVertex(vp.addVertex(v))
    return
=> Setting up data loader
=> Loading trainer
=> Training epoch # 1
/root/torch/install/bin/luajit: /root/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] /root/torch/install/share/lua/5.1/cv/imgproc/init.lua:1837: 'struct TensorPlusInt' has no member named 'val'
stack traceback:
/root/torch/install/share/lua/5.1/cv/imgproc/init.lua:1837: in function 'connectedComponents'
../util/lua/floorplan_utils.lua:311: in function 'findConnectedComponents'
../util/lua/floorplan_utils.lua:5819: in function 'getSegmentation'
./models/heatmap-segmentation-dataloader.lua:71: in function <./models/heatmap-segmentation-dataloader.lua:64>
[C]: in function 'xpcall'
/root/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
/root/torch/install/share/lua/5.1/threads/queue.lua:65: in function </root/torch/install/share/lua/5.1/threads/queue.lua:41>
[C]: in function 'pcall'
/root/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
[string " local Queue = require 'threads.queue'..."]:15: in main chunk
stack traceback:
[C]: in function 'error'
/root/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
./models/heatmap-segmentation-dataloader.lua:127: in function '(for generator)'
./models/heatmap-segmentation-train.lua:65: in function 'train'
main.lua:69: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x562049049570
Hi, I've made a prediction using the command python train.py --task=test.
Below is part of the output:
Optimization information Optimal -234.7307451974677
wall 17 [0, 2] [[200, 9], [199, 51]] dict_keys([]) dict_keys([55, 56, 57, 58, 59, 60])
wall 23 [1, 3] [[115, 85], [115, 107]] dict_keys([55, 56, 57, 60, 142, 143, 144, 146, 150, 151, 152, 153]) dict_keys([11, 12, 14, 70, 71, 73, 147, 148, 149])
wall 24 [0, 8] [[242, 107], [243, 143]] dict_keys([]) dict_keys([19, 20, 27, 28, 30, 31, 76, 77, 79, 80, 90, 167, 168])
.........
I would like to understand the output above; I need a file like the representation prediction shown below:
81 162 81 188 door 1 1
81 67 81 100 door 1 1
15 228.666666667 77 228.666666667 door 1 1
19 41.5 33 41.5 door 1 1
87 41 116 41 door 1 1
97 228.666666667 148 228.666666667 door 1 1
47 387 115 387 door 1 1
49 104.5 75 104.5 door 1 1
52 41 71 41 door 1 1
Can you help me, please?
Thanks,
Giovanni.
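The per-line format quoted above is `x1 y1 x2 y2 label field1 field2`. If you already have the segments and labels as Python tuples (an assumed intermediate here, not the script's actual variables), serializing them into that text format is straightforward:

```python
import os
import tempfile

def write_representation(path, items):
    """Write floorplan items in the 'x1 y1 x2 y2 label f1 f2' text format.

    items: iterable of (x1, y1, x2, y2, label, f1, f2) tuples,
    e.g. (81, 162, 81, 188, 'door', 1, 1). Coordinates may be int or float.
    """
    with open(path, 'w') as f:
        for x1, y1, x2, y2, label, f1, f2 in items:
            f.write('%s %s %s %s %s %d %d\n' % (x1, y1, x2, y2, label, f1, f2))

items = [(81, 162, 81, 188, 'door', 1, 1),
         (15, 228.666666667, 77, 228.666666667, 'door', 1, 1)]
out_path = os.path.join(tempfile.gettempdir(), 'floorplan.txt')
write_representation(out_path, items)
```

The hard part remains mapping train.py's internal candidate structures (wall indices, corner dictionaries) to these tuples; this sketch only covers the file format itself.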
Do you mind if I use the vector outputs of your system as a dataset for my own paper? Naturally I will reference yours.
Also regarding that, what is the scale of the numbers indicating the positions of the wall endpoints?
I want to use these buildings to simulate WiFi signals, so it would be useful to know the pixel scale in meters if you have that.
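On the scale question: the coordinates are in image pixels and the dataset does not carry a metric scale, so one common workaround is estimating pixels-per-meter from an object of known size, e.g. assuming a standard door width of about 0.9 m (that figure is an assumption, not something the dataset provides). A hedged sketch of that conversion:

```python
def pixels_per_meter(door_len_px, door_width_m=0.9):
    """Estimate the pixel scale from one door segment, assuming the
    door has a typical real-world width (0.9 m is an assumption)."""
    return door_len_px / door_width_m

def to_meters(segment_px, scale):
    """Convert a (x1, y1, x2, y2) pixel segment to meters."""
    return tuple(v / scale for v in segment_px)

# A door from (81, 162) to (81, 188) is 26 px long → scale ≈ 28.9 px/m.
scale = pixels_per_meter(26.0)
wall_m = to_meters((81, 162, 81, 188), scale)
```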
Hi.
There seems to be an error when running the renderer.py file. This is the output:
Traceback (most recent call last):
  File "renderer.py", line 181, in <module>
    renderer.loadModels(['/home/simon/Desktop/FloorplanTransformation/data/floorplan_representation/00/0b/0513fae730eaf65b98f9580d024d/0001', ])
  File "renderer.py", line 78, in loadModels
    floorplan.read()
  File "/home/simon/Desktop/FloorplanTransformation/rendering/floorplan.py", line 130, in read
    item[pointIndex * 2 + 0] /= self.maxDim
AttributeError: Floorplan instance has no attribute 'maxDim'
maxDim seems to be causing the problem, although it's defined on line 101 of floorplan.py:
self.maxDim = max(self.width, self.height)
Any ideas on what might be causing this? @art-programmer
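One likely cause: `self.maxDim` is only assigned inside `read()` after the image dimensions have been parsed, so if the normalization loop is reached before that assignment runs (e.g. the record carrying the size is missing or mis-parsed), the attribute simply never exists. A toy repro of that ordering bug, with hypothetical names mirroring the traceback:

```python
class Floorplan:
    """Toy repro: maxDim only exists if the size record was parsed
    before the coordinate-normalization loop runs."""

    def __init__(self, lines):
        self.lines = lines

    def read(self):
        points = []
        for line in self.lines:
            tokens = line.split()
            if tokens[0] == 'size':  # hypothetical size record
                self.width, self.height = int(tokens[1]), int(tokens[2])
                self.maxDim = max(self.width, self.height)
            else:
                # Raises AttributeError if no 'size' line came first.
                points.append([float(t) / self.maxDim for t in tokens])
        return points

# With the size line present, read() works; without it, it raises.
ok = Floorplan(['size 256 192', '128 96']).read()
```

So it is worth checking whether the representation file at that path actually contains the line floorplan.py parses to set `self.width`/`self.height` before the coordinates.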
Hi, could you explain how dump1 and dump2 are defined in the dataset? I went through your code, but it's hard to understand everything without comments.
Cheers
Hi chenliu, how can I make a prediction on a floorplan image with the PyTorch version?
...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] ../util/lua/floorplan_utils.lua:5766: attempt to index local 'representation' (a nil value)
stack traceback:
../util/lua/floorplan_utils.lua:5766: in function 'getSegmentation'
./models/heatmap-segmentation-dataloader.lua:71: in function <./models/heatmap-segmentation-dataloader.lua:64>
[C]: in function 'xpcall'
...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
/home/ubuntu/torch/install/share/lua/5.1/threads/queue.lua:65: in function </home/ubuntu/torch/install/share/lua/5.1/threads/queue.lua:41>
[C]: in function 'pcall'
/home/ubuntu/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
[string " local Queue = require 'threads.queue'..."]:15: in main chunk
stack traceback:
[C]: in function 'error'
...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
./models/heatmap-segmentation-dataloader.lua:127: in function '(for generator)'
./models/heatmap-segmentation-train.lua:65: in function 'train'
main.lua:69: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
Hi, will this work on a GeForce 720M with CUDA 10.0? If not, where can I train and build the model?
Hi! Is there a way to run a single prediction on the PyTorch implementation similar to the one with Lua?
I've attempted to benchmark the model using the evaluate.lua script, but when I run the evaluate command I get the error below. I installed all the libraries/packages according to the links in the readme, except OpenCV from VisionLabs (https://github.com/VisionLabs/torch-opencv) and lunatic-python from OddSource (https://github.com/OddSource/lunatic-python); the lunatic-python linked in the readme does not compile. Does anyone know what is going wrong?
/home/joacho/torch/install/bin/luajit: /home/joacho/torch/install/share/lua/5.1/trepl/init.lua:389: /home/joacho/torch/install/share/lua/5.1/trepl/init.lua:389: module 'python' not found:No LuaRocks module found for python
no field package.preload['python']
no file '../util/lua/python.lua'
no file '/home/joacho/.luarocks/share/lua/5.1/python.lua'
no file '/home/joacho/.luarocks/share/lua/5.1/python/init.lua'
no file '/home/joacho/torch/install/share/lua/5.1/python.lua'
no file '/home/joacho/torch/install/share/lua/5.1/python/init.lua'
no file './python.lua'
no file '/home/joacho/torch/install/share/luajit-2.1.0-beta1/python.lua'
no file '/usr/local/share/lua/5.1/python.lua'
no file '/usr/local/share/lua/5.1/python/init.lua'
no file '/home/joacho/.luarocks/lib/lua/5.1/python.so'
no file '/home/joacho/torch/install/lib/lua/5.1/python.so'
no file '/home/joacho/torch/install/lib/python.so'
no file './python.so'
no file '/usr/local/lib/lua/5.1/python.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/home/joacho/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
evaluate.lua:6: in main chunk
[C]: in function 'dofile'
...acho/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
Hi,
I compared the performance of the PyTorch and Lua models.
The PyTorch model was trained for 30 epochs, while the Lua model is the one provided by the author.
The same training data was used.
However, when I test the following image, the Lua model detects 33 walls while the PyTorch model detects only 11.
lua output:
(in which 33 walls are detected)
222 121 222 144 door 1 1
128 161 128 210 door 1 1
232 77 232 102 door 1 1
155 96 155 143 door 1 1
186 68 186 83 door 1 1
41 67 41 115 door 1 1
45 48 97 48 door 1 1
242 48 259 48 door 1 1
46 227 96 227 door 1 1
159 48 178 48 door 1 1
17 145 40 145 door 1 1
59 145 110 145 door 1 1
159 147 181 147 door 1 1
141 227 190 227 door 1 1
188 50 201 61 washing_basin 1 1
212 50 229 66 washing_basin 2 1
203 173 220 226 cooking_counter 1 1
234 51 273 73 bathtub 1 1
200 121 220 145 entrance 1 1
164 52 175 71 toilet 1 1
2 71.5 52 91.5 closet 1 1
73 85.5 123 105.5 bedroom 1 1
145 60 195 80 restroom 1 1
188.14285714286 82.247619047771 238.14285714286 102.24761904777 washing_room 1 1
166.52380952371 113.07619047614 216.52380952371 133.07619047614 corridor 1 1
227.5 65.5 277.5 85.5 bathroom 1 1
2 120 52 140 closet 1 1
46 175.5 96 195.5 bedroom 1 1
148.54046242775 177.54046242775 198.54046242775 197.54046242775 kitchen 1 1
203 117.666666667 222 117.666666667 wall 1 1
273.5 47.8666666667 273.5 104 wall 1 1
232 104 273.5 104 wall 1 1
232 104 232 117.666666667 wall 1 1
222 117.666666667 232 117.666666667 wall 1 1
185.5 47.8666666667 185.5 93 wall 1 1
155.333333333 93 185.5 93 wall 1 1
222 156.5 222 227.333333333 wall 1 1
128 227.333333333 222 227.333333333 wall 1 1
197.5 156.5 222 156.5 wall 1 1
197.5 146.733333333 197.5 156.5 wall 1 1
14 227.333333333 128 227.333333333 wall 1 1
14 144.6 14 227.333333333 wall 1 1
13.6666666667 47.6666666667 13.6666666667 116 wall 1 1
13.6666666667 47.6666666667 41.3333333333 47.6666666667 wall 1 1
232 47.8666666667 273.5 47.8666666667 wall 1 1
222 117.666666667 222 146.733333333 wall 1 1
185.5 47.8666666667 232 47.8666666667 wall 1 1
155.333333333 47.8666666667 185.5 47.8666666667 wall 1 1
232 47.8666666667 232 104 wall 1 1
41.3333333333 47.8666666667 155.333333333 47.8666666667 wall 1 1
41.3333333333 47.8666666667 41.3333333333 116 wall 1 1
128 144.6 155.333333333 144.6 wall 1 1
128 144.6 128 227.333333333 wall 1 1
41.3333333333 144.6 128 144.6 wall 1 1
155.333333333 47.8666666667 155.333333333 93 wall 1 1
197.5 146.733333333 222 146.733333333 wall 1 1
155.333333333 146.733333333 197.5 146.733333333 wall 1 1
222 146.733333333 222 156.5 wall 1 1
41.3333333333 116 41.3333333333 144.6 wall 1 1
13.6666666667 116 41.3333333333 116 wall 1 1
155.333333333 93 155.333333333 146.733333333 wall 1 1
13.6666666667 144.6 41.3333333333 144.6 wall 1 1
13.6666666667 116 13.6666666667 144.6 wall 1 1
pytorch output:
(in which only 11 walls are detected)
256 256
11
12.449275362318842 208.3768115942029 116.59958071278825 208.3768115942029 3 0
37.315602836879435 107.09478021978023 37.315602836879435 135.16484142914493 0 6
203.3 135.16484142914493 203.3 208.3768115942029 0 2
116.59958071278825 135.16484142914493 203.3 135.16484142914493 0 2
116.59958071278825 135.16484142914493 116.59958071278825 208.3768115942029 2 3
37.315602836879435 135.16484142914493 116.59958071278825 135.16484142914493 0 3
116.59958071278825 208.3768115942029 203.3 208.3768115942029 2 0
12.449275362318842 107.09478021978023 37.315602836879435 107.09478021978023 0 6
12.449275362318842 135.16484142914493 37.315602836879435 135.16484142914493 6 3
12.449275362318842 107.09478021978023 12.449275362318842 135.16484142914493 6 0
12.449275362318842 135.16484142914493 12.449275362318842 208.3768115942029 3 0
142 135.0 166 135.0 door 1 1
15 135.0 36 135.0 door 1 1
53 135.0 102 135.0 door 1 1
116.0 147 116.0 193 door 1 1
12.0 161 12.0 179 door 1 1
42 208.0 88 208.0 door 1 1
130 208.0 173 208.0 door 1 1
213 46 249 67 bathtub 1 1
186 159 201 207 cooking_counter 1 1
149 46 162 64 toilet 1 1
185 109 201 134 entrance 1 1
173 45 186 58 washing_basin 1 1
191 46 209 61 washing_basin 1 1
While the corners are all predicted correctly, the walls are predicted poorly.
Since the performance of the PyTorch model has not been verified, is it a bad model, is something wrong with the IP step, or should I train for more epochs?
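For anyone comparing the two dumps programmatically: the PyTorch output above starts with the image size, then a wall count, then one wall per line as `x1 y1 x2 y2 label1 label2`, followed by door/icon lines in the Lua-style format. A small parser for the wall section (the header layout is inferred from the paste above, so treat it as an assumption):

```python
def parse_walls(text):
    """Parse the wall section of a PyTorch-style dump: line 1 is
    'width height', line 2 is the wall count, then one
    'x1 y1 x2 y2 label1 label2' line per wall."""
    lines = text.strip().splitlines()
    width, height = map(int, lines[0].split())
    num_walls = int(lines[1])
    walls = []
    for line in lines[2:2 + num_walls]:
        x1, y1, x2, y2, l1, l2 = line.split()
        walls.append((float(x1), float(y1), float(x2), float(y2),
                      int(l1), int(l2)))
    return (width, height), walls

dump = """256 256
2
12.44 208.37 116.59 208.37 3 0
37.31 107.09 37.31 135.16 0 6
"""
size, walls = parse_walls(dump)
# → size == (256, 256), two walls parsed
```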
Thank you very much for publishing such excellent work! I would like to obtain the original R2V images in JPG format.
Also, the Google Drive URL requires requesting access permission before the trained model can be downloaded.
Hello,
I requested your pre-trained model via the Google Drive access link but got no response.
Is it possible to get the pre-trained model that was used to train this pipeline?
Thanks
|`-> (2): nn.Sequential {
| [input -> (1) -> (2) -> (3) -> output]
| (1): nn.Narrow
| (2): nn.Transpose
| (3): nn.View(-1, 13)
| }
`-> (3): nn.Sequential {
[input -> (1) -> (2) -> (3) -> output]
(1): nn.Narrow
(2): nn.Transpose
(3): nn.View(-1, 17)
}
... -> output
}
}
=> Setting up data loader
=> Loading trainer
=> Training epoch # 1
/home/ubuntu/torch/install/bin/luajit: /home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:67:
In 9 module of nn.Sequential:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 5 module of nn.Sequential:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 5 module of nn.Sequential:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 5 module of nn.Sequential:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 7 module of nn.Sequential:
...h/install/share/lua/5.1/cudnn/SpatialFullConvolution.lua:31: attempt to perform arithmetic on field 'groups' (a nil value)
stack traceback:
...h/install/share/lua/5.1/cudnn/SpatialFullConvolution.lua:31: in function 'resetWeightDescriptors'
...h/install/share/lua/5.1/cudnn/SpatialFullConvolution.lua:105: in function <...h/install/share/lua/5.1/cudnn/SpatialFullConvolution.lua:103>
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/ubuntu/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/ubuntu/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/ubuntu/torch/install/share/lua/5.1/nn/ConcatTable.lua:11: in function </home/ubuntu/torch/install/share/lua/5.1/nn/ConcatTable.lua:9>
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
...
/home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/ubuntu/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/ubuntu/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/ubuntu/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
./models/heatmap-segmentation-train.lua:71: in function 'train'
main.lua:69: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/ubuntu/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/ubuntu/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
./models/heatmap-segmentation-train.lua:71: in function 'train'
main.lua:69: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
(python2) ubuntu@ip-172-31-31-13:~/FloorplanTransformation/code$
Hi,
I want to use the Torch version with IP.py instead of QP.py, because the PyTorch version gives many prediction errors even after 1000 epochs.
I'd appreciate it if someone could help me with that.
Thanks
Hey, the DRN weights referenced by the webroot in pytorch/models/drn.py are no longer available. According to the issue in the original DRN repo, the webroot is now 'http://dl.yf.io/drn/'.
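Rather than editing each URL by hand, the checkpoint URLs in drn.py can be rebased onto the new webroot; the exact checkpoint filenames (upstream DRN uses hashed names like `drn_d_54-<hash>.pth`) are an assumption here and should be copied from the original drn.py:

```python
# New webroot reported in the upstream DRN repo's issue tracker.
WEBROOT = 'http://dl.yf.io/drn/'

def drn_url(filename):
    """Build a checkpoint URL under the new webroot. The filename
    (e.g. 'drn_d_54-0e0534ff.pth') must come from the original
    model_urls table in drn.py -- the name here is illustrative."""
    return WEBROOT + filename

url = drn_url('drn_d_54.pth')
# → 'http://dl.yf.io/drn/drn_d_54.pth'
```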
Are the default hyperparameters (command-line arguments) optimal for the model, or is there another hyperparameter set I should use to reproduce the paper's results? My batch size is only 8 because of GPU limitations. Could the batch size be the problem?
Hello.
I'm an engineer currently building an application using machine vision.
I was looking for ways to recognize floor plan images, found this project, and I'm really interested in using it.
My question is: what open-source license does this project have?
May I use this source code in our product, or is it for academic use only?
Thanks in advance.
~/torch/install/bin/luajit: ../util/lua/floorplan_utils.lua:2176: bad argument #2 to 'narrow' (out of range at ~/torch/pkg/torch/lib/TH/generic/THTensor.c:438)
stack traceback:
[C]: in function 'narrow'
../util/lua/floorplan_utils.lua:2176: in function 'drawRepresentationImage'
evaluate.lua:65: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50