frc5024 / InfiniteRecharge
The source and tooling behind FRC team 5024's 2020 competition robot
Home Page: http://cs.5024.ca/InfiniteRecharge/
License: MIT License
Trajectory following needs to be added to the robot.
This tutorial has the details on how to do this.
https://docs.wpilib.org/en/latest/docs/software/examples-tutorials/trajectory-tutorial/index.html
Some data is required from the drive base before this project can be started.
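As a rough idea of what the tutorial covers, here is a minimal sketch of generating a trajectory with WPILib's TrajectoryGenerator. The max velocity/acceleration numbers and the waypoints are placeholders, not real drive base data:
import java.util.List;

import edu.wpi.first.wpilibj.geometry.Pose2d;
import edu.wpi.first.wpilibj.geometry.Rotation2d;
import edu.wpi.first.wpilibj.geometry.Translation2d;
import edu.wpi.first.wpilibj.trajectory.Trajectory;
import edu.wpi.first.wpilibj.trajectory.TrajectoryConfig;
import edu.wpi.first.wpilibj.trajectory.TrajectoryGenerator;

public class TrajectoryExample {
    public static Trajectory makeExampleTrajectory() {
        // Placeholder limits; real values come from drive base characterization
        TrajectoryConfig config = new TrajectoryConfig(2.0, 1.0);

        // Drive 2m forward, passing through one interior waypoint
        return TrajectoryGenerator.generateTrajectory(
            new Pose2d(0.0, 0.0, new Rotation2d(0.0)), // Start at the origin, facing +X
            List.of(new Translation2d(1.0, 0.5)),      // Interior waypoint
            new Pose2d(2.0, 0.0, new Rotation2d(0.0)), // End pose
            config);
    }
}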
Tiet would like the robot to vibrate the driver controller whenever the robot is in an optimal scoring position and the superstructure is stowed. This way, they can drive near the goal, get notified, press a button, and the robot automatically aligns, spins up, and waits for the operator to press the "shoot" button.
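A minimal sketch of the rumble part, assuming an XboxController on port 0 and hypothetical isInScoringPosition/isSuperstructureStowed checks:
import edu.wpi.first.wpilibj.GenericHID.RumbleType;
import edu.wpi.first.wpilibj.XboxController;

public class RumbleExample {
    private final XboxController m_driverController = new XboxController(0);

    // Rumble the driver controller when we are lined up and stowed
    public void updateRumble(boolean isInScoringPosition, boolean isSuperstructureStowed) {
        double strength = (isInScoringPosition && isSuperstructureStowed) ? 1.0 : 0.0;
        m_driverController.setRumble(RumbleType.kLeftRumble, strength);
        m_driverController.setRumble(RumbleType.kRightRumble, strength);
    }
}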
The Limelight needs to be updated to version 2020.1.
The limelight will need to be updated to the 2020 firmware. Not sure when this will be released, but once it is, follow this guide to update the Limelight:
http://docs.limelightvision.io/en/latest/getting_started.html#imaging
Check here for new versions (I think we are currently running 2019.7):
http://docs.limelightvision.io/en/latest/software_change_log.html
We need to start stubbing out all our subsystems. They don't have to be registered yet; just make them classes, and make them singletons.
Here is a list of each subsystem, and whether it has been stubbed yet:
Robot component | Subsystem name | Already stubbed? |
---|---|---|
Drivebase | DriveTrain | YES |
Intake | Intake | NO |
Control panel manipulator | PanelManipulator | YES ( see #33 ) |
Climber | Climber | NO |
I have left out the ball shooter until we have a more finalized design for it.
Please create stubs for each of the new ones, and make a PR for us to start working from for other tasks.
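A stub might look roughly like this (using Intake as the example; the exact singleton pattern should match what DriveTrain already uses):
package frc.robot.subsystems;

import edu.wpi.first.wpilibj2.command.SubsystemBase;

/**
 * Stub for the Intake subsystem (no hardware logic yet)
 */
public class Intake extends SubsystemBase {
    private static Intake s_instance = null;

    private Intake() {
        // TODO: Configure motors and sensors once the design is finalized
    }

    /**
     * Get the single Intake instance
     */
    public static Intake getInstance() {
        if (s_instance == null) {
            s_instance = new Intake();
        }
        return s_instance;
    }

    @Override
    public void periodic() {
        // TODO: Method stub
    }
}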
I think we can improve the janky autonomous pathing with the help of pure pursuit.
I'll try to implement a controller
wpilibsuite/PathWeaver still has not been updated for 2020. We will have to generate trajectories manually (not a big deal, but it is nice to have a tool help us).
@wm-c @rsninja722 Do you think frc5024/PointPlanner is ready for use to generate waypoints for now?
We need a shuffleboard tab with options to configure Autonomous
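A minimal sketch of what this could look like, using WPILib's SendableChooser on its own "Autonomous" Shuffleboard tab (the tab name and the two placeholder commands are assumptions, not decided yet):
import edu.wpi.first.wpilibj.shuffleboard.Shuffleboard;
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.InstantCommand;

public class AutonomousTab {
    private final SendableChooser<Command> m_autoChooser = new SendableChooser<>();

    public AutonomousTab() {
        // Placeholder commands; these would be real autonomous routines
        m_autoChooser.setDefaultOption("Do nothing", new InstantCommand());
        m_autoChooser.addOption("Drive forward", new InstantCommand());

        // Put the chooser on its own Shuffleboard tab
        Shuffleboard.getTab("Autonomous").add("Mode", m_autoChooser);
    }

    public Command getSelectedAuto() {
        return m_autoChooser.getSelected();
    }
}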
When moving the robot forwards 100cm, it will register ~75cm.
I think this is an issue in our PPR calculation.
I wrote them all in a rush to get a demo working. They now need documentation and cleanup.
We are having weird issues with the top sensor on the climber. Can someone check to see what value it is reporting when not tripped?
Add controller (Talon SRX for now)
JRADController takes in voltage.
Methods Needed:
I think we can hook into the Limelight's REST API to allow config upload and download via gradle.
I'm going to stub out drive code, just so we can have a drivetrain moving early.
Version 2020.1.1 has an issue with PathWeaver, where path generation is broken. We should update to 2020.1.2, as this has been fixed.
On the control panel, there are 4 different colours that appear twice each. To detect which colour is currently under the Control Panel Sensor, we are using the Rev Robotics Color Sensor V3.
This sensor returns a 4-channel RGBA value ((0-255), (0-255), (0-255), (0-255)) and a proximity value (0-10).
The sensor also has an LED that can be toggled. It is a white LED that shines on the object/colour it is facing. Find out if this LED does any good or just makes things harder.
Get outputs from the sensor and use its data along with the colour comparison utility to detect which color is on the wheel.
This is being worked on in color-sensor branch
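This isn't the team's colour comparison utility, but for reference, a minimal sketch using the REV ColorMatch helper could look like this (the calibrated RGB values are placeholders and need to be tuned against the real wheel):
import com.revrobotics.ColorMatch;
import com.revrobotics.ColorMatchResult;
import com.revrobotics.ColorSensorV3;

import edu.wpi.first.wpilibj.I2C;
import edu.wpi.first.wpilibj.util.Color;

public class PanelColorDetector {
    private final ColorSensorV3 m_sensor = new ColorSensorV3(I2C.Port.kOnboard);
    private final ColorMatch m_matcher = new ColorMatch();

    // Placeholder target colours; calibrate these on the real control panel
    private final Color kBlue = ColorMatch.makeColor(0.14, 0.43, 0.43);
    private final Color kGreen = ColorMatch.makeColor(0.20, 0.56, 0.24);
    private final Color kRed = ColorMatch.makeColor(0.56, 0.23, 0.11);
    private final Color kYellow = ColorMatch.makeColor(0.36, 0.52, 0.11);

    public PanelColorDetector() {
        m_matcher.addColorMatch(kBlue);
        m_matcher.addColorMatch(kGreen);
        m_matcher.addColorMatch(kRed);
        m_matcher.addColorMatch(kYellow);
    }

    /**
     * Read the sensor and return the closest known control panel colour
     */
    public Color getDetectedColor() {
        ColorMatchResult match = m_matcher.matchClosestColor(m_sensor.getColor());
        return match.color;
    }
}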
One of the first things we need to do is get the robot's drivebase moving. This issue plays a key part in the process.
This issue is written a bit like a tutorial, just to make sure everyone knows what they are doing. Future issues will be less hand-hold-y
Here is a quick outline of all that needs to get done:
This is just a reminder to make, and switch to a new branch of the project before starting work. If you do not know how to do this, ask a returning team member.
All of the work outlined in this issue is to be done in the DriveTrain class located at src/main/java/frc/robot/subsystems/DriveTrain.java.
We will need a RobotLogger instance for the DriveTrain (I forgot to add it when setting up the project). Just create a RobotLogger object at the top of the DriveTrain source. It should look something like this:
public class DriveTrain extends SubsystemBase {
    private RobotLogger logger = RobotLogger.getInstance();
    ...
We will talk about autonomous control a little later (probably week 2). It will require the DriveTrain to behave a little differently than when the drivers are controlling the robot. Because of this, we will have to give the DriveTrain a way to know which "mode" it is running in. We can do this with a simple enum.
For now, I think we'll only need to define 2 modes:
These modes can be defined with the following enum. Remember to comment the code (please). For the new members, this should be defined above the class constructor (private DriveTrain()).
/**
 * Drive control modes
 */
public enum DriveMode {
    OPEN_LOOP, // Open loop control (percent output control)
    VOLTAGE // Voltage control
}
We will also want a private variable just to keep track of the current mode:
// Keep track of the current DriveMode
private DriveMode m_currentDriveMode = DriveMode.OPEN_LOOP;
Before starting, we will need another private variable to keep track of the current DriveSignal. Name it m_currentSignal.
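Assuming it should start out as a zero signal, the declaration might look like:
// Keep track of the current DriveSignal (assumed to start as a zero signal)
private DriveSignal m_currentSignal = new DriveSignal(0, 0);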
We need a method to actually specify our inputs. We'll start with the Open-Loop input; voltage control is very similar. I'll write the Open-Loop controller as an example:
/**
 * Set the Open loop control signal. The values of this signal should be in the
 * range of [-1.0, 1.0]
 *
 * @param signal Open loop signal
 */
public void setOpenLoop(DriveSignal signal) {
    // Force-set the mode if not already set
    if (m_currentDriveMode != DriveMode.OPEN_LOOP) {
        // Enable motor brakes
        setBrakes(true);
        // Log the state change
        logger.log("DriveTrain", String.format("Set control mode to OPEN_LOOP with signal: %s", signal.toString()));
        // Set the new state
        m_currentDriveMode = DriveMode.OPEN_LOOP;
    }
    // Set the current DriveTrain signal
    m_currentSignal = signal;
}
Make sure to also add a method called setVoltage. It should be similar, except all references to OPEN_LOOP should be changed to VOLTAGE, and the brakes value should be false.
Make sure to update the javadoc at the top. The setVoltage method will take values from -12.0 to 12.0.
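Following those substitutions, setVoltage should end up looking roughly like this:
/**
 * Set the Voltage control signal. The values of this signal should be in the
 * range of [-12.0, 12.0]
 *
 * @param signal Voltage signal
 */
public void setVoltage(DriveSignal signal) {
    // Force-set the mode if not already set
    if (m_currentDriveMode != DriveMode.VOLTAGE) {
        // Disable motor brakes
        setBrakes(false);
        // Log the state change
        logger.log("DriveTrain", String.format("Set control mode to VOLTAGE with signal: %s", signal.toString()));
        // Set the new state
        m_currentDriveMode = DriveMode.VOLTAGE;
    }
    // Set the current DriveTrain signal
    m_currentSignal = signal;
}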
WPILib also requires a slightly different method. Just overload setVoltage (create another method with the exact same name) and make it take in two double values called "left" and "right". With these values, create a DriveSignal object and pass it into the other setVoltage method. This can be done in one line:
setVoltage(new DriveSignal(left, right));
Remember: Comment, and add a javadoc to each method
We need a method stub for ramp rate control. Just call it setRampRate, and make it take a double. Also, add a comment that says something like TODO: Method stub. We will fill this out once we have sorted out our motor controllers.
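The stub might look like this (the parameter name is just a placeholder):
/**
 * Set the drivetrain ramp rate
 *
 * @param rampTimeSeconds Ramp rate
 */
public void setRampRate(double rampTimeSeconds) {
    // TODO: Method stub
}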
Add a method called stop, and make it call setOpenLoop with a DriveSignal of zero (new DriveSignal(0, 0)).
Call this in Robot.java in the disabledInit method near the bottom. Just call m_driveTrain.stop(); (and add a comment describing why it's there).
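Put together, that looks something like this (the Robot.java snippet assumes m_driveTrain is the DriveTrain instance already set up there):
/**
 * Stop the drivetrain
 */
public void stop() {
    setOpenLoop(new DriveSignal(0, 0));
}

And in Robot.java:
@Override
public void disabledInit() {
    // Stop the drivetrain so the robot does not hold its last output when re-enabled
    m_driveTrain.stop();
}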
We need a way to let the drivers control the bot. I have already stubbed out a drive method. Just add the following logic for me:
// Square inputs
speed = InputUtils.scale(speed, ScalingMode.SQUARED);
rotation = InputUtils.scale(rotation, ScalingMode.SQUARED);
// Compute a DriveSignal from inputs
DriveSignal signal = DifferentialDriveCalculation.semiConstCurve(speed, rotation);
// Set the signal
setOpenLoop(signal);
This will square the inputs, then calculate, and set a DriveSignal.
We have no logic to do for each state yet, so just add this statement to the periodic method:
// Handle motor outputs for each mode
switch (m_currentDriveMode) {
    case OPEN_LOOP:
        // Set Open loop outputs for motors
        // TODO: Set outputs here (reading from m_currentSignal)
        break;
    case VOLTAGE:
        // Set Voltage outputs for motors
        // TODO: Set outputs here (reading from m_currentSignal)
        break;
    default:
        // This code should never run, but if it does, we set the mode to OPEN_LOOP, and
        // the outputs to 0
        setOpenLoop(new DriveSignal(0, 0));
}
Once this is finished, make sure it builds, then open a Pull Request on GitHub from your branch into master. I'll review it, then merge it.
On January 4th, we need to download the latest WPILib, RoboRIO image, and DriverStation. Then generate a new gradle project for the year. After this, we should drop in the latest Lib5K.
My laptop apparently has issues building gradle wrappers, so a Windows user should do this to save us some headache. I'll handle Lib5K.
Create a new class (singleton) at frc.robot.GameData with methods for dealing with game state information.
It should expose the following methods:
public GameStage getGameStage();
This enum should have the following:
Figure out which stage we are in:
@slownie and @catarinaburghi want a camera to be aimed up at the climber for alignment during endgame.
This should
In simulation, the robot voltage is defaulted to 0 volts, not 12. This can be fixed by adding a method to RR_HAL called something like getSimSafeVoltage that does some logic like this:
# Return 12 volts if in sim, and voltage is currently reported as 0v
if isSimulation and robotVoltage == 0.0:
return 12.0
# Otherwise, return the real voltage
return robotVoltage
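In Java, that helper might look roughly like the sketch below. The method name and its placement in RR_HAL are just what the issue suggests; reading the voltage from RobotController is an assumption:
import edu.wpi.first.wpilibj.RobotBase;
import edu.wpi.first.wpilibj.RobotController;

public class RR_HALExample {
    /**
     * Get the robot voltage, substituting 12V when the simulator reports 0V
     */
    public static double getSimSafeVoltage() {
        double robotVoltage = RobotController.getBatteryVoltage();

        // Return 12 volts if in sim, and voltage is currently reported as 0V
        if (RobotBase.isSimulation() && robotVoltage == 0.0) {
            return 12.0;
        }

        // Otherwise, return the real voltage
        return robotVoltage;
    }
}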
wpilibsuite/allwpilib@bc159a9 fixes this issue, but will not be published till the end of the week.
We want to be like 254: https://media.team254.com/resources/Team_254_Tech_Binder_2019.pdf
When you are finished designing your subsystem, add your documentation to the docs/technicalbinder folder.
Add PyFRC-style vision simulation so we can test alignment in fieldsim.
Best idea currently:
intake with rollers at front, separate into 2 sides
goes through bot on an angle to height of low port
stop balls with barrier until output
belts on top and bottom of tubes going through
We will want to publish a javadoc for this repo, preferably triggered by a GitHub action and published to its own branch. This should be done after #3 is complete.
theoretical code for intake subsystem
This repo should have CI pipelines. This should wait for a Gradle project to be generated first
The SendableBase class is now deprecated. The Lib5K components that use it must be updated to use the new SendableRegistry.
The affected classes are:
Related to #103
Need to add tech doc for:
For the first week or two of build season, we will be using MiniBot to drive prototyping. Make sure its RIO is up to date.
Before imaging, you will need the latest NI tools installed.
Download link:
https://www.ni.com/en-ca/support/downloads/drivers/download.frc-game-tools.html#333285
Installation instructions:
https://docs.wpilib.org/en/latest/docs/getting-started/getting-started-frc-control-system/frc-game-tools.html
You will also need to install the Phoenix framework. Follow the instructions for "Option 1":
https://phoenix-documentation.readthedocs.io/en/latest/ch05_PrepWorkstation.html#what-to-download-and-why
A guide on imaging a RoboRIO can be found here: https://docs.wpilib.org/en/latest/docs/getting-started/getting-started-frc-control-system/imaging-your-roborio.html
We use the Phoenix framework to interface with motor controllers. This must be re-installed every time the RIO is updated.
Follow the instructions (and skip anything mentioning LabVIEW) here to set it up: https://phoenix-documentation.readthedocs.io/en/latest/ch06_PrepRobot.html
We should have a CODEOWNERS file for this project
Can someone make sure the build team gets a proper vision target built in the first week?
Just make sure it gets done, then make a simple Limelight profile to find the target for me.
Make sure the Limelight has a well-defined image of the shooter goal and isn't affected by the room lights.
Make sure "press button to aim" works correctly, and test it after the Limelight is set up.
I'm thinking we have a Limelight mounted on the top of the robot, and an MS webcam on the front, or facing the front.
For the limelight, we could use a servo to be able to flip it from "level" to "raised", so it can be used as a driver camera when it is facing the front, then switch to vision mode when facing upwards.
The MS cam would be fixed facing forwards as a "POV Intake cam"
If we cannot get a servo to work, we should mount the Limelight in a fixed position angled up for vision, and as a POV cam for climb.
Can someone work with build on this, and make sure the cameras are mounted well? We may need to play with the Limelight's placement to get it "just right"
If we have a servo, we will need some code (maybe a Vision subsystem) with methods like:
void setCameraMode( <driver or vision> );
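A rough skeleton of that subsystem, assuming a servo on a placeholder PWM channel and using the Limelight's camMode NetworkTables entry to switch between driver and vision views (the angles are placeholders until the mount exists):
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Servo;

public class Vision {
    public enum CameraMode {
        DRIVER, // Camera level, Limelight in driver-view mode
        VISION  // Camera tilted up, Limelight processing targets
    }

    private final NetworkTable m_limelightTable = NetworkTableInstance.getDefault().getTable("limelight");

    // Placeholder PWM channel and tilt angles until the mount is built
    private final Servo m_tiltServo = new Servo(0);
    private static final double kDriverAngle = 0.0;
    private static final double kVisionAngle = 45.0;

    /**
     * Switch the Limelight between driver-camera and vision-processing modes
     */
    public void setCameraMode(CameraMode mode) {
        if (mode == CameraMode.DRIVER) {
            m_tiltServo.setAngle(kDriverAngle);
            // camMode 1 = driver camera (vision processing off)
            m_limelightTable.getEntry("camMode").setNumber(1);
        } else {
            m_tiltServo.setAngle(kVisionAngle);
            // camMode 0 = vision processing
            m_limelightTable.getEntry("camMode").setNumber(0);
        }
    }
}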
I looked over the generated output code to see how it actually works. As far as I can tell, the settings that need to be changed are:
encoderPPR should be 1440, not 360 (this will need to be changed again when we get new encoders).
@rsninja722 seems to use a different locale than the rest of us. This causes this error and others in files he has worked in. Can someone clean it all up?
/github/workspace/src/main/java/frc/lib5k/components/limelight/Limelight.java:143: error: unmappable character (0x99) for encoding US-ASCII
* Sets limelight???s streaming mode
Here is a link to a CI build log with a list of every time this error has occurred:
https://hastebin.com/raw/zonewesiqo
The errors are reported at the very bottom of this file
We need a climber on this robot.
Talk to build members and get all the specifications for said climber.
All of MiniBot's CAN devices will need a firmware update. Grab the latest firmware zip files for each device:
PDP (Only get this if the latest version is above 1.40):
http://www.ctr-electronics.com/pdp.html#product_tabs_technical_resources
PCM (Only get this if the latest version is above 1.65. Make sure to get the FRC version):
http://www.ctr-electronics.com/pcm.html#product_tabs_technical_resources
Once all firmware files are downloaded, open up Phoenix Tuner (follow the installation instructions in #6), and go to the list of devices (while plugged in to the RIO). Update each device.
Follow this guide:
https://phoenix-documentation.readthedocs.io/en/latest/ch08_BringUpCAN.html#field-upgrade-devices
Can we reverse the intake to put the balls back on the field, possibly for another robot to pick up?
We are running into an issue where the robot behaves incorrectly in various ways during autonomous. These are some of the behaviours I have observed:
Targeting the shooter and hopper movement should work well together.
Comp bot's radio (router) must be updated to the latest firmware.
A complete guide on flashing the radio can be found here: https://docs.wpilib.org/en/latest/docs/getting-started/getting-started-frc-control-system/radio-programming.html
Make sure to use the following settings for the radio:
Team: 5024
WPA Key: raiderrobotics
Radio: OpenMesh
Mode: 2.4GHz Access Point
Robot Name: CarterIsChaotic
Firewall: <un-checked>
BW Limit: <checked>
Add the State Machine Diagrams to the draw.io file.
If you are not added, feel free to message me to be added.
Stub the shooter and make it a singleton
This issue is to keep track of progress of making the ColorSensorV3 do what we want it to do.
If anyone has ideas for things to add to the ColorSensor code or the PanelManipulator in general, talk about it here.
Tiet has requested an interface that will let us send motor commands to specific motors via driverstation. We can probably use LiveWindow?
Create paths for the autonomous to follow.
Use the following criteria to determine what the autonomous will do:
We should also look into using Shuffleboard to pass in arguments.