
DQN-Series Deep Reinforcement Learning Papers

Uploader: weixin_43333326 | Uploaded: 2024/6/6 11:12:06 | File size: 69.27MB | File type: RAR

A collection of deep reinforcement learning papers, covering the original DQN, DQN model improvements, DQN algorithm improvements, hierarchical DRL, policy-gradient-based deep reinforcement learning, and more. Most of the papers come from top-tier conferences.


Resource Details

51 files, 69.27MB in total.

DQN Algorithm Improvements:
- Dynamic Frameskip Deep Q Network (588.35KB)
- Increasing the Action Gap: New Operators for Reinforcement Learning (979.22KB)
- Dueling Network Architectures for Deep Reinforcement Learning (672.37KB)
- Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening (1.18MB)
- Safe and Efficient Off-Policy Reinforcement Learning (556.93KB)
- Massively Parallel Methods for Deep Reinforcement Learning (2.71MB)
- Prioritized Experience Replay (1.61MB)
- Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning (920.65KB)
- Deep Reinforcement Learning with Double Q-learning (770.57KB)
- Deep Exploration via Bootstrapped DQN (6.56MB)
- Learning functions across many orders of magnitudes (803.88KB)
- The Predictron: End-To-End Learning and Planning (1.74MB)
- How to Discount Deep Reinforcement Learning: Towards New Dynamic Strategies (1.02MB)
- State of the Art Control of Atari Games Using Shallow Reinforcement Learning (802.04KB)

DQN Model Improvements:
- Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation (1.31MB)
- Strategic Attentive Writer for Learning Macro-Actions (718.23KB)
- Progressive Neural Networks (4.08MB)
- Language Understanding for Text-based Games Using Deep Reinforcement Learning (597.91KB)
- Recurrent Reinforcement Learning: A Hybrid Approach (430.63KB)
- Value Iteration Networks (525.18KB)
- Deep Recurrent Q-Learning for Partially Observable MDPs (823.38KB)
- MazeBase: A Sandbox for Learning from Games (394.73KB)
- Control of Memory, Active Perception, and Action in Minecraft (7.74MB)
- Deep Attention Recurrent Q-Network (308.84KB)
- Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks (1000.42KB)

Policy-Gradient-Based Deep Reinforcement Learning:
- Deep Reinforcement Learning in Parameterized Action Space (559.33KB)
- Efficient Exploration for Dialogue Policy Learning with BBQ Networks & Replay Buffer Spiking (657.07KB)
- Combining policy gradient and Q-learning (1.19MB)
- Learning Deep Control Policies for Autonomous Aerial Vehicles with MPC-Guided Policy Search (860.74KB)
- Sample Efficient Actor-Critic with Experience Replay (1.38MB)
- Deterministic Policy Gradient Algorithms (335.61KB)
- End-to-End Training of Deep Visuomotor Policies (4.51MB)
- Trust Region Policy Optimization (1000.39KB)
- Continuous control with deep reinforcement learning (648.14KB)
- Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies (1) (1.04MB)
- Interactive Control of Diverse Complex Characters with Neural Networks (882.15KB)
- Memory-based control with recurrent neural networks (677.66KB)
- Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies (1.04MB)
- Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic (830.90KB)
- Learning Continuous Control Policies by Stochastic Value Gradients (834.26KB)
- Continuous Deep Q-Learning with Model-based Acceleration (1.63MB)
- Terrain-Adaptive Locomotion Skills Using Deep Reinforcement Learning (8.41MB)
- Gradient Estimation Using Stochastic Computation Graphs (433.09KB)
- Benchmarking Deep Reinforcement Learning for Continuous Control (1.17MB)
- High-Dimensional Continuous Control Using Generalized Advantage Estimation (1.71MB)

Hierarchical DRL:
- Stochastic Neural Networks for Hierarchical Reinforcement Learning (3.08MB)
- Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation (1.31MB)
- Deep Successor Reinforcement Learning (2.14MB)
- Hierarchical Reinforcement Learning using Spatio-Temporal Abstractions and Deep Neural Networks (1.15MB)

Foundational DQN Papers:
- Playing Atari with Deep Reinforcement Learning (425.39KB)
- Human-level control through deep reinforcement learning (4.39MB)

